Amine Merzouk - Academia.edu

Amine Merzouk

Uploads

Papers by Amine Merzouk

Research paper thumbnail of A Deeper Analysis of Adversarial Examples in Intrusion Detection

During the last decade, machine learning algorithms have become a central part of the defense arsenal available to security professionals, especially for intrusion detection. Despite the progress made in this area, however, machine learning models have been found to be vulnerable to slightly modified data samples called adversarial examples: a small, well-computed perturbation may allow adversaries to evade intrusion detection systems. Numerous works have successfully applied adversarial examples to network intrusion detection datasets, yet little attention has been given so far to the practicality of these examples in end-to-end network attacks. In this paper, we study the applicability of attacks based on adversarial examples in real networks. We closely analyze adversarial examples generated with state-of-the-art algorithms and evaluate their consistency against several criteria. Our results show a large proportion of invalid examples that are unlikely to lead to real attacks.
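The validity problem the abstract describes can be illustrated with a minimal sketch: an FGSM-style evasion step against a toy linear classifier, followed by domain-consistency checks of the kind the paper argues are needed. All feature names, weights, and bounds below are illustrative assumptions, not the paper's actual model or dataset.

```python
import numpy as np

# Toy linear "IDS": flags a flow as an attack when sigmoid(w.x + b) > 0.5.
# Features (assumed): [duration_s, pkt_count, bytes_per_pkt].
w = np.array([0.3, 0.05, 0.004])
b = -3.0

def predict(x):
    """Probability that the flow is an attack (sigmoid of the linear score)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A malicious flow that the model detects.
x = np.array([10.0, 50.0, 600.0])

# FGSM-style step: for a linear model the gradient of the score w.r.t. the
# input is just w, so the evasion perturbation subtracts eps * sign(w).
eps = 16.0
x_adv = x - eps * np.sign(w)

def is_valid(flow):
    """Domain-consistency checks (illustrative): many generated adversarial
    examples violate constraints like these in real network feature spaces."""
    duration, pkts, bpp = flow
    return (
        duration >= 0.0                               # time cannot be negative
        and pkts >= 1 and float(pkts).is_integer()    # packet count: positive integer
        and 0.0 <= bpp <= 65535.0                     # payload size within sane bounds
    )

print(predict(x) > 0.5)    # original flow is detected
print(predict(x_adv) < 0.5)  # perturbed flow evades the model
print(is_valid(x_adv))     # ...but the perturbed flow is not a realizable flow
```

Here the perturbation that evades the classifier also drives the flow duration negative, so the "successful" adversarial example could never correspond to real traffic — the same kind of inconsistency the paper measures across state-of-the-art generation algorithms.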
