Mitigation of Black-Box Attacks on Intrusion Detection Systems-Based ML
Related papers
Adversarial attacks against supervised machine learning based network intrusion detection systems
PLOS ONE, 2022
Adversarial machine learning is a recent area of study that explores both adversarial attack strategies and the detection of adversarial attacks, which are inputs specially crafted to outwit the classification of detection systems or to disrupt their training process. In this research, we performed two adversarial attack scenarios, using a Generative Adversarial Network (GAN) to generate synthetic intrusion traffic and test the influence of these attacks on the accuracy of machine learning-based Intrusion Detection Systems (IDSs). We conducted two experiments covering poisoning and evasion attacks on two different types of machine learning models: Decision Tree and Logistic Regression. The performance of the implemented attack scenarios was evaluated on the CICIDS2017 dataset by comparing the accuracy of the machine learning-based IDS before and after each attack. The results show that the proposed evasion attacks reduced the testing accuracy of both network intrusion detection system (NIDS) models, with the decision tree more strongly affected than logistic regression. Conversely, our poisoning attack scenario disrupted the training process of the machine learning-based NIDS, with logistic regression more strongly affected than the decision tree.
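The before/after accuracy protocol this paper uses can be sketched in a few lines. Below, simple label flipping on synthetic data stands in for the GAN-generated poison traffic, and the 20% poisoning rate is a hypothetical choice, not the paper's setting.

```python
# A minimal sketch of the before/after-poisoning comparison described above.
# The paper crafts poison samples with a GAN on CICIDS2017; here we swap in
# simple label flipping on synthetic data purely to illustrate the protocol.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Poison 20% of the training labels (hypothetical rate; the paper's differs).
rng = np.random.default_rng(0)
poison = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[poison] = 1 - y_poisoned[poison]

for name, model in [("DecisionTree", DecisionTreeClassifier(random_state=0)),
                    ("LogisticRegression", LogisticRegression(max_iter=1000))]:
    clean_acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    poisoned_acc = model.fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"{name}: clean={clean_acc:.3f}, poisoned={poisoned_acc:.3f}")
```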
Computers, Materials & Continua, 2022
Intrusion detection systems play an important role in defending networks from security breaches. End-to-end machine learning-based intrusion detection systems are being used to achieve high detection accuracy. However, adversarial attacks, which cause misclassification by introducing imperceptible perturbations into input samples, greatly degrade the performance of machine learning-based intrusion detection systems. Though such problems have been widely discussed in the image processing domain, very few studies have investigated network intrusion detection systems and proposed corresponding defences. In this paper, we attempt to fill this gap by mounting adversarial attacks on standard intrusion detection datasets and then using the adversarial samples to train various machine learning algorithms (adversarial training) to test their defensive performance. This is achieved by first creating adversarial samples based on the Jacobian-based Saliency Map Attack (JSMA) and the Fast Gradient Sign Method (FGSM) using the NSLKDD, UNSW-NB15 and CICIDS17 datasets. The study then trains and tests on JSMA- and FGSM-based adversarial examples under both seen attacks (where the model has been trained on adversarial samples) and unseen attacks (where the model is unaware of adversarial packets). The experiments include multiple machine learning classifiers to evaluate their performance against adversarial attacks. The performance parameters include accuracy, F1-score and area under the receiver operating characteristic curve (AUC) score.
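As a rough illustration of one of the two attacks, here is a minimal NumPy sketch of FGSM against a logistic-regression detector; the epsilon value, the synthetic data, and the choice of surrogate model are assumptions, not the paper's setup.

```python
# A minimal NumPy sketch of FGSM against a logistic-regression detector.
# The paper evaluates several classifiers and three NIDS datasets; the
# epsilon value and the surrogate model here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def fgsm(clf, X, y, eps=0.1):
    """x_adv = x + eps * sign(dLoss/dx) for binary cross-entropy loss.
    For logistic regression, dLoss/dx = (sigmoid(w.x + b) - y) * w."""
    p = clf.predict_proba(X)[:, 1]
    grad = (p - y)[:, None] * clf.coef_[0][None, :]
    return X + eps * np.sign(grad)

X_adv = fgsm(clf, X, y, eps=0.1)
print("clean accuracy:      ", clf.score(X, y))
print("adversarial accuracy:", clf.score(X_adv, y))
```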
Adversarial Machine Learning Attacks and Defenses in Network Intrusion Detection Systems
International Journal of Wireless and Microwave Technologies, 2022
Machine learning is now being used for applications ranging from healthcare to network security. However, machine learning models can easily be fooled into making mistakes using adversarial machine learning attacks. In this article, we focus on evasion attacks against Network Intrusion Detection Systems (NIDS), and specifically on designing novel adversarial attacks and defenses using adversarial training. We propose white-box attacks against intrusion detection systems; under these attacks, the detection accuracy of the model suffers significantly. We also propose a defense mechanism based on training augmented with adversarial samples. The biggest advantage of the proposed defense is that it requires neither modification of the deep neural network architecture nor additional hyperparameter tuning. Moreover, the gain in accuracy was significant even when only a very small number of adversarial samples was used to train the deep neural network.
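A minimal PyTorch sketch of this adversarial-sample-augmented training follows; the network shape, the epsilon, and the use of FGSM to craft the augmenting samples are illustrative assumptions rather than the paper's exact configuration.

```python
# A minimal PyTorch sketch of adversarial-sample-augmented training, the
# defense described above. Network shape, epsilon, and the use of FGSM to
# craft the augmenting samples are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm(x, y, eps=0.1):
    """Craft FGSM examples: x + eps * sign(dLoss/dx)."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def train_step(x, y):
    # Augment each clean batch with its adversarial counterpart;
    # no architecture change or extra hyperparameter is needed.
    x_aug = torch.cat([x, fgsm(x, y)])
    y_aug = torch.cat([y, y])
    opt.zero_grad()
    loss = loss_fn(model(x_aug), y_aug)
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))
print("batch loss:", train_step(x, y))
```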
IEEE Xplore, 2023
Network Intrusion Detection System (NIDS) is an essential tool in securing cyberspace from a variety of security risks and unknown cyberattacks. A number of solutions have been implemented for Machine Learning (ML) and Deep Learning (DL) based NIDS. However, all these solutions are vulnerable to adversarial attacks, in which the malicious actor tries to evade or fool the model by injecting adversarially perturbed examples into the system. The main aim of this research work is to study powerful adversarial attack algorithms and a corresponding defence method for DL-based NIDS. Fast Gradient Sign Method (FGSM), Jacobian Saliency Map Attack (JSMA), Projected Gradient Descent (PGD) and Carlini & Wagner (C&W) are four powerful adversarial attack methods implemented against the NIDS. As a defence, adversarial training is used to increase the robustness of the NIDS model. The results are summarized in three phases, i.e., 1) before the adversarial attack, 2) after the adversarial attack, and 3) after the adversarial defence. The Canadian Institute for Cybersecurity Intrusion Detection System 2017 (CICIDS-2017) dataset is used for evaluation purposes with various performance measures such as F1-score and accuracy.
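Of the four attacks, PGD can be sketched compactly as iterated FGSM with projection back onto an epsilon-ball around the input. The step size, epsilon, and iteration count below are illustrative assumptions; FGSM, JSMA, and C&W are omitted.

```python
# A minimal PyTorch sketch of the PGD attack mentioned above. Step size,
# epsilon, and iteration count are illustrative assumptions.
import torch
import torch.nn as nn

def pgd(model, x, y, eps=0.1, alpha=0.01, steps=40):
    """Iterated FGSM with projection back onto the eps-ball around x."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto eps-ball
        x_adv = x_adv.detach()
    return x_adv

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))
x_adv = pgd(model, x, y)
print("max L-inf perturbation:", (x_adv - x).abs().max().item())
```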
Neural Network World
In this paper, a defence mechanism against adversarial attacks is proposed. The defence is based on an ensemble classifier that is adversarially trained. This is accomplished by generating adversarial attacks from four different attack methods, i.e., the Jacobian-based saliency map attack (JSMA), projected gradient descent (PGD), the momentum iterative method (MIM), and the fast gradient sign method (FGSM). The adversarial examples are used to identify the robust machine-learning algorithms that eventually participate in the ensemble. The adversarial attacks are divided into seen and unseen attacks. To validate our work, the experiments are conducted using the NSLKDD, UNSW-NB15 and CICIDS17 datasets. A grid search over the ensemble is used to optimise results. The parameters used for performance evaluation are accuracy, F1-score and AUC score. It is shown that an adversarially trained ensemble classifier produces better results.
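A minimal scikit-learn sketch of an adversarially trained, grid-searched ensemble in this spirit follows; the member algorithms, the grid values, and the random-sign stand-in for the JSMA/PGD/MIM/FGSM examples are all assumptions.

```python
# A minimal scikit-learn sketch of an adversarially trained ensemble with
# grid search. Member algorithms, grid values, and the random-sign stand-in
# for JSMA/PGD/MIM/FGSM examples are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)

# Stand-in for gradient-based adversarial examples: random sign perturbations.
rng = np.random.default_rng(2)
X_adv = X + 0.1 * rng.choice([-1.0, 1.0], size=X.shape)
X_aug, y_aug = np.vstack([X, X_adv]), np.concatenate([y, y])

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=2)),
                ("svc", SVC(probability=True, random_state=2))],
    voting="soft",
)
grid = GridSearchCV(ensemble,
                    {"rf__n_estimators": [50, 100], "svc__C": [0.1, 1.0]},
                    cv=3, scoring="f1")
grid.fit(X_aug, y_aug)
print("best params:", grid.best_params_)
```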
Generative adversarial attacks against intrusion detection systems using active learning
Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning, 2020
Intrusion Detection Systems (IDS) are increasingly adopting machine learning (ML)-based approaches to detect threats in computer networks due to their ability to learn underlying threat patterns/features. However, ML-based models are susceptible to adversarial attacks, wherein slight perturbations of the input features cause misclassifications. We propose a method that uses active learning and generative adversarial networks to evaluate the threat of adversarial attacks on ML-based IDS. Existing adversarial attack methods require a large amount of training data or assume knowledge of the IDS model itself (e.g., its loss function), which may not be possible in real-world settings. Our method overcomes these limitations by demonstrating the ability to compromise an IDS using limited training data and assuming no prior knowledge of the IDS model other than its binary classification output (i.e., benign or malicious). Experimental results demonstrate the ability of our proposed model to achieve a 98.86% success rate in bypassing the IDS model using only 25 labeled data points during model training. The knowledge gained by compromising the ML-based IDS can be integrated into the IDS in order to enhance its robustness against similar ML-based adversarial attacks.
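The query-efficient part of this approach can be sketched as uncertainty-sampling active learning against a black-box oracle. The oracle and surrogate models below are illustrative stand-ins; only the 25-label budget is taken from the abstract, and the GAN stage is omitted.

```python
# A minimal scikit-learn sketch of the active-learning loop described above:
# a surrogate attacker model queries the black-box IDS only for the samples
# it is least certain about. Models and data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=20, random_state=3)
ids_oracle = RandomForestClassifier(random_state=3).fit(X, y)  # black-box IDS

# Seed: query random points until both classes have been observed.
rng = np.random.default_rng(3)
labeled = []
for idx in rng.permutation(len(X)):
    labeled.append(int(idx))
    if len(np.unique(ids_oracle.predict(X[labeled]))) == 2:
        break

surrogate = LogisticRegression(max_iter=1000)
while len(labeled) < 25:  # total label budget from the abstract
    surrogate.fit(X[labeled], ids_oracle.predict(X[labeled]))
    proba = surrogate.predict_proba(X)[:, 1]
    uncertainty = -np.abs(proba - 0.5)  # closest to the decision boundary
    for idx in np.argsort(uncertainty)[::-1]:
        if idx not in labeled:
            labeled.append(int(idx))  # query the IDS for one more label
            break

agreement = (surrogate.predict(X) == ids_oracle.predict(X)).mean()
print(f"surrogate/IDS agreement with {len(labeled)} labels: {agreement:.3f}")
```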
Adversarial Machine Learning for Network Security
2019 IEEE International Symposium on Technologies for Homeland Security (HST), 2019
With the rapid growth of machine learning applications in communication networks, it is essential to understand the security issues associated with machine learning. In this paper, we choose a flow-based Deep Neural Network (DNN) classifier as a target and study various attacks on this target classifier. The target classifier detects malicious HTTP traffic (i.e., bots, C&C, etc.). We first launch an exploratory attack under a black-box assumption against the target DNN classifier. We start from a simple case in which the attacker can collect the same set of features used by the target classifier, and then consider the case in which the attacker can only collect a set of features based on its own judgement. We also design attacks with a conditional Generative Adversarial Network (cGAN) to reduce the amount of collected data required. We show that the attacker can build its own classifier to predict the target classifier's classification results with about 93% accuracy. Once the exploratory attack is successful, we can perform further attacks, e.g., an evasion attack and a causative attack, and we show that these attacks are very effective. The evasion attack can identify samples that double the error probability of the target classifier, while under the causative attack the new classifier makes classification errors on more than 60% of samples.
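The exploratory step amounts to model extraction: query the black-box target on attacker-collected traffic and train a substitute on the returned labels. The sketch below uses illustrative stand-in models and synthetic data, and omits the paper's cGAN augmentation.

```python
# A minimal sketch of the exploratory (model-extraction) attack described
# above. Model choices, feature counts, and data are illustrative
# assumptions; the cGAN-based data reduction is omitted.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=6000, n_features=30, random_state=4)
X_target, X_attacker = X[:3000], X[3000:]          # disjoint traffic captures
target = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=4).fit(X_target, y[:3000])  # black-box DNN

# The attacker labels its own capture purely by querying the target.
pseudo_labels = target.predict(X_attacker)
X_tr, X_te, y_tr, y_te = train_test_split(X_attacker, pseudo_labels,
                                          test_size=0.3, random_state=4)
substitute = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                           random_state=4).fit(X_tr, y_tr)

# Agreement with the target's decisions (the paper reports about 93%).
print("substitute/target agreement:", substitute.score(X_te, y_te))
```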
Hardening Random Forest Cyber Detectors Against Adversarial Attacks
2020
Machine learning algorithms are effective in several applications, but they are not as successful when applied to intrusion detection in cyber security. Due to their high sensitivity to training data, cyber detectors based on machine learning are vulnerable to targeted adversarial attacks that involve the perturbation of initial samples. Existing defenses assume unrealistic scenarios, produce underwhelming results in non-adversarial settings, or can be applied only to machine learning algorithms that perform poorly for cyber security. We present an original methodology for countering adversarial perturbations targeting intrusion detection systems based on random forests. As a practical application, we integrate the proposed defense method into a cyber detector analyzing network traffic. The experimental results on millions of labelled network flows show that the new detector has a twofold value: it outperforms state-of-the-art detectors that are subject to adversarial ...
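The abstract does not spell out the hardening procedure, so the following is only a plausible sketch under stated assumptions: retraining the random forest on perturbed copies of the malicious flows so that small feature manipulations no longer flip the detector. The perturbation magnitude and the synthetic data are illustrative, not the paper's method.

```python
# A plausible (not the paper's) hardening sketch: augment training with
# perturbed malicious flows and compare recall on perturbed test flows.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=8000, n_features=15, weights=[0.8],
                           random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

# Adversary-style perturbations of the malicious (y == 1) training flows.
rng = np.random.default_rng(5)
mal = X_tr[y_tr == 1]
perturbed = mal + rng.uniform(-0.2, 0.2, size=mal.shape)

hardened = RandomForestClassifier(random_state=5).fit(
    np.vstack([X_tr, perturbed]),
    np.concatenate([y_tr, np.ones(len(perturbed), dtype=int)]))
baseline = RandomForestClassifier(random_state=5).fit(X_tr, y_tr)

# Evaluate both detectors on perturbed malicious test flows.
mal_te = X_te[y_te == 1] + rng.uniform(-0.2, 0.2, size=X_te[y_te == 1].shape)
print("baseline recall on perturbed flows:", baseline.predict(mal_te).mean())
print("hardened recall on perturbed flows:", hardened.predict(mal_te).mean())
```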
Machine Learning for Network Intrusion Detection—A Comparative Study
Future Internet
Modern society has quickly evolved to utilize communication and data-sharing media with the advent of the internet and electronic technologies. However, these technologies have created new opportunities for attackers to gain access to confidential electronic resources. As a result, data breaches have significantly impacted our society in multiple ways. To mitigate this situation, researchers have developed multiple security countermeasure techniques known as Network Intrusion Detection Systems (NIDS). Despite these techniques, attackers have developed new strategies to gain unauthorized access to resources. In this work, we propose using machine learning (ML) to develop a NIDS capable of detecting modern attack types with a very high detection rate. To this end, we implement and evaluate several ML algorithms and compare their effectiveness using a state-of-the-art dataset containing modern attack types. The results show that the random forest model outperforms other models, ...
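The comparison itself reduces to a loop over candidate models on a common split. The sketch below uses synthetic data and an assumed model list in place of the paper's dataset and full algorithm set.

```python
# A minimal sketch of the comparative study described above: train several
# scikit-learn classifiers on the same split and compare detection metrics.
# Model list and synthetic data are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=25, random_state=6)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=6)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(random_state=6),
    "RandomForest": RandomForestClassifier(random_state=6),
    "kNN": KNeighborsClassifier(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f}, "
          f"f1={f1_score(y_te, pred):.3f}")
```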