This paper introduces a multi-layer ensemble defense mechanism to enhance the robustness of ML-based Network Intrusion Detection Systems (NIDS) against adversarial attacks. The approach combines a stacking classifier layer with an autoencoder layer, where the autoencoder verifies benign classifications made by the stacking classifier. Adversarial training on examples generated with a GAN and with FGSM further improves resilience, yielding increased robustness on the UNSW-NB15 and NSL-KDD datasets.
A multi-layer defense can significantly boost NIDS resilience against adversarial attacks, even when those attacks are crafted using GANs and FGSM.
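Of the two attack methods named above, FGSM is the simpler: it perturbs each input feature by a fixed step in the direction that increases the model's loss. A minimal sketch, assuming a logistic-regression surrogate model (the paper's actual NIDS models, features, and epsilon values are not specified here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """Shift x by eps along the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(x @ w + b)      # predicted probability of the malicious class
    grad_x = (p - y) * w        # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy example: a "malicious" flow (y = 1) nudged toward a benign-looking score.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.8, -0.3, 1.2])
x_adv = fgsm_perturb(x, 1.0, w, b, eps=0.2)
print(sigmoid(x @ w + b), sigmoid(x_adv @ w + b))  # adversarial score is lower
```

The same gradient-sign step generalizes to deeper models; only the gradient computation changes.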
Adversarial examples can represent a serious threat to machine learning (ML) algorithms. If used to manipulate the behaviour of ML-based Network Intrusion Detection Systems (NIDS), they can jeopardize network security. In this work, we aim to mitigate such risks by increasing the robustness of NIDS towards adversarial attacks. To that end, we explore two adversarial methods for generating malicious network traffic. The first method is based on Generative Adversarial Networks (GAN) and the second is the Fast Gradient Sign Method (FGSM). The adversarial examples generated by these methods are then used to evaluate a novel multi-layer defense mechanism, specifically designed to mitigate the vulnerability of ML-based NIDS. Our solution consists of a first layer of stacking classifiers and a second layer based on an autoencoder. If the incoming network data are classified as benign by the first layer, the second layer is activated to verify that the decision made by the stacking classifier is correct. We also incorporate adversarial training to further improve the robustness of our solution. Experiments on two datasets, namely UNSW-NB15 and NSL-KDD, demonstrate that the proposed approach increases resilience to adversarial attacks.
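The two-layer decision flow described in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the autoencoder here is a linear, PCA-style model fitted on benign traffic only, and the classifier, feature set, and threshold are placeholders:

```python
import numpy as np

class LinearAutoencoder:
    """Tiny linear autoencoder trained on benign traffic only (via SVD)."""
    def fit(self, X_benign, k=2):
        # PCA-style encoder/decoder: reconstruct with the top-k components.
        self.mean = X_benign.mean(axis=0)
        _, _, vt = np.linalg.svd(X_benign - self.mean, full_matrices=False)
        self.components = vt[:k]
        return self

    def reconstruction_error(self, x):
        z = (x - self.mean) @ self.components.T       # encode
        x_hat = z @ self.components + self.mean       # decode
        return float(np.linalg.norm(x - x_hat))

def two_layer_decision(x, classifier, autoencoder, threshold):
    """Layer 1 classifies; layer 2 vetoes suspicious 'benign' verdicts."""
    if classifier(x) == "malicious":
        return "malicious"
    # Benign verdict: confirm the sample resembles benign training traffic.
    if autoencoder.reconstruction_error(x) > threshold:
        return "malicious"      # autoencoder overrides the first layer
    return "benign"
```

The design intent is asymmetric: malicious verdicts from the first layer pass through directly, while benign verdicts are double-checked, so an adversarial example that fools the classifier must also reconstruct well under a model that has only ever seen benign traffic.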