Transformer or Autoencoder? Who is the ultimate adversary for attack detectors?
Laudanna S.; Di Sorbo A.; Visaggio C. A.; Canfora G.
2025-01-01
Abstract
In recent years, Machine Learning (ML) approaches have been widely adopted for computer security tasks, including network intrusion detection and malware detection. However, both linear and non-linear ML-based classifiers are vulnerable to adversarial examples crafted to deceive them. Generative Adversarial Networks (GANs) are neural-network-based architectures capable of successfully producing such adversarial samples. In this study, we compare the performance of two GAN architectures, based on either Transformer or Autoencoder networks, in two distinct domains: Network Intrusion Detection Systems (NIDS) and mobile malware detection. We evaluate their performance in terms of both effectiveness (i.e., the ability of the GAN-generated samples to reduce the detection rate of the targeted classifier) and efficiency (i.e., the capability of achieving the desired goal with fewer training epochs). Our findings reveal that the Transformer-based GAN outperforms the Autoencoder-based GAN, generating high-quality adversarial samples able to deceive both ML-based NIDS and ML-based malware detectors. Furthermore, in both scenarios, the Transformer-based architecture achieves high deception efficacy with a reduced number of training epochs. This research sheds light on the relevance of GAN architectures, particularly Transformer-based models, and on the need to consider the samples they produce when improving the robustness of ML-based security solutions.
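To make the evaluation criteria concrete, the following is a minimal, self-contained sketch (in PyTorch) of the kind of setup the abstract describes: a GAN whose generator uses a Transformer encoder to perturb malicious feature vectors so that a surrogate ML detector misclassifies them as benign. This is an illustrative assumption, not the paper's implementation; the feature dimensions, the surrogate model, the hyperparameters, and the synthetic data are all hypothetical.

```python
# Illustrative sketch only, NOT the paper's implementation: a MalGAN-style
# setup in which a Transformer-encoder generator perturbs malicious feature
# vectors to evade a surrogate ML detector. Feature sizes, the surrogate
# model, and the synthetic data below are hypothetical assumptions.
import torch
import torch.nn as nn

FEATURES, NOISE = 64, 16  # hypothetical tabular feature / noise dimensions


class TransformerGenerator(nn.Module):
    """Maps (malicious features, noise) to a non-negative additive perturbation."""

    def __init__(self, d_model=32, nhead=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(FEATURES + NOISE, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.out = nn.Linear(d_model, FEATURES)

    def forward(self, x, z):
        # For simplicity, treat the whole feature vector as a single token.
        h = self.embed(torch.cat([x, z], dim=-1)).unsqueeze(1)  # (B, 1, d_model)
        h = self.encoder(h).squeeze(1)
        return torch.relu(self.out(h))  # additive, non-negative perturbation only


# Surrogate detector standing in for the targeted black-box classifier
# (label convention: 0 = benign, 1 = malicious).
surrogate = nn.Sequential(nn.Linear(FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))

gen = TransformerGenerator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Synthetic stand-ins for malicious and benign feature vectors in [0, 1].
malicious = torch.rand(256, FEATURES)
benign = torch.rand(256, FEATURES)

for _ in range(50):
    # 1) Generator step: make adversarial samples look benign to the surrogate.
    z = torch.rand(malicious.size(0), NOISE)
    adversarial = torch.clamp(malicious + gen(malicious, z), 0.0, 1.0)
    g_loss = bce(surrogate(adversarial), torch.zeros(len(adversarial), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    # 2) Surrogate step: re-fit to separate benign from (detached) adversarial samples.
    d_in = torch.cat([benign, adversarial.detach()])
    d_lab = torch.cat([torch.zeros(len(benign), 1), torch.ones(len(adversarial), 1)])
    d_loss = bce(surrogate(d_in), d_lab)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

# "Effectiveness" in the abstract's sense: how far the detection rate drops
# once the original malicious samples are replaced by their adversarial versions.
with torch.no_grad():
    before = (torch.sigmoid(surrogate(malicious)) > 0.5).float().mean()
    after = (torch.sigmoid(surrogate(adversarial)) > 0.5).float().mean()
print(f"detection rate: {before.item():.2f} -> {after.item():.2f}")
```

Under this sketch, effectiveness corresponds to the drop in detection rate printed at the end, while efficiency would correspond to how few training epochs are needed before that drop stabilizes.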