A Critique on the (Mis)Use of Feature-Space Attacks for Adversarial Machine Learning in NIDS

Catillo, Marta; Pecchia, Antonio; Repola, Antonio; Villano, Umberto
2025-01-01

Abstract

Machine and deep learning are extensively used to build network intrusion detection system (NIDS) models that recognize malicious classes of traffic; however, NIDS models are known to be vulnerable to adversarial attacks. Feature-space attacks perturb the input data and can induce severe misclassifications in a victim NIDS. This practical experience report puts forward the intuition that the observed misclassifications might be due to mere distortions of the underlying classes of traffic rather than to genuine vulnerabilities of the NIDS model under test. The experiments reported are based on the application of two popular adversarial attacks to normal and Denial of Service traffic. The results provide an initial warning about the (mis)use of feature-space attacks in NIDS and suggest that lessons learned on the adversarial robustness, training, and defense of NIDS that rely on feature-space attacks should be treated with extreme caution.
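
The abstract describes feature-space attacks as perturbations applied directly to the (preprocessed) input features of a victim NIDS. As a purely illustrative aid, the sketch below shows how such a perturbation could be generated with the fast gradient sign method (FGSM) against a stand-in tabular classifier; the abstract does not disclose which two attacks or which model the paper uses, so the attack choice, feature count, model architecture, and perturbation budget here are all assumptions.

```python
# Minimal sketch of a feature-space attack (FGSM) on a tabular NIDS record.
# Model shape, feature count, label, and epsilon are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_features = 20                       # hypothetical number of flow features
model = nn.Sequential(                # stand-in binary classifier (normal vs. DoS)
    nn.Linear(n_features, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

x = torch.rand(1, n_features)         # one already-preprocessed traffic record
y = torch.tensor([1])                 # assume it is labeled as DoS
x.requires_grad_(True)

# Gradient of the loss w.r.t. the input features drives the perturbation.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

epsilon = 0.05                        # perturbation budget (illustrative)
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# Compare predictions on the clean and perturbed records.
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```

Note that the sign-based update treats every feature as freely and independently modifiable, which is exactly the concern raised in the abstract: the perturbed record may flip the classifier's decision while no longer corresponding to realizable traffic of the original class.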
Keywords: adversarial attacks; deep learning; Denial of Service; intrusion detection; security


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12070/71746