Adversarial Attacks on Intrusion Detection Systems in In-Vehicle Networks of Connected and Autonomous Vehicles.

Sensors (Basel)

School of Computer Science and Informatics, Cardiff University, Cardiff CF10 3AT, UK.

Published: June 2024

Rapid advancements in connected and autonomous vehicles (CAVs) are fueled by breakthroughs in machine learning, yet they encounter significant risks from adversarial attacks. This study explores the vulnerabilities of machine learning-based intrusion detection systems (IDSs) within in-vehicle networks (IVNs) to adversarial attacks, shifting focus from the common research on manipulating CAV perception models. Considering the relatively simple nature of IVN data, we assess the susceptibility of IVN-based IDSs to manipulation, a crucial examination given that adversarial attacks typically exploit model complexity. We propose an adversarial attack method using a substitute IDS trained with data from the onboard diagnostic (OBD) port. In conducting these attacks under black-box conditions while adhering to realistic IVN traffic constraints, our method seeks to deceive the IDS into misclassifying in both directions: normal traffic as malicious and malicious traffic as normal. Evaluations on two IDS models, a baseline IDS and the state-of-the-art MTH-IDS, demonstrated substantial vulnerability, decreasing the F1 scores from 95% to 38% and from 97% to 79%, respectively. Notably, inducing false alarms proved particularly effective as an adversarial strategy, undermining user trust in the defense mechanism. Despite the simplicity of IVN-based IDSs, our findings reveal critical vulnerabilities that could threaten vehicle safety and necessitate careful consideration in the development of IVN-based IDSs and in formulating responses to their alarms.
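The attack pipeline described in the abstract can be illustrated with a minimal sketch: train a substitute IDS on traffic observable at the OBD port, craft perturbations against that substitute, and transfer them to the black-box target. This sketch assumes FGSM as the crafting method and tabular CAN-derived features; the architecture, feature count, and perturbation budget are illustrative placeholders, not the study's actual configuration.

```python
# Minimal sketch of a substitute-model transfer attack, assuming FGSM and
# tabular IVN features. N_FEATURES, the network shape, and eps are
# hypothetical values for illustration only.
import torch
import torch.nn as nn

N_FEATURES = 10  # hypothetical number of CAN-derived features

class SubstituteIDS(nn.Module):
    """Stand-in for an IDS trained on traffic captured at the OBD port."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 32), nn.ReLU(),
            nn.Linear(32, 2),  # class 0 = normal, class 1 = malicious
        )

    def forward(self, x):
        return self.net(x)

def fgsm_attack(model, x, y, eps, lo=0.0, hi=1.0):
    """Craft perturbed traffic on the substitute; clip to plausible ranges."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the substitute's loss, then
    # project back into valid IVN signal bounds (the traffic constraint).
    x_adv = x_adv + eps * x_adv.grad.sign()
    return torch.clamp(x_adv, lo, hi).detach()

# Perturbations crafted on the substitute are then replayed against the
# unseen target IDS, relying on adversarial transferability.
substitute = SubstituteIDS()
x = torch.rand(8, N_FEATURES)   # placeholder traffic feature windows
y = torch.randint(0, 2, (8,))   # ground-truth labels
x_adv = fgsm_attack(substitute, x, y, eps=0.05)
```

Because the loss is maximized with respect to the true label, the same procedure produces both failure modes the abstract mentions: normal samples pushed toward malicious (false alarms) and malicious samples pushed toward normal (missed detections).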


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11207422
DOI: http://dx.doi.org/10.3390/s24123848

Publication Analysis

Top Keywords

adversarial attacks (16); ivn-based idss (12); intrusion detection (8); detection systems (8); in-vehicle networks (8); connected autonomous (8); autonomous vehicles (8); adversarial (6); attacks intrusion (4); systems in-vehicle (4)

Similar Publications

Confronting both adversarial attacks and data imbalance, attaining adversarial robustness under a long-tailed distribution is a challenging problem. Adversarial training (AT) is a conventional solution for enhancing adversarial robustness: it generates adversarial examples (AEs) in a generation phase and subsequently trains on these AEs in a training phase. Existing long-tailed adversarial learning methods follow the AT framework and rebalance the AE classification in the training phase.
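As a concrete reference point, here is a minimal sketch of the two-phase AT loop the snippet describes, with a PGD-style generation phase; the model, optimizer, and hyperparameters are placeholders, and a long-tailed variant would additionally reweight or rebalance the training-phase loss.

```python
# Two-phase adversarial training: generate AEs, then train on them.
# eps, alpha, and steps are illustrative hyperparameters.
import torch
import torch.nn as nn

def pgd(model, x, y, eps=0.03, alpha=0.01, steps=5):
    """Generation phase: multi-step projected gradient attack."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps)  # stay in the eps-ball
    return x_adv.detach()

def at_step(model, opt, x, y):
    """Training phase: fit the model on the freshly generated AEs."""
    x_adv = pgd(model, x, y)
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```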


Transferable adversarial examples, which are generated by transfer-based attacks, have strong adaptability for attacking a completely unfamiliar victim model without knowledge of its architecture, parameters, or outputs. While current transfer-based attacks easily defeat the surrogate model with minor perturbations, they struggle to transfer these perturbations to unfamiliar victim models. To characterize these untransferable adversarial examples, which consist of natural examples and perturbations, we define the concept of a fuzzy domain.
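A common way to quantify the transferability this snippet refers to is the fraction of surrogate-crafted examples that also fool the victim; the helper below is a generic metric under that assumption, not the paper's fuzzy-domain construction.

```python
# Generic transferability metric: score surrogate-crafted AEs on a victim
# model that was never queried during crafting.
import torch

@torch.no_grad()
def transfer_rate(victim, x_adv, y_true):
    """Fraction of adversarial examples that also fool the victim model."""
    preds = victim(x_adv).argmax(dim=1)
    return (preds != y_true).float().mean().item()
```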


Background: A change in the output of deep neural networks (DNNs) via the perturbation of a few pixels of an image is referred to as an adversarial attack, and these perturbed images are known as adversarial samples. This study examined strategies for compromising the integrity of DNNs under stringent conditions, specifically by inducing the misclassification of medical images of disease with minimal pixel modifications.

Methods: This study used the following three publicly available datasets: the chest radiograph of emphysema (cxr) dataset, the melanocytic lesion (derm) dataset, and the Kaggle diabetic retinopathy (dr) dataset.
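To make the few-pixel setting concrete, the sketch below searches for a misclassification by randomly overwriting a small pixel budget; the study's actual optimizer is not stated in this excerpt (differential evolution is a common choice), and the shapes, budget, and trial count are placeholders.

```python
# Illustrative few-pixel attack via random search over pixel positions
# and values; n_pixels and trials are hypothetical budgets.
import torch

@torch.no_grad()
def few_pixel_attack(model, img, y_true, n_pixels=3, trials=500):
    """Randomly perturb a handful of pixels until the label flips."""
    c, h, w = img.shape
    for _ in range(trials):
        cand = img.clone()
        for _ in range(n_pixels):
            i = torch.randint(h, (1,)).item()
            j = torch.randint(w, (1,)).item()
            cand[:, i, j] = torch.rand(c)  # overwrite one pixel
        if model(cand.unsqueeze(0)).argmax(dim=1).item() != y_true:
            return cand  # misclassification achieved
    return None  # attack failed within the trial budget
```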


Machine learning is central to mainstream technology and outperforms classical approaches built on handcrafted feature design. Beyond learning feature representations automatically, it follows an end-to-end paradigm from input to output, reaching outstandingly accurate results. However, security concerns about its robustness to malicious and imperceptible perturbations have drawn attention, since humans or machines can change the predictions of programs entirely.


: A Framework for Discerning Services on Remote Medical Devices.

Sensors (Basel)

November 2024

Department of Computer Science and Engineering, Kangwon National University, 1 Kangwondaehak-gil, Chuncheon-si 24341, Republic of Korea.

In the medical domain, digital healthcare systems have become increasingly interconnected, and the DICOM Message Service Element (DIMSE) protocol plays a critical role in exchanging biomedical imaging data among different digital healthcare systems. This communication technology is used to handle sensitive information such as patient data.
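For context, the standard way to check whether a DIMSE service is reachable is a C-ECHO (verification) request; the sketch below uses the open-source pynetdicom library with a placeholder address, and is a generic probe rather than the service-discerning framework proposed in this paper.

```python
# Hedged sketch: DIMSE C-ECHO "ping" against a remote DICOM endpoint.
# The host and port below are placeholders.
from pynetdicom import AE

ae = AE(ae_title="PROBE")
# Verification SOP Class UID, the standard DIMSE echo service.
ae.add_requested_context("1.2.840.10008.1.1")

assoc = ae.associate("192.0.2.10", 104)  # placeholder host, default DICOM port
if assoc.is_established:
    status = assoc.send_c_echo()
    # Status 0x0000 indicates success; None means no valid response.
    print("C-ECHO status:", getattr(status, "Status", None))
    assoc.release()
else:
    print("Association rejected or endpoint unreachable")
```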

