Machine learning (ML) is one of the main pillars of the current digital era, particularly in modern real-world applications, while Internet of Things (IoT) technology is foundational to the development of advanced intelligent systems. The convergence of ML and IoT drives significant advances across various domains, such as making IoT-based security systems smarter and more efficient. However, ML-based IoT systems are vulnerable to adversarial attacks during both the training and testing phases. An adversarial attack aims to corrupt an ML model's functionality by introducing perturbed inputs, and can therefore pose significant risks, including device malfunction, service interruption, and personal data misuse. This article examines the severity of adversarial attacks and emphasizes the importance of designing secure and robust ML models in the IoT context. A comprehensive classification of adversarial machine learning (AML) is provided, along with a systematic literature review of recent research (2020 to 2024) at the intersection of AML and IoT-based security systems. The results reveal a wide range of AML attack techniques, with the Fast Gradient Sign Method (FGSM) being the most widely employed; several studies recommend adversarial training as a defense against such attacks. Finally, open issues and main research directions are highlighted for future consideration and enhancement.
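For readers unfamiliar with the FGSM attack named above, the sketch below illustrates the basic idea in PyTorch: perturb each input in the direction that increases the model's loss, scaled by a small budget epsilon. The model, data, epsilon value, and the assumption that inputs lie in [0, 1] are illustrative choices, not details taken from the reviewed studies.

```python
# Minimal FGSM sketch (assumes a PyTorch classifier and inputs scaled to [0, 1]).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft adversarial examples: x_adv = x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step each input feature in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep perturbed inputs in the valid range

# Adversarial training, the defense most studies recommend, simply mixes such
# perturbed inputs into each training batch, e.g.:
#   loss = 0.5 * ce(model(x), y) + 0.5 * ce(model(fgsm_attack(model, x, y)), y)
```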
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11359573 | PMC |
| http://dx.doi.org/10.3390/s24165150 | DOI Listing |