Many communication standards have been proposed recently, and more are being developed, as part of a vision for dynamically composable and interoperable medical equipment. However, few have security systems that are sufficiently extensive or flexible to meet current and future safety requirements. This paper analyzes the cybersecurity of the Integrated Clinical Environment (ICE) by investigating its attack graph and applying artificial intelligence techniques that can efficiently expose the subsystems' vulnerabilities. Attack graphs are widely used for assessing network security; however, they are typically too large and complex for security administrators to comprehend and evaluate. Therefore, this paper presents a Q-learning-based attack graph analysis approach in which the attack graph generated for the Integrated Clinical Environment system serves as the environment and the agent is assumed to be the attacker. Q-learning can help determine the best route an attacker can take to damage the system as much as possible with the fewest actions. Numeric values are assigned to the attack graph to identify the most vulnerable part of the system, and the analysis is proposed for further use on larger graphs.
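The approach described above can be sketched in a few lines: treat the attack graph as a deterministic environment whose edges carry damage rewards, and let a Q-learning agent (the attacker) learn which exploit path accumulates the most discounted damage. The node names and reward values below are invented for illustration and are not taken from the paper; the discount factor partially penalizes longer paths, in line with the "fewest actions" objective.

```python
import random

# Hypothetical attack graph: nodes are compromised states, edges are
# exploits, and each edge reward approximates the damage it causes.
# All names and numbers here are illustrative assumptions.
GRAPH = {
    "entry":          [("workstation", 1), ("guest_wifi", 1)],
    "guest_wifi":     [("workstation", 2)],
    "workstation":    [("ice_supervisor", 5)],
    "ice_supervisor": [("medical_device", 10)],
    "medical_device": [],  # terminal state: the attacker's goal
}

ALPHA, GAMMA, EPS, EPISODES = 0.5, 0.9, 0.2, 2000

def q_learn(graph, start="entry"):
    """Tabular Q-learning with epsilon-greedy exploration."""
    q = {(s, a): 0.0 for s, edges in graph.items() for a, _ in edges}
    rng = random.Random(0)
    for _ in range(EPISODES):
        s = start
        while graph[s]:  # run until a terminal node is reached
            actions = [a for a, _ in graph[s]]
            if rng.random() < EPS:
                a = rng.choice(actions)              # explore
            else:
                a = max(actions, key=lambda x: q[(s, x)])  # exploit
            r = dict(graph[s])[a]
            future = max((q[(a, n)] for n, _ in graph[a]), default=0.0)
            q[(s, a)] += ALPHA * (r + GAMMA * future - q[(s, a)])
            s = a
    return q

def best_path(graph, q, start="entry"):
    """Follow the greedy policy from the entry node to a terminal node."""
    path, s = [start], start
    while graph[s]:
        s = max((a for a, _ in graph[s]), key=lambda x: q[(s, x)])
        path.append(s)
    return path
```

On a graph this small the learned greedy path can be checked by hand; the same loop scales to the larger automatically generated attack graphs the abstract targets, since the Q-table grows only with the number of edges.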
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9220416
DOI: http://dx.doi.org/10.3390/bioengineering9060253
Heliyon
January 2025
Department of Natural and Engineering Sciences, College of Applied Studies and Community Services, King Saud University, Riyadh, 11633, Saudi Arabia.
The rapid growth of Internet of Things (IoT) devices presents significant cybersecurity challenges due to their diverse and resource-constrained nature. Existing security solutions often fall short in addressing the dynamic and distributed environments of IoT systems. This study aims to propose a novel deep learning framework, SecEdge, designed to enhance real-time cybersecurity in mobile IoT environments.
PLoS One
January 2025
Department of Computer Science and Engineering at Hanyang University ERICA, Ansan-si, Gyeonggi-do, South Korea.
Privacy-preserving record linkage (PPRL) technology, crucial for linking records across datasets while maintaining privacy, is susceptible to graph-based re-identification attacks. These attacks compromise privacy and pose significant risks, such as identity theft and financial fraud. This study proposes a zero-relationship encoding scheme that minimizes the linkage between source and encoded records to enhance PPRL systems' resistance to re-identification attacks.
Nat Med
January 2025
Department of Neurosurgery, NYU Langone Health, New York, NY, USA.
The adoption of large language models (LLMs) in healthcare demands a careful analysis of their potential to spread false medical knowledge. Because LLMs ingest massive volumes of data from the open Internet during training, they are potentially exposed to unverified medical knowledge that may include deliberately planted misinformation. Here, we perform a threat assessment that simulates a data-poisoning attack against The Pile, a popular dataset used for LLM development.
Molecules
December 2024
Department of Chemistry, Faculty of Science, Cadi Ayyad University, Marrakech 40000, Morocco.
Understanding the relationship between elastic, chemical, and thermal properties is essential for predicting the behavior of SiO₂ flint aggregates during their application. In fact, the elastic properties of silica depend on chemical and heat treatment. To determine the crystallite sizes of natural SiO₂ samples before and after chemical treatment, Williamson-Hall plots and the Scherrer formula are used.
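The Scherrer formula mentioned above estimates crystallite size D from X-ray diffraction peak broadening as D = Kλ / (β cos θ), where K is a shape factor (commonly 0.9), λ the X-ray wavelength, β the peak width (FWHM) in radians, and θ the Bragg angle. The numeric inputs below are generic illustrative values, not measurements from the article:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    wavelength_nm : X-ray wavelength in nm
    fwhm_deg      : peak full width at half maximum, in degrees 2-theta
    two_theta_deg : peak position, in degrees 2-theta
    """
    beta = math.radians(fwhm_deg)             # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0) # Bragg angle = 2theta / 2
    return k * wavelength_nm / (beta * math.cos(theta))

# Example: Cu K-alpha radiation (0.15406 nm), a 0.20 deg-wide quartz
# reflection near 2-theta = 26.6 deg gives a size of roughly 40 nm.
size_nm = scherrer_size(0.15406, 0.20, 26.6)
```

The Williamson-Hall method extends this by plotting β cos θ against 4 sin θ across several peaks, so that the intercept gives the size term Kλ/D and the slope gives the microstrain, separating the two broadening contributions that the Scherrer formula alone conflates.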
Neural Netw
January 2025
School of Big Data and Computer Science, Guizhou Normal University, Guiyang 550025, China.
Graph Neural Networks (GNNs) have shown remarkable achievements and have been extensively applied in various downstream tasks, such as node classification and community detection. However, recent studies have demonstrated that GNNs are vulnerable to subtle adversarial perturbations on graphs, including node injection attacks, which negatively affect downstream tasks. Existing node injection attacks have mainly focused on a limited set of local nodes, neglecting analysis of the whole graph, which restricts the attack's effectiveness.
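A minimal sketch of why node injection works, assuming nothing about the paper's specific attack: GNN layers aggregate each node's neighborhood, so wiring a single injected node with an extreme feature vector to a target shifts the target's aggregated representation. The toy graph, features, and mean-aggregation step below are invented for illustration:

```python
def mean_aggregate(features, adj, node):
    """One GCN-style step: average the features of `node` and its neighbors."""
    hood = [node] + adj[node]
    dim = len(features[node])
    return [sum(features[v][d] for v in hood) / len(hood) for d in range(dim)]

# Tiny hypothetical graph: node 0 and two similar neighbors.
features = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [0.8, 0.2]}
adj = {0: [1, 2], 1: [0], 2: [0]}

clean = mean_aggregate(features, adj, 0)       # [0.9, 0.1]

# Node injection: add node 3 with an adversarial feature vector and
# connect it to the target node 0.
features[3] = [0.0, 5.0]
adj[3] = [0]
adj[0] = adj[0] + [3]

attacked = mean_aggregate(features, adj, 0)    # second coordinate jumps
```

A single injected edge moves the target's embedding far from its clean neighborhood average, which is exactly the lever a downstream classifier can be pushed over; whole-graph attacks, as the abstract notes, choose where to inject for maximum global effect rather than perturbing only nodes near the target.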