Background: Community health worker (CHW)-led maternal health programs have contributed to increased facility-based deliveries and decreased maternal mortality in sub-Saharan Africa. The recent adoption of mobile devices in these programs creates an opportunity to run machine learning prediction models in real time to identify the women most at risk of home-based delivery. However, falsified data could be entered into the model to obtain a specific prediction result, a manipulation known as an "adversarial attack". The goal of this paper is to evaluate the algorithm's vulnerability to such attacks.
Methods: The dataset comes from the Safer Deliveries program, which operated in Zanzibar between 2016 and 2019. We developed the prediction model using LASSO-regularized logistic regression. We then applied One-At-a-Time (OAT) adversarial attacks to four types of input variables: binary (access to electricity at home), categorical (previous delivery location), ordinal (educational level), and continuous (gestational age). We evaluated the percentage of predicted classifications that changed under these attacks.
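As a rough illustration of this setup, the sketch below fits an L1 (LASSO)-regularized logistic regression and applies a one-at-a-time perturbation to a single input variable. It assumes a scikit-learn pipeline and uses synthetic stand-in data; the column definitions, labels, and hyperparameters are illustrative assumptions and do not come from the Safer Deliveries dataset or the authors' code.

```python
# Minimal sketch: LASSO (L1)-regularized logistic regression plus a
# one-at-a-time (OAT) adversarial attack on one input variable.
# Synthetic data only; column meanings are stand-ins for the paper's variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Four variable types analogous to those in the paper:
# binary (electricity at home), binary indicator for previous facility delivery,
# ordinal (educational level), continuous (gestational age in weeks).
X = np.column_stack([
    rng.integers(0, 2, n),      # access to electricity at home
    rng.integers(0, 2, n),      # previous delivery location (1 = facility)
    rng.integers(0, 4, n),      # educational level (0-3)
    rng.normal(38, 2, n),       # gestational age
])
y = rng.integers(0, 2, n)       # synthetic outcome: 1 = facility-based delivery

# LASSO-regularized logistic regression (L1 penalty).
model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
model.fit(X, y)

# OAT attack: falsify only the "previous delivery location" column,
# flipping recorded facility deliveries (1) to home deliveries (0).
X_attacked = X.copy()
mask = X_attacked[:, 1] == 1
X_attacked[mask, 1] = 0

pred_before = model.predict(X[mask])
pred_after = model.predict(X_attacked[mask])
print(f"Classifications changed: {100 * np.mean(pred_before != pred_after):.2f}%")
```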
Results: Manipulating input variables affected prediction results. The most vulnerable variable was previous delivery location: 55.65% of predicted classifications changed when the recorded value was altered from a previous facility delivery to a previous home delivery, and 37.63% changed when it was altered in the opposite direction, from home to facility.
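The vulnerability measure reported here is simply the share of predictions that flip under each attack direction. A small self-contained helper (hypothetical, not taken from the paper) makes the computation explicit; the paper reports analogous percentages (55.65% and 37.63%) on the real data.

```python
import numpy as np

def percent_classifications_changed(pred_original, pred_attacked):
    """Percent of predicted classes that differ after an OAT adversarial attack."""
    pred_original = np.asarray(pred_original)
    pred_attacked = np.asarray(pred_attacked)
    return 100.0 * np.mean(pred_original != pred_attacked)

# Hypothetical predictions before/after flipping "previous delivery location".
print(percent_classifications_changed([1, 1, 0, 1], [0, 1, 0, 0]))  # 50.0
```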
Conclusion: This paper investigates the vulnerability of an algorithm that predicts facility-based delivery to adversarial attacks. By understanding the effect of such attacks, programs can implement data monitoring strategies to detect and deter these manipulations. Maintaining fidelity in algorithm deployment ensures that CHWs target the women who are actually at high risk of delivering at home.
Full text (PMC): http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10205516
DOI: http://dx.doi.org/10.1016/j.heliyon.2023.e16244