AI Article Synopsis

  • CHW-led maternal health programs in sub-Saharan Africa have improved facility-based deliveries and reduced maternal mortality, with mobile tech offering potential for real-time risk identification using machine learning.
  • The study uses data from the "Safer Deliveries" program in Zanzibar (2016-2019) and analyzes how manipulating different input variables affects prediction outcomes via LASSO logistic regression and adversarial attacks.
  • Results show that the variable "previous delivery location" is particularly vulnerable, with significant shifts in predicted classifications, indicating a need for strong data monitoring to prevent exploitation of the algorithm.

Article Abstract

Background: Community health worker (CHW)-led maternal health programs have contributed to increased facility-based deliveries and decreased maternal mortality in sub-Saharan Africa. The recent adoption of mobile devices in these programs provides an opportunity for real-time implementation of machine learning predictive models to identify women most at risk for home-based delivery. However, falsified data could be entered into the model to force a specific prediction result, a manipulation known as an "adversarial attack". The goal of this paper is to evaluate the algorithm's vulnerability to adversarial attacks.

Methods: The dataset used in this research is from the "Safer Deliveries" program, which operated between 2016 and 2019 in Zanzibar. We used LASSO-regularized logistic regression to develop the prediction model. We applied "One-At-a-Time" (OAT) adversarial attacks to four different types of input variables: binary (access to electricity at home), categorical (previous delivery location), ordinal (educational level), and continuous (gestational age). We evaluated the percentage of predicted classifications that changed due to these adversarial attacks.
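The Methods can be illustrated with a minimal sketch of the LASSO model and the OAT attack evaluation. The column names, categories, and synthetic data below are assumptions for illustration only; the actual Safer Deliveries dataset, preprocessing, and fitted coefficients are not reproduced here.

```python
# Sketch of an L1-penalized (LASSO) logistic regression plus a One-At-a-Time
# (OAT) adversarial attack on a single input variable. All column names,
# categories, and data are synthetic placeholders, not the study's data.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "electricity": rng.integers(0, 2, n),                           # binary
    "prev_delivery_location": rng.choice(["facility", "home"], n),  # categorical
    "education_level": rng.integers(0, 4, n),                       # ordinal
    "gestational_age_weeks": rng.normal(30, 4, n),                  # continuous
})
y = rng.integers(0, 2, n)  # 1 = facility-based delivery (synthetic labels)

# LASSO-regularized logistic regression, as named in the Methods.
model = Pipeline([
    ("prep", ColumnTransformer(
        [("cat", OneHotEncoder(drop="first"), ["prev_delivery_location"])],
        remainder="passthrough")),
    ("clf", LogisticRegression(penalty="l1", solver="liblinear", C=1.0)),
]).fit(X, y)

def oat_flip_rate(model, X, variable, new_value, row_mask):
    """Perturb a single variable for the selected rows and return the share of
    predicted classifications that change (the vulnerability metric)."""
    baseline = model.predict(X[row_mask])
    attacked = X[row_mask].copy()
    attacked[variable] = new_value
    return float(np.mean(model.predict(attacked) != baseline))

# Example attack: "previously delivered at a facility" -> "home".
facility_rows = X["prev_delivery_location"] == "facility"
print(oat_flip_rate(model, X, "prev_delivery_location", "home", facility_rows))
```

The same helper can be rerun per variable and per attack direction to reproduce the kind of directional flip rates reported in the Results.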

Results: Manipulating input variables affected prediction results. The most vulnerable variable was previous delivery location: 55.65% of predicted classifications changed when the attack switched "previously delivered at a facility" to "previously delivered at home", and 37.63% changed for the reverse attack, from "previously delivered at home" to "previously delivered at a facility".

Conclusion: This paper investigates the vulnerability of an algorithm that predicts facility-based delivery when facing adversarial attacks. By understanding the effect of adversarial attacks, programs can implement data monitoring strategies to assess for and deter these manipulations. Maintaining fidelity in algorithm deployment ensures that CHWs target the women who are actually at high risk of delivering at home.
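As one illustration of the kind of data monitoring strategy the conclusion calls for (an assumption, not a method from the paper), a program could flag reporting units whose share of "previously delivered at home" entries drifts sharply from a historical baseline. The column names, grouping key, and threshold below are hypothetical.

```python
# A possible monitoring heuristic (illustrative only, not from the paper):
# flag CHWs whose reported rate of "previously delivered at home" deviates
# sharply from a historical baseline, prompting a data audit.
import pandas as pd

def flag_home_rate_drift(records: pd.DataFrame, baseline_home_rate: float,
                         group_col: str = "chw_id",
                         threshold: float = 0.20) -> pd.Series:
    """Return per-group home-delivery rates whose absolute deviation from the
    historical baseline exceeds `threshold`. Column names, the grouping key,
    and the threshold are illustrative assumptions."""
    is_home = records["prev_delivery_location"] == "home"
    rates = is_home.groupby(records[group_col]).mean()
    return rates[(rates - baseline_home_rate).abs() > threshold]

# Toy example: CHW "A" reports only home deliveries and would be flagged.
recent = pd.DataFrame({
    "chw_id": ["A", "A", "B", "B", "B"],
    "prev_delivery_location": ["home", "home", "facility", "home", "facility"],
})
print(flag_home_rate_drift(recent, baseline_home_rate=0.35))
```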


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10205516
DOI: http://dx.doi.org/10.1016/j.heliyon.2023.e16244

Publication Analysis

Top Keywords

adversarial attacks (24)
predicted classifications (12)
facility-based delivery (8)
machine learning (8)
adversarial (8)
input variables (8)
previous delivery (8)
delivery location (8)
classifications changing (8)
changing applying (8)

Similar Publications

Secure IoT data dissemination with blockchain and transfer learning techniques.

Sci Rep

January 2025

Torrens University Australia, Fortitude Valley, QLD 4006, Leaders Institute, 76 Park Road, Woolloongabba, QLD 4102, Brisbane, Queensland, Australia.

Article Synopsis
  • Streaming IoT data is crucial for building trust in sustainable IoT solutions, but current systems often face issues with reliability, security, and transparency due to their centralized structures.
  • The research introduces TraVel, a framework that uses blockchain and transfer learning to improve the security of IoT data management, utilizing decentralized IPFS for data storage and a private Ethereum blockchain for enhanced data integrity.
  • TraVel implements self-executing smart contracts for access control and uses an adversarial domain adaptation model to filter out malicious data, ensuring only validated data is stored, with successful performance shown in simulations.

Large visual language models like Contrastive Language-Image Pre-training (CLIP), despite their excellent performance, are highly vulnerable to the influence of adversarial examples. This work investigates the accuracy and robustness of visual language models (VLMs) from a novel multi-modal perspective. We propose a multi-modal fine-tuning method called Multi-modal Depth Adversarial Prompt Tuning (MDAPT), which guides the generation of visual prompts through text prompts to improve the accuracy and performance of visual language models.


Every day, a considerable number of new cybersecurity attacks are reported, and traditional methods of defense struggle to keep up with them. In the current context of the digital era, where industrial environments handle large data volumes, new cybersecurity solutions are required, and intrusion detection systems (IDSs) based on artificial intelligence (AI) algorithms offer an answer to this critical issue. This paper presents an approach for implementing a generic model of a network-based intrusion detection system for Industry 4.0.


Improving the Robustness of Deep-Learning Models in Predicting Hematoma Expansion from Admission Head CT.

AJNR Am J Neuroradiol

January 2025

From the Department of Radiology (A.T.T., D.Z., D.K., S. Payabvash) and Neurology (S. Park), NewYork-Presbyterian/Columbia University Irving Medical Center, Columbia University, New York, NY; Department of Radiology and Biomedical Imaging (G.A., A.M.) and Neurology (G.J.F., K.N.S.), Yale School of Medicine, New Haven, CT; Zeenat Qureshi Stroke Institute and Department of Neurology (A.I.Q.), University of Missouri, Columbia, MO; Department of Neurosurgery (S.M.), Icahn School of Medicine at Mount Sinai, Mount Sinai Hospital, New York, NY; and Department of Neurology (S.B.M.), Weill Cornell Medical College, Cornell University, New York, NY.

Background And Purpose: Robustness against input data perturbations is essential for deploying deep-learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans to increase deep-learning models' prediction errors. Testing deep-learning model performance on examples of adversarial images provides a measure of robustness, and including adversarial images in the training set can improve the model's robustness.


Decorrelative network architecture for robust electrocardiogram classification.

Patterns (N Y)

December 2024

Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.

To achieve adequate trust in patient-critical medical tasks, artificial intelligence must be able to recognize instances where it cannot operate confidently. Ensemble methods are deployed to estimate uncertainty, but models in an ensemble often share the same vulnerabilities to adversarial attacks. We propose an ensemble approach based on feature decorrelation and Fourier partitioning for teaching networks diverse features, reducing the chance of perturbation-based fooling.

