Adversarial examples are carefully crafted input patterns that are surprisingly poorly classified by artificial and/or natural neural networks. Here we examine adversarial vulnerabilities in the processes responsible for learning and choice in humans. Building upon recent recurrent neural network models of choice processes, we propose a general framework for generating adversarial opponents that can shape the choices of individuals in particular decision-making tasks toward the behavioral patterns desired by the adversary. We show the efficacy of the framework through three experiments involving action selection, response inhibition, and social decision-making. We further investigate the strategy used by the adversary in order to gain insights into the vulnerabilities of human choice. The framework may find applications across behavioral sciences in helping detect and avoid flawed choice.
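To make the idea concrete, here is a minimal, purely illustrative sketch (not the authors' implementation): a small recurrent network predicts a subject's next choice from the history of past choices and rewards, and a greedy adversary picks the reward on each trial that maximizes the predicted probability of its target action on the next trial. The names `ChoiceRNN` and `greedy_adversary`, the architecture, and the greedy search are all assumptions made for this sketch.

```python
# Illustrative sketch only: an RNN "model of the learner" plus a greedy adversary
# that assigns rewards to bias predicted choices toward a target action.
import torch
import torch.nn as nn

class ChoiceRNN(nn.Module):
    """Predicts the probability of each action on the next trial from the
    history of (previous action, previous reward) pairs."""
    def __init__(self, n_actions: int = 2, hidden: int = 16):
        super().__init__()
        self.gru = nn.GRU(input_size=n_actions + 1, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_actions)

    def forward(self, prev_action_onehot, prev_reward, h=None):
        # prev_action_onehot: (B, T, n_actions), prev_reward: (B, T, 1)
        x = torch.cat([prev_action_onehot, prev_reward], dim=-1)
        out, h = self.gru(x, h)
        return torch.softmax(self.readout(out), dim=-1), h

def greedy_adversary(model, prev_action, h, target_action, n_actions=2):
    """Pick the reward (0 or 1) for the observed action that maximizes the
    model's predicted probability of the adversary's target action."""
    best_reward, best_p = 0.0, -1.0
    for reward in (0.0, 1.0):
        a = torch.zeros(1, 1, n_actions)
        a[0, 0, prev_action] = 1.0
        r = torch.full((1, 1, 1), reward)
        with torch.no_grad():
            p_next, _ = model(a, r, h)
        p_target = p_next[0, -1, target_action].item()
        if p_target > best_p:
            best_p, best_reward = p_target, reward
    return best_reward, best_p
```

Run trial by trial, such an adversary produces a reward schedule biased toward the target action. In the paper, the adversaries are built on richer learner models fit to human behavioral data; everything above is schematic.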


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7682379
DOI: http://dx.doi.org/10.1073/pnas.2016921117

Publication Analysis

Top Keywords

adversarial vulnerabilities (8)
vulnerabilities human (8)
adversarial (4)
human decision-making (4)
decision-making adversarial (4)
adversarial examples (4)
examples carefully (4)
carefully crafted (4)
crafted input (4)
input patterns (4)

Similar Publications

The increasing reliance on deep neural network-based object detection models in various applications has raised significant security concerns due to their vulnerability to adversarial attacks. In physical 3D environments, existing adversarial attacks that target object detection (3D-AE) face significant challenges. These attacks often require large and dispersed modifications to objects, making them easily noticeable and reducing their effectiveness in real-world scenarios.


GAN-based data reconstruction attacks in split learning.

Neural Netw

January 2025

Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China.

Owing to its distinctive distributed, privacy-preserving architecture, split learning has found widespread application in scenarios where client-side computational resources are limited. Unlike federated learning, in which each client retains the whole model, split learning partitions the model into two segments held separately by the server and the client, so neither party has direct access to the complete model structure, which strengthens its resilience against attacks. However, existing studies have demonstrated that even with access restricted to partial model outputs, split learning remains susceptible to data reconstruction attacks.
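As a rough illustration of the split-learning setup described above (the layer sizes, the cut point, and the in-process "transfer" of activations are all simplifying assumptions; `client_model`, `server_model`, and `training_step` are names invented for this sketch):

```python
# Minimal sketch of a split-learning partition: the raw data stays on the
# client, and only the cut-layer activations ("smashed data") reach the server.
import torch
import torch.nn as nn

# Client-side segment: processes the raw input.
client_model = nn.Sequential(nn.Linear(784, 256), nn.ReLU())

# Server-side segment: sees only the intermediate activations.
server_model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 10))

def training_step(x, y, opt_client, opt_server, loss_fn=nn.CrossEntropyLoss()):
    # Client forward pass; in a real deployment the activations would be
    # serialized and sent over the network at this point.
    smashed = client_model(x)
    # Server completes the forward pass and computes the loss.
    logits = server_model(smashed)
    loss = loss_fn(logits, y)
    # Backward pass: gradients at the cut layer flow back to the client.
    opt_client.zero_grad(); opt_server.zero_grad()
    loss.backward()
    opt_client.step(); opt_server.step()
    return loss.item()
```

Those cut-layer activations are precisely the partial model outputs that GAN-based reconstruction attacks attempt to invert back into the client's private data.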


Large visual language models such as Contrastive Language-Image Pre-training (CLIP), despite their excellent performance, are highly vulnerable to adversarial examples. This work investigates the accuracy and robustness of visual language models (VLMs) from a novel multi-modal perspective. We propose a multi-modal fine-tuning method called Multi-modal Depth Adversarial Prompt Tuning (MDAPT), which guides the generation of visual prompts through text prompts to improve the accuracy and adversarial robustness of visual language models.
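The abstract gives few implementation details, so the following is only a generic sketch of prompt tuning on adversarially perturbed inputs, not MDAPT itself: learnable prompt vectors are prepended to the token sequence of a frozen encoder, and only those prompts receive gradients. `PromptedEncoder` and `pgd` are hypothetical stand-ins, not APIs from the paper or from CLIP.

```python
# Highly simplified sketch of adversarial prompt tuning (NOT the MDAPT method):
# a frozen backbone plus a small set of learnable prompt tokens, trained on
# PGD-perturbed images while the backbone weights stay fixed.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    """Stand-in for a frozen encoder with learnable prompt tokens."""
    def __init__(self, encoder: nn.Module, n_prompts: int = 4, dim: int = 512):
        super().__init__()
        self.encoder = encoder.eval()                 # frozen backbone
        for p in self.encoder.parameters():
            p.requires_grad_(False)
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)

    def forward(self, tokens):                        # tokens: (B, T, dim)
        prompts = self.prompts.expand(tokens.size(0), -1, -1)
        return self.encoder(torch.cat([prompts, tokens], dim=1))

def pgd(images, loss_fn, eps=8 / 255, alpha=2 / 255, steps=3):
    """Basic L-infinity PGD used here to craft adversarially perturbed images."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(adv), adv)
        adv = adv.detach() + alpha * grad.sign()
        adv = (images + (adv - images).clamp(-eps, eps)).clamp(0, 1).detach()
    return adv
```

During tuning, only the prompt parameters (on the vision side, and analogously on the text side) would be updated on the PGD-perturbed inputs; how MDAPT couples the text and visual prompts is specific to the paper and not reproduced here.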


Decorrelative network architecture for robust electrocardiogram classification.

Patterns (N Y)

December 2024

Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.

To achieve adequate trust in patient-critical medical tasks, artificial intelligence must be able to recognize instances where it cannot operate confidently. Ensemble methods are deployed to estimate uncertainty, but models in an ensemble often share the same vulnerabilities to adversarial attacks. We propose an ensemble approach based on feature decorrelation and Fourier partitioning that teaches the networks diverse features, reducing the chance of perturbation-based fooling.
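A minimal sketch of one way to impose feature decorrelation across ensemble members follows; the paper's exact objective and its Fourier partitioning step are not reproduced, and `decorrelation_penalty` is an illustrative name, not the authors' code.

```python
# Illustrative feature-decorrelation penalty for an ensemble: in addition to
# each member's task loss, penalize correlation between the feature vectors
# produced by different members for the same batch.
import torch

def decorrelation_penalty(features):
    """features: list of (B, D) feature tensors, one per ensemble member.
    Returns the mean squared cross-correlation between distinct members."""
    normed = []
    for f in features:
        f = f - f.mean(dim=0, keepdim=True)            # center each feature dim
        normed.append(f / (f.norm(dim=0, keepdim=True) + 1e-8))
    penalty, pairs = 0.0, 0
    for i in range(len(normed)):
        for j in range(i + 1, len(normed)):
            cross = normed[i].T @ normed[j]             # (D, D) cross-correlation
            penalty = penalty + (cross ** 2).mean()
            pairs += 1
    return penalty / max(pairs, 1)

# Typical use (hypothetical): total = sum(task_losses) + lam * decorrelation_penalty(feats)
```

The intent is that members which rely on decorrelated features are less likely to be fooled by the same perturbation, which is the property the ensemble above is designed to exploit.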


The 5G-AKA protocol, a foundational component for 5G network authentication, has been found vulnerable to various security threats, including linkability attacks that compromise user privacy. To address these vulnerabilities, we previously proposed the 5G-AKA-Forward Secrecy (5G-AKA-FS) protocol, which introduces an ephemeral key pair within the home network (HN) to support forward secrecy and prevent linkability attacks. However, a re-evaluation uncovered minor errors in the initial BAN-logic verification and highlighted the need for more rigorous security validation using formal methods.
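The forward-secrecy mechanism rests on an ephemeral key pair; below is a minimal sketch of that general idea using ephemeral X25519 Diffie-Hellman and HKDF from the Python `cryptography` package. It illustrates why discarding ephemeral keys protects past sessions, and is not the 5G-AKA-FS message flow or its BAN-logic model.

```python
# Ephemeral Diffie-Hellman sketch: session keys derived from one-time key pairs
# cannot be recovered later even if long-term credentials are compromised.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

ue_eph = X25519PrivateKey.generate()   # subscriber-side ephemeral key pair
hn_eph = X25519PrivateKey.generate()   # home-network-side ephemeral key pair

# Each side combines its ephemeral private key with the peer's public key.
shared_ue = ue_eph.exchange(hn_eph.public_key())
shared_hn = hn_eph.exchange(ue_eph.public_key())
assert shared_ue == shared_hn

# Derive a session key, then discard the ephemeral private keys.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"ephemeral-dh-demo").derive(shared_ue)
```

Because the ephemeral private keys are deleted after the session, a later compromise of the home network's long-term keys does not expose previously derived session keys, which is the forward-secrecy property 5G-AKA-FS adds to the authentication flow.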

