Article Abstract

This study examined whether listener behavior and responding by exclusion would emerge after training 3 participants with autism to tact stimuli. Tacts for 2 of 3 stimuli were directly trained using discrete-trial training and were followed by an auditory-visual discrimination probe assessing auditory-visual discrimination by naming (i.e., bidirectional naming of trained tacts) and auditory-visual discrimination by exclusion. In subsequent sessions, tacting-by-exclusion probes assessed tacts for the exclusion target (i.e., the stimulus not trained as a tact). All 3 participants demonstrated auditory-visual discrimination by naming, auditory-visual discrimination by exclusion, and tacting by exclusion across all comparisons. Results suggest that programming for learning by exclusion can be an efficient way to enhance skill acquisition.

Source
http://dx.doi.org/10.1002/jaba.927

Publication Analysis

Top Keywords

auditory-visual discrimination: 24
discrimination naming: 8
discrimination exclusion: 8
tacting exclusion: 8
exclusion: 7
discrimination: 6
auditory-visual: 5
emergence auditory-visual: 4
tacts: 4
discrimination tacts: 4
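
These counts appear to be simple term frequencies over the article's indexed text. As a rough illustration only (the site's actual extraction method is not documented here, and the text source, tokenization, and function name are assumptions), one- and two-word term frequencies like these could be computed as follows:

from collections import Counter
import re

def term_counts(text, max_n=2):
    # Tokenize to lowercase words, keeping hyphenated terms like "auditory-visual".
    words = re.findall(r"[a-z][a-z-]*", text.lower())
    counts = Counter()
    # Count every term of length 1 up to max_n words.
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts

# Hypothetical usage: abstract_text would hold the article abstract above.
abstract_text = "..."
for term, count in term_counts(abstract_text).most_common(10):
    print(f"{term}: {count}")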

Similar Publications

Fine social discrimination of siblings in mice: Implications for early detection of Alzheimer's disease.

Neurobiol Dis

January 2025

Centre de Recherches sur la Cognition Animale, Centre de Biologie Intégrative, Université de Toulouse, CNRS, UPS, 31062, France.

The ability to distinguish between individuals is crucial for social species and supports behaviors such as reproduction, hierarchy formation, and cooperation. In rodents, social discrimination relies on memory and the recognition of individual-specific cues, known as "individual signatures". While olfactory signals are central, other sensory cues - such as auditory, visual, and tactile inputs - also play a role.

Background: Evidence from the fields of evolutionary biology and neuroscience supports the theory that spatial cognition and social cognition share neural mechanisms. Rodent models are widely used to study either spatial or social cognition, but few studies have explored the interactions between the spatial and social cognitive domains due to the lack of appropriate paradigms.

New Method: Our study introduces the Vertical Maze (VM), a novel behavioral apparatus designed to measure multiple aspects of spatial and social cognition.

EEG Dataset for the Recognition of Different Emotions Induced in Voice-User Interaction.

Sci Data

October 2024

Department of Electronics and Information Engineering, Korea University, Sejong, 30019, Republic of Korea.

Electroencephalography (EEG)-based open-access datasets are available for emotion recognition studies, where external auditory/visual stimuli are used to artificially evoke pre-defined emotions. In this study, we provide a novel EEG dataset containing the emotional information induced during a realistic human-computer interaction (HCI) using a voice user interface system that mimics natural human-to-human communication. To validate our dataset via neurophysiological investigation and binary emotion classification, we applied a series of signal processing and machine learning methods to the EEG data.

Subsecond temporal processing is crucial for activities requiring precise timing. Here, we investigated perceptual learning of crossmodal (auditory-visual or visual-auditory) temporal interval discrimination (TID) and its impacts on unimodal (visual or auditory) TID performance. The research purpose was to test whether learning is based on a more abstract and conceptual representation of subsecond time, which would predict crossmodal to unimodal learning transfer.

View Article and Find Full Text PDF

In face-to-face conversations, listeners gather visual speech information from a speaker's talking face that enhances their perception of the incoming auditory speech signal. This auditory-visual (AV) speech benefit is evident even in quiet environments but is stronger in situations that require greater listening effort such as when the speech signal itself deviates from listeners' expectations. One example is infant-directed speech (IDS) presented to adults.
