This study examined whether listener behavior and responding by exclusion would emerge after training 3 participants with autism to tact stimuli. Tacts for 2 of 3 stimuli were directly trained using discrete-trial training; training was followed by an auditory-visual discrimination probe assessing discrimination by naming (i.e., bidirectional naming of trained tacts) and discrimination by exclusion. In subsequent sessions, tacting-by-exclusion probes assessed tacts for the exclusion target (i.e., the stimulus not trained as a tact). All 3 participants demonstrated auditory-visual discrimination by naming, auditory-visual discrimination by exclusion, and tacting by exclusion across all comparisons. Results suggest that programming for learning by exclusion can be an efficient way to enhance skill acquisition.
DOI: http://dx.doi.org/10.1002/jaba.927
Neurobiol Dis
January 2025
Centre de Recherches sur la Cognition Animale, Centre de Biologie Intégrative, Université de Toulouse, CNRS, UPS, 31062, France.
The ability to distinguish between individuals is crucial for social species and supports behaviors such as reproduction, hierarchy formation, and cooperation. In rodents, social discrimination relies on memory and the recognition of individual-specific cues, known as "individual signatures". While olfactory signals are central, other sensory cues, such as auditory, visual, and tactile inputs, also play a role.
Background: Evidence from the fields of evolutionary biology and neuroscience supports the theory that spatial cognition and social cognition share neural mechanisms. Rodent models are widely used to study either spatial or social cognition, but few studies have explored the interactions between the spatial and social cognitive domains due to the lack of appropriate paradigms.
New Method: Our study introduces the Vertical Maze (VM), a novel behavioral apparatus designed to measure multiple aspects of spatial and social cognition.
Sci Data
October 2024
Department of Electronics and Information Engineering, Korea University, Sejong, 30019, Republic of Korea.
Electroencephalography (EEG)-based open-access datasets are available for emotion recognition studies, where external auditory/visual stimuli are used to artificially evoke pre-defined emotions. In this study, we provide a novel EEG dataset containing the emotional information induced during a realistic human-computer interaction (HCI) using a voice user interface system that mimics natural human-to-human communication. To validate our dataset via neurophysiological investigation and binary emotion classification, we applied a series of signal processing and machine learning methods to the EEG data.
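The abstract does not specify which signal-processing or machine-learning methods the authors applied, so the sketch below is only illustrative: a common baseline for binary emotion classification from EEG is band-power features fed to a linear classifier. The sampling rate, band definitions, array shapes, and synthetic data here are assumptions, not details of the published dataset.

```python
# Minimal sketch of a binary EEG emotion-classification baseline.
# ASSUMPTIONS: sampling rate, band choices, epoch layout, and the
# synthetic stand-in data are illustrative, not the authors' pipeline.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 250  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) ->
    (n_trials, n_channels * n_bands) mean Welch PSD per band and channel."""
    freqs, psd = welch(epochs, fs=FS, nperseg=FS * 2, axis=-1)
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.stack(feats, axis=-1).reshape(len(epochs), -1)

# Synthetic stand-in: 80 trials, 32 channels, 4-second epochs.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((80, 32, FS * 4))
y = rng.integers(0, 2, 80)  # binary emotion labels

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, band_power_features(X_raw), y, cv=5)
print(f"CV accuracy: {scores.mean():.2f}")
```

A linear model on band powers is a deliberately simple choice; it gives an interpretable reference point against which more elaborate feature sets or deep models on the same dataset can be compared.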
Perception
November 2024
Peking University, China.
Subsecond temporal processing is crucial for activities requiring precise timing. Here, we investigated perceptual learning of crossmodal (auditory-visual or visual-auditory) temporal interval discrimination (TID) and its impacts on unimodal (visual or auditory) TID performance. The purpose was to test whether learning is based on a more abstract, conceptual representation of subsecond time, which would predict crossmodal-to-unimodal learning transfer.
J Cogn Neurosci
November 2023
The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia.
In face-to-face conversations, listeners gather visual speech information from a speaker's talking face that enhances their perception of the incoming auditory speech signal. This auditory-visual (AV) speech benefit is evident even in quiet environments but is stronger in situations that require greater listening effort, such as when the speech signal itself deviates from listeners' expectations. One example is infant-directed speech (IDS) presented to adults.