The purpose of this paper is to describe a software package that can be used for performing such routine tasks as controlling listening experiments (e.g., simple labeling, discrimination, sentence intelligibility, and magnitude estimation), recording responses and response latencies, analyzing and plotting the results of those experiments, displaying instructions, and making scripted audio-recordings. The software runs under Windows and is controlled by creating text files that allow the experimenter to specify key features of the experiment such as the stimuli that are to be presented, the randomization scheme, interstimulus and intertrial intervals, the format of the output file, and the layout of response alternatives on the screen. Although the software was developed primarily with speech-perception and psychoacoustics research in mind, it has uses in other areas as well, such as written or auditory word recognition, written or auditory sentence processing, and visual perception.
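The package's control files are not reproduced in the abstract, so the Python sketch below is only a hypothetical illustration of the kind of trial loop such a text specification drives: randomize the stimulus order, play each stimulus, record the response and its latency, and wait out an intertrial interval. The names `run_block`, `play_sound`, `get_response`, and `iti_ms` are invented for this sketch and are not part of the package.

```python
import random
import time

def run_block(stimuli, iti_ms=1000, out_path="responses.txt"):
    """Run one randomized block of trials and log responses with latencies.

    The argument names and the tab-separated output layout are illustrative
    assumptions, not the package's actual file syntax.
    """
    order = stimuli[:]            # copy so the caller's list is left untouched
    random.shuffle(order)         # simple full randomization of trial order
    with open(out_path, "w") as out:
        out.write("trial\tstimulus\tresponse\tlatency_ms\n")
        for trial, wav in enumerate(order, start=1):
            play_sound(wav)                      # placeholder for audio playback
            t0 = time.perf_counter()
            response = get_response()            # placeholder for on-screen response entry
            latency_ms = (time.perf_counter() - t0) * 1000
            out.write(f"{trial}\t{wav}\t{response}\t{latency_ms:.0f}\n")
            time.sleep(iti_ms / 1000)            # intertrial interval

def play_sound(path):
    # Stand-in for playback; on Windows, winsound.PlaySound could go here.
    pass

def get_response():
    # Stand-in for collecting a labeled response from the response alternatives.
    return input("response> ")

if __name__ == "__main__":
    run_block(["ba.wav", "da.wav", "ga.wav"], iti_ms=500)
```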
DOI: http://dx.doi.org/10.1044/1092-4388(2005/005)

Related articles:
J Exp Psychol Hum Percept Perform
January 2025
School of Psychology, University of Sussex.
Human listeners have a remarkable capacity to adapt to severe distortions of the speech signal. Previous work indicates that perceptual learning of degraded speech reflects changes to sublexical representations, though the precise format of these representations has not yet been established. Inspired by the neurophysiology of auditory cortex, we hypothesized that perceptual learning involves changes to perceptual representations that are tuned to acoustic modulations of the speech signal.
Cogn Affect Behav Neurosci
January 2025
Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France.
Focusing on a single source within a complex auditory scene is challenging. M/EEG-based auditory attention detection (AAD) makes it possible to identify which stream an individual is attending to within a set of multiple concurrent streams. The high interindividual variability in AAD performance is often attributed to physiological factors and to the signal-to-noise ratio of the neural data.
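The snippet above does not describe this study's decoding approach, but as a rough illustration of what stream selection in AAD can look like, a common baseline is to reconstruct the attended speech envelope from M/EEG with a pretrained linear decoder and pick the candidate stream whose envelope correlates best with the reconstruction. In the Python sketch below, the decoder weights and all signals are random placeholders.

```python
import numpy as np

def aad_decision(eeg, decoder, env_a, env_b):
    """Toy correlation-based auditory attention detection.

    eeg      : (samples, channels) M/EEG segment
    decoder  : (channels,) pretrained linear reconstruction weights (assumed given)
    env_a/b  : (samples,) speech envelopes of the two concurrent streams
    """
    recon = eeg @ decoder                      # reconstructed "attended" envelope
    r_a = np.corrcoef(recon, env_a)[0, 1]      # correlation with stream A's envelope
    r_b = np.corrcoef(recon, env_b)[0, 1]      # correlation with stream B's envelope
    return ("A", r_a) if r_a > r_b else ("B", r_b)

# Call with random placeholder data, just to show the intended shapes:
rng = np.random.default_rng(0)
eeg = rng.standard_normal((2000, 64))
decoder = rng.standard_normal(64)
env_a, env_b = rng.standard_normal(2000), rng.standard_normal(2000)
print(aad_decision(eeg, decoder, env_a, env_b))
```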
BMC Oral Health
January 2025
Department of Orthodontics, Faculty of Dentistry, Alexandria University, Alexandria, Egypt.
Objective: Dental occlusion and the alignment of the dentition play crucial roles in producing speech sounds. The Arabic language is particularly complex, with many varieties and geographically dependent dialects. This study investigated the relationship between malocclusion and speech abnormalities in the form of misarticulations of Arabic sounds.
Am J Speech Lang Pathol
January 2025
Good Samaritan Medical Center Foundation, Lafayette, CO.
Purpose: The aim of this study was to gauge the impact of experiential cognitive empathy training on traumatic brain injury (TBI) knowledge, awareness, confidence, and empathy in a pilot study of speech-language pathology graduate students.
Method: A descriptive, quasi-experimental, convergent parallel mixed-methods (QUAL + QUANT) intervention pilot study was conducted with a diverse convenience sample of 19 first- and second-year speech-language pathology graduate students who took part in a half-day TBI point-of-view simulation. The simulation was co-constructed through participatory design with people living with TBI, was based on Kolb's experiential learning model, and followed recommendations for point-of-view simulation ethics.
Commun Biol
January 2025
School of Psychology, Shenzhen University, Shenzhen, China.
Speech processing involves a complex interplay between sensory and motor systems in the brain, essential for early language development. Recent studies have extended this sensory-motor interaction to visual word processing, emphasizing the connection between reading and handwriting during literacy acquisition. Here we show how language-motor areas encode motoric and sensory features of language stimuli during auditory and visual perception, using functional magnetic resonance imaging (fMRI) combined with representational similarity analysis.
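Representational similarity analysis compares the geometry of neural responses with that of a stimulus model by correlating their dissimilarity matrices. The Python sketch below illustrates that comparison on made-up data; the array shapes, the correlation-distance metric, and the variable names are assumptions for the example, not the study's actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Toy representational similarity analysis (RSA): build a representational
# dissimilarity matrix (RDM) from voxel patterns and from model features,
# then rank-correlate their condensed upper triangles. Data are placeholders.
rng = np.random.default_rng(1)
n_stimuli, n_voxels, n_features = 20, 100, 8

neural_patterns = rng.standard_normal((n_stimuli, n_voxels))    # per-stimulus fMRI patterns
model_features = rng.standard_normal((n_stimuli, n_features))   # e.g. hypothesized motoric/sensory features

neural_rdm = pdist(neural_patterns, metric="correlation")  # condensed neural RDM
model_rdm = pdist(model_features, metric="correlation")    # condensed model RDM

rho, p = spearmanr(neural_rdm, model_rdm)  # rank correlation between the two geometries
print(f"RSA correlation: rho={rho:.3f}, p={p:.3f}")
```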