Open source software for experiment design and control.

J Speech Lang Hear Res

Department of Speech Pathology and Audiology, Western Michigan University, Kalamazoo, MI 49008, USA.

Published: February 2005

The purpose of this paper is to describe a software package that can be used for performing such routine tasks as controlling listening experiments (e.g., simple labeling, discrimination, sentence intelligibility, and magnitude estimation), recording responses and response latencies, analyzing and plotting the results of those experiments, displaying instructions, and making scripted audio-recordings. The software runs under Windows and is controlled by creating text files that allow the experimenter to specify key features of the experiment such as the stimuli that are to be presented, the randomization scheme, interstimulus and intertrial intervals, the format of the output file, and the layout of response alternatives on the screen. Although the software was developed primarily with speech-perception and psychoacoustics research in mind, it has uses in other areas as well, such as written or auditory word recognition, written or auditory sentence processing, and visual perception.
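The text-file-driven workflow the abstract describes (a stimulus list, a randomization scheme, interstimulus intervals, and an output file of responses and latencies) can be illustrated with a minimal Python sketch. This is a hypothetical illustration of the general idea only; the specification dictionary, function names, and CSV layout below are assumptions, not the package's actual file format or API, which the abstract does not specify.

```python
import csv
import io
import random
import time

# Hypothetical experiment specification, in the spirit of the text files
# the abstract describes (stimuli, randomization, timing, response layout).
SPEC = {
    "stimuli": ["ba.wav", "da.wav", "ga.wav"],
    "repetitions": 2,               # each stimulus presented twice
    "randomize": True,              # shuffle the trial order
    "isi_ms": 500,                  # interstimulus interval (not enforced here)
    "responses": ["b", "d", "g"],   # labeling alternatives shown on screen
}


def build_trial_list(spec, seed=0):
    """Expand stimuli x repetitions into a trial list, optionally shuffled."""
    trials = spec["stimuli"] * spec["repetitions"]
    if spec["randomize"]:
        random.Random(seed).shuffle(trials)
    return trials


def run_session(spec, get_response, seed=0):
    """Present each trial, collect a response and latency, return CSV text."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["trial", "stimulus", "response", "latency_ms"])
    for i, stim in enumerate(build_trial_list(spec, seed), start=1):
        t0 = time.perf_counter()
        resp = get_response(stim)    # stand-in for playback and a button press
        latency_ms = (time.perf_counter() - t0) * 1000.0
        writer.writerow([i, stim, resp, f"{latency_ms:.1f}"])
    return out.getvalue()


# Simulated listener: answers with the first letter of each stimulus file name.
print(run_session(SPEC, lambda stim: stim[0]))
```

A real session would replace the `get_response` stand-in with audio playback and on-screen response collection; the point here is only the structure: an experimenter-editable specification drives trial construction, randomization, and the format of the logged output.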

Source
http://dx.doi.org/10.1044/1092-4388(2005/005)

Similar Publications

Perceptual learning of modulation filtered speech.

J Exp Psychol Hum Percept Perform

January 2025

School of Psychology, University of Sussex.

Human listeners have a remarkable capacity to adapt to severe distortions of the speech signal. Previous work indicates that perceptual learning of degraded speech reflects changes to sublexical representations, though the precise format of these representations has not yet been established. Inspired by the neurophysiology of auditory cortex, we hypothesized that perceptual learning involves changes to perceptual representations that are tuned to acoustic modulations of the speech signal.

Attentional Inhibition Ability Predicts Neural Representation During Challenging Auditory Streaming.

Cogn Affect Behav Neurosci

January 2025

Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France.

Focusing on a single source within a complex auditory scene is challenging. M/EEG-based auditory attention detection (AAD) makes it possible to detect which stream an individual is attending to within a set of multiple concurrent streams. The high interindividual variability in auditory attention detection performance is often attributed to physiological factors and the signal-to-noise ratio of neural data.

Objective: Dental occlusion and the alignment of the dentition play crucial roles in producing speech sounds. The Arabic language is specifically complex, with many varieties and geographically dependent dialects. This study investigated the relationship between malocclusion and speech abnormalities in the form of misarticulations of Arabic sounds.

Purpose: The aim of this study was to gauge the impacts of cognitive empathy training experiential learning on traumatic brain injury (TBI) knowledge, awareness, confidence, and empathy in a pilot study of speech-language pathology graduate students.

Method: A descriptive, quasi-experimental, convergent parallel mixed methods (QUAL + QUANT) intervention pilot study was conducted with a diverse convenience sample of 19 first- and second-year speech-language pathology graduate students who engaged in a half-day TBI point-of-view simulation. The simulation was co-constructed through a participatory design with people living with TBI, based on Kolb's experiential learning model, and followed the recommendations for point-of-view simulation ethics.

Speech processing involves a complex interplay between sensory and motor systems in the brain, essential for early language development. Recent studies have extended this sensory-motor interaction to visual word processing, emphasizing the connection between reading and handwriting during literacy acquisition. Here we show how language-motor areas encode motoric and sensory features of language stimuli during auditory and visual perception, using functional magnetic resonance imaging (fMRI) combined with representational similarity analysis.
