Powerful, portable, off-the-shelf handheld devices, such as tablet-based computers (e.g., iPad®, Galaxy®) or portable multimedia players (e.g., iPod®), can be adapted to function as speech-generating devices for individuals with autism spectrum disorders or related developmental disabilities. This paper reviews the research in this new and rapidly growing area and delineates an agenda for future investigations. In general, participants using these devices acquired verbal repertoires quickly. Studies comparing these devices with picture exchange or manual sign language found that acquisition was often quicker with a tablet computer and that the vast majority of participants preferred the device over picture exchange or manual sign language. Future research on interface design, user experience, and extended verbal repertoires is recommended.
DOI: http://dx.doi.org/10.1007/s10803-014-2314-4
Sci Rep
January 2025
Office for the Advancement of Educational Information, Chengdu Normal University, Chengdu, 610000, China.
In the training of student teachers, simulated teaching is a key method for developing teaching skills. However, traditional evaluations of simulated teaching typically rely on direct teacher involvement and guidance, which increases teachers' workload and limits the opportunities for student teachers to practice independently. This paper introduces a Retrieval-Augmented Generation (RAG) framework built from open-source tools (such as FastChat for model inference and Whisper for speech-to-text) combined with a local large language model (LLM) for audio analysis of simulated teaching.
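The abstract describes a three-stage pipeline: speech-to-text, retrieval of reference material, and generation with a locally served LLM. As a rough illustration only (not the authors' implementation), the sketch below wires the named open-source tools together in that order; the rubric passages, audio file name, FastChat endpoint, and model name are assumptions introduced for the example.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# Whisper transcription -> simple retrieval -> feedback from a local LLM via FastChat.
import whisper                      # openai-whisper package
from openai import OpenAI           # FastChat exposes an OpenAI-compatible API

# 1. Speech-to-text: transcribe the student's simulated lesson recording.
asr_model = whisper.load_model("base")
transcript = asr_model.transcribe("simulated_lesson.wav")["text"]

# 2. Retrieval: pick the rubric passages that overlap most with the transcript.
#    (A toy keyword-overlap retriever stands in for a real vector store.)
rubric_passages = [
    "Clear lesson objectives are stated at the start of the class.",
    "Questions are distributed across students and wait time is given.",
    "The lesson closes with a summary and a check for understanding.",
]

def overlap(passage: str, query: str) -> int:
    return len(set(passage.lower().split()) & set(query.lower().split()))

context = "\n".join(
    sorted(rubric_passages, key=lambda p: overlap(p, transcript), reverse=True)[:2]
)

# 3. Generation: a local model served by FastChat produces the feedback.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
reply = client.chat.completions.create(
    model="vicuna-7b-v1.5",  # whichever model the local server actually hosts
    messages=[{
        "role": "user",
        "content": (
            f"Rubric excerpts:\n{context}\n\nLesson transcript:\n{transcript}\n\n"
            "Give brief, constructive feedback on this simulated lesson."
        ),
    }],
)
print(reply.choices[0].message.content)
```

In a real deployment the keyword-overlap step would be replaced by an embedding-based vector store, but the control flow (transcribe, retrieve, generate) stays the same.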
Nat Commun
January 2025
Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China.
Analog in-memory computing (IMC) has demonstrated energy-efficient, low-latency implementations of convolution and fully connected layers in deep neural networks (DNNs) by using physics to compute in parallel resistive memory arrays. However, recurrent neural networks (RNNs), which are widely used for speech recognition and natural language processing, have seen limited success with this approach. This can be attributed to the significant time and energy penalties incurred in implementing the nonlinear activation functions that are abundant in such models.
J Voice
January 2025
Department of Otolaryngology Head and Neck Surgery, Drexel University College of Medicine, Philadelphia, PA.
Introduction: Voice abuse and misuse are the most common causes of benign vocal fold lesions (BVFL). Treatment may include voice therapy, singing sessions, surgical resection, or a combination of these. Otolaryngologists and speech-language pathologists advocate for preoperative as well as postoperative voice therapy.
Am J Speech Lang Pathol
January 2025
School of Communication Sciences & Disorders, Elborn College, Western University, London, Ontario Canada.
Purpose: Cerebral palsy (CP) is the most prevalent motor disability affecting children. Many children with CP have significant speech difficulties and require augmentative and alternative communication (AAC) to participate in communication. Despite demonstrable benefits, the use of AAC systems among children with CP remains constrained, although research in Canada is lacking.
Codas
January 2025
Department of Speech and Hearing, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, Karnataka, India.
Purpose: Investigations into the nature of stuttering present varying views. The debate remains whether stuttering dysfluencies have a motor or a linguistic foundation. Though stuttering is generally considered a speech-motor disorder, linguistic factors are increasingly reported to play a role.