The objective of this study was to determine the usefulness of acoustic tracking as a supplement to perceptual judgments during remediation. One child, receiving weekly individual treatment, participated in a drill-and-practice approach to remediation for a single [w]-for-/r/ substitution error. Imitations of consonant-vowel (CV), VCV, and matched /r/ and /w/ sentence stimuli were audiotape-recorded. The temporal and spectral characteristics of the recorded stimuli (i.e., the F2 transition rates and F2 values for the changing [r]) were spectrographically tracked and analyzed over a 70-day remediative period. The acoustic data were compared with the perceptual judgments of articulatory change. Generally, as the child's productions moved from the [w] toward the [r], the measured F2 values became higher and the F2 transition rates lowered. Results suggest that acoustic tracking of a child's productions may become a useful tool to augment perceptual tracking. Implications are discussed for the applications of acoustic tracking within clinical practice.
DOI: http://dx.doi.org/10.1044/jshd.5404.530
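As a purely illustrative aside, the sketch below shows one way an F2 transition rate of the kind described above could be computed once F2 values have been read off a spectrogram at known time points: a least-squares slope of F2 over the transition, in Hz per millisecond. The time points, formant values, and function name are hypothetical, not data or code from the study.

```python
# Hypothetical illustration, not from the study: estimating an F2 transition rate
# (slope of F2 across the consonant-vowel transition) from measured formant values.
import numpy as np

def f2_transition_rate(times_ms, f2_hz):
    """Least-squares slope of F2 over the transition, in Hz per millisecond."""
    slope, _intercept = np.polyfit(np.asarray(times_ms, float),
                                   np.asarray(f2_hz, float), deg=1)
    return slope

# Made-up measurements for two tokens of the same CV syllable: a [w]-like
# production (low F2 onset, steep rise to the vowel) and a more [r]-like
# production (higher F2 onset, shallower rise).
w_like = f2_transition_rate([0, 20, 40, 60, 80], [800, 1100, 1400, 1700, 2000])
r_like = f2_transition_rate([0, 20, 40, 60, 80], [1400, 1525, 1650, 1775, 1900])
print(f"[w]-like: {w_like:.1f} Hz/ms   [r]-like: {r_like:.1f} Hz/ms")
```

Under this toy arithmetic, the steeper [w]-like glide yields a higher transition rate than the shallower, higher-onset [r]-like glide, which matches the direction of change the abstract reports as production improved.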
Similar Publications

Nat Commun, January 2025. Department of Biomedical Engineering, Duke University, Durham, NC, USA.
By probing biological tissues with light or sound, photoacoustic and ultrasound imaging can provide anatomical, functional, and/or molecular information at depths far beyond the optical diffusion limit. However, most photoacoustic and ultrasound imaging systems rely on linear-array transducers with elevational focusing and are limited to two-dimensional imaging with anisotropic resolutions. Here, we present three-dimensional diffractive acoustic tomography (3D-DAT), which uses an off-the-shelf linear-array transducer with single-slit acoustic diffraction.
Sci Adv, January 2025. Department of Electrical and Computer Engineering, University of Wisconsin-Madison, 3436 Engineering Hall, 1415 Engineering Drive, Madison, WI 53706, USA.
There is a long-existing trade-off between the imaging resolution and penetration depth in acoustic imaging caused by the diffraction limit. Most existing approaches addressing this trade-off require controlled "labels," i.e.
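The trade-off this snippet refers to follows from two standard relationships rather than anything specific to the article: diffraction limits resolution to roughly the acoustic wavelength λ = c/f, while soft-tissue attenuation grows roughly linearly with frequency (on the order of 0.5 dB per cm per MHz), so higher frequencies resolve finer detail but penetrate less. A minimal sketch of that arithmetic, using generic textbook values rather than numbers from the paper:

```python
# Rough illustration of the resolution/penetration trade-off in acoustic imaging.
# Values are generic textbook approximations, not taken from the article.
C_TISSUE_M_S = 1540.0          # speed of sound in soft tissue (m/s)
ATTEN_DB_PER_CM_MHZ = 0.5      # typical soft-tissue attenuation coefficient

def wavelength_mm(freq_mhz):
    """Diffraction-limited resolution scale: wavelength = c / f, in mm."""
    return C_TISSUE_M_S / (freq_mhz * 1e6) * 1e3

def depth_for_budget_cm(freq_mhz, budget_db=60.0):
    """One-way depth at which a given attenuation budget is used up, in cm."""
    return budget_db / (ATTEN_DB_PER_CM_MHZ * freq_mhz)

for f in (2.0, 5.0, 15.0):  # MHz
    print(f"{f:5.1f} MHz: ~{wavelength_mm(f):.2f} mm wavelength, "
          f"~{depth_for_budget_cm(f):.0f} cm usable depth")
```

The printed values are only order-of-magnitude indications of why deep acoustic imaging usually forces coarser resolution.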
Sci Rep, January 2025. RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, Oslo, 0373, Norway.
Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that might be fundamental to temporal prediction and perception. Most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, do not, however, display strict isochrony but are instead quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially via sharp acoustic edges, which serve as prominent temporal landmarks.
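As a minimal sketch of the two quantities this snippet mentions, the code below computes a speech-style amplitude envelope with the Hilbert transform and treats prominent peaks in the envelope's rate of rise as "sharp acoustic edges". The filter cutoff, peak threshold, and toy signal are arbitrary illustrative choices, not parameters from the study.

```python
# Illustration only: amplitude envelope and edge-like onsets of an audio signal.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, find_peaks

def envelope_and_edges(signal, fs, lp_cutoff_hz=10.0):
    """Return the low-pass-filtered amplitude envelope and indices of sharp rises."""
    env = np.abs(hilbert(signal))                          # analytic-signal amplitude
    b, a = butter(4, lp_cutoff_hz / (fs / 2), btype="low")
    env = filtfilt(b, a, env)                              # keep slow envelope modulations
    rise = np.clip(np.gradient(env, 1.0 / fs), 0, None)    # positive slope only
    edges, _ = find_peaks(rise, height=rise.max() * 0.3)   # prominent onsets ("edges")
    return env, edges

# Toy usage: one second of amplitude-modulated noise at 16 kHz standing in for speech.
fs = 16000
t = np.arange(fs) / fs
toy = np.random.randn(fs) * (0.5 + 0.5 * np.signbit(np.sin(2 * np.pi * 4 * t)))
env, edges = envelope_and_edges(toy, fs)
print(f"{len(edges)} edge-like onsets detected")
```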
J Exp Biol, January 2025. Michigan State University, Department of Fisheries and Wildlife, East Lansing, MI, USA.
Efficient navigation is crucial for the reproductive success of many migratory species, often driven by competing pressures to conserve energy and reduce predation risk. Little is known about how non-homing species achieve this balance. We show that sea lamprey (Petromyzon marinus), an ancient extant vertebrate, uses persistent patterns in hydro-geomorphology to quickly and efficiently navigate through complex ecosystems.
J Acoust Soc Am, January 2025. Key Laboratory of Modern Acoustics, Nanjing University, Nanjing 210093, China.
Due to the limited size of the quiet zone created by active headrests (AHR) near the human ear, noise reduction (NR) at the human ear decreases dramatically when the head moves. Combining the AHR with a head tracking system can improve the NR performance when the head moves, but most such studies currently consider only head translation. To improve the robustness when the head translates or rotates, an ear-positioning (EP) system based on a depth camera and a human pose estimation model is presented in this paper and integrated with the AHR.
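To make the geometric step concrete, here is a hypothetical sketch (not the system described in the paper) of what an ear-positioning front end has to do: convert a tracked head pose, translation plus rotation, into left and right ear coordinates that the active headrest controller could re-target. The ear offsets, axes, and example pose are placeholder values.

```python
# Hypothetical sketch: turn a tracked head pose into ear positions for ANC re-targeting.
# Offsets and poses below are placeholders, not values from the article.
import numpy as np

EAR_OFFSETS_M = {"left": np.array([0.0, +0.09, 0.0]),   # ear positions relative to the
                 "right": np.array([0.0, -0.09, 0.0])}  # head centre, head frame (metres)

def rotation_z(yaw_rad):
    """Rotation matrix for a yaw (head turn) about the vertical axis."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def ear_positions(head_centre_m, yaw_rad):
    """Map head translation plus yaw rotation to world-frame ear coordinates."""
    R = rotation_z(yaw_rad)
    return {side: head_centre_m + R @ offset for side, offset in EAR_OFFSETS_M.items()}

# Example: head shifted 5 cm forward and turned 20 degrees.
ears = ear_positions(np.array([0.05, 0.0, 0.0]), np.deg2rad(20.0))
print({side: pos.round(3).tolist() for side, pos in ears.items()})
```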