The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams such as speech sequences, and in particular their fast and relatively slow temporal variations, is unclear. We recorded the cardiac activity of 82 near-term fetuses (38 weeks GA) in quiet sleep during a silent control condition and four 15 s streams presented at 90 dB SPL Leq: two piano melodies with opposite contours, a natural Icelandic sentence, and a chimera of the sentence in which all spectral information was replaced with broadband noise, leaving the sentence's specific temporal variations in amplitude intact but removing all phonological information.
Background: Human hearing develops progressively during the last trimester of gestation. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, and process complex auditory streams. Fetal and neonatal studies show that fetuses can remember frequently recurring sounds.
Maturation of the fetal response to music was characterized over the last trimester of pregnancy using a 5-minute piano recording of Brahms' Lullaby, played at an average of 95, 100, 105, or 110 dB (A). Within 30 seconds of the onset of the music, the youngest fetuses (28-32 weeks GA) showed a heart rate increase limited to the two highest dB levels; over gestation, the threshold level decreased, and a response shift from acceleration to deceleration was observed for the lower dB levels, indicating attention to the stimulus. Over 5 minutes of music, fetuses older than 33 weeks GA showed a sustained increase in heart rate; body movement changes occurred at 35 weeks GA.