People readily extract regularity in rhythmic auditory patterns, enabling prediction of the onset of the next beat. Recent magnetoencephalography (MEG) research suggests that such prediction is reflected by the entrainment of oscillatory networks in the brain to the tempo of the sequence. In particular, induced beta-band oscillatory activity from auditory cortex decreases after each beat onset and rebounds prior to the onset of the next beat across tempi in a predictive manner.
The auditory environment typically contains several sound sources that overlap in time, and the auditory system parses the complex sound wave into streams or voices that represent the various sound sources. Music is also often polyphonic. Interestingly, the main melody (spectral/pitch information) is most often carried by the highest-pitched voice, and the rhythm (temporal foundation) is most often laid down by the lowest-pitched voice.
Previous research suggests that when two streams of pitched tones are presented simultaneously, adults process each stream in a separate memory trace, as reflected by the mismatch negativity (MMN), a component of the event-related potential (ERP). Furthermore, superior encoding of the higher tone or voice in polyphonic sounds has been found for 7-month-old infants and for both musician and non-musician adults, reflected in a larger-amplitude MMN in response to pitch-deviant stimuli in the higher than in the lower voice. These results, in conjunction with modeling work, suggest that the high voice superiority effect might originate in characteristics of the peripheral auditory system.
Natural auditory environments contain multiple simultaneously sounding objects, and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias toward processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) of the higher voice.
Musical enculturation is a complex, multifaceted process that includes the development of perceptual processing specialized for the pitch and rhythmic structures of the musical system in the culture, understanding of esthetic and expressive norms, and learning the pragmatic uses of music in different social situations. Here, we summarize the results of a study in which 6-month-old Western infants were randomly assigned to 6 months of either an active participatory music class or a class in which they experienced music passively while playing. Active music participation resulted in earlier enculturation to Western tonal pitch structure, larger and/or earlier brain responses to musical tones, and a more positive social trajectory.
Infants must learn to make sense of real-world auditory environments containing simultaneous and overlapping sounds. In adults, event-related potential studies have demonstrated the existence of separate preattentive memory traces for concurrent note sequences and revealed perceptual dominance for encoding the voice with the higher fundamental frequency when two tones or melodies are presented simultaneously. Here, we presented two simultaneous streams of notes (15 semitones apart) to 7-month-old infants.
After a brief historical perspective on the relationship between language and music, we review our work on transfer of training from music to speech, which aimed to test the general hypothesis that musicians should be more sensitive than non-musicians to speech sounds. In light of recent results in the literature, we argue that when long-term experience in one domain influences acoustic processing in the other domain, results can be interpreted as reflecting common acoustic processing. But when long-term experience in one domain influences the building of abstract and specific percepts in another domain, results are taken as evidence for transfer-of-training effects.
The aim of this study was to examine the influence of musical expertise in 9-year-old children on passive (as reflected by MMN) and active (as reflected by discrimination accuracy) processing of speech sounds. Musician and nonmusician children were presented with a sequence of syllables that included standards and deviants in vowel frequency, vowel duration, and voice onset time (VOT). Both the passive and the active processing of duration and VOT deviants were enhanced in musician compared with nonmusician children.
The aim of this experiment was two-fold. Our first goal was to determine whether linguistic expertise influences the pre-attentive (as reflected by the mismatch negativity, MMN) and the attentive (as reflected by behavioural discrimination accuracy) processing of non-speech, harmonic sounds. The second was to directly compare the effects of linguistic and musical expertise.
A same-different task was used to test the hypothesis that musical expertise improves the discrimination of tonal and segmental (consonant, vowel) variations in a tone language, Mandarin Chinese. Two four-word sequences (prime and target) were presented to French musicians and nonmusicians unfamiliar with Mandarin, and event-related brain potentials were recorded. Musicians detected both tonal and segmental variations more accurately than nonmusicians.
The present study aimed to examine the influence of musical expertise on the metric and semantic aspects of speech processing. In two attentional conditions (metric and semantic tasks), musicians listened to short sentences ending in trisyllabic words that were semantically and/or metrically congruous or incongruous. Both ERPs and behavioral data were analyzed, and the results were compared with data previously collected from nonmusicians.
The aim of these experiments was to compare conceptual priming for linguistic sounds and for a homogeneous class of nonlinguistic sounds (impact sounds), using both behavioral (percentage of errors and RTs) and electrophysiological (ERPs) measures. Experiment 1 examined the neural basis of impact-sound categorization by creating typical and ambiguous sounds from different material categories (wood, metal, and glass). Ambiguous sounds were associated with slower RTs, a larger N280, smaller P350/P550 components, and a larger negative slow wave than typical impact sounds.
We describe here a late extramedullary ovarian relapse in an 18-year-old female who was diagnosed with hypotetraploid cell acute lymphoblastic leukaemia (cALL) at the age of 6. At both occurrences of the disease, cells were analyzed by morphology, immunophenotyping, cytogenetics, and molecular methods. TEL/AML1 was detected by RT-PCR and FISH analysis in both events.