Statistical learning is an ability that allows individuals to effortlessly extract patterns from the environment, such as sound patterns in speech. Some prior evidence suggests that statistical learning operates more robustly for speech than for non-speech stimuli, supporting the idea that humans are predisposed to learn language. However, any apparent statistical learning advantage for speech could be driven by signal acoustics rather than by the subjective perception of sounds as speech per se. To resolve this issue, the current study assessed whether there is a statistical learning advantage for ambiguous sounds that are subjectively perceived as speech-like compared to the same sounds perceived as non-speech, thereby controlling for acoustic features. We first induced participants to perceive sine-wave speech (SWS), a degraded form of speech not immediately perceptible as speech, as either speech or non-speech. After this induction phase, participants were exposed to a continuous stream of repeating trisyllabic nonsense words composed of SWS syllables, and then completed an explicit familiarity rating task and an implicit target detection task to assess learning. Critically, participants showed robust and equivalent performance on both measures, regardless of their subjective speech perception. In contrast, participants who perceived the SWS syllables as more speech-like showed better detection of individual syllables embedded in the speech streams. These results suggest that speech perception facilitates the processing of individual sounds, but not the ability to extract patterns across sounds. Our findings indicate that statistical learning is not influenced by the perceived linguistic relevance of sounds, and that it may be conceptualized largely as an automatic, stimulus-driven mechanism.
DOI: http://dx.doi.org/10.1016/j.cognition.2023.105649