Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through training. At -3 dB signal-to-noise ratio (SNR), distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible). These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
DOI: http://dx.doi.org/10.1121/1.4977590
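The abstract refers to two standard signal-processing steps: 4-band noise vocoding of the distractor sentences and mixing target and distractor at -3 dB SNR. As a rough illustration of those steps only, and not the authors' actual stimulus pipeline, here is a minimal Python sketch. The band edges (100 Hz to 8 kHz), filter orders, 30 Hz envelope cutoff, and the function names `noise_vocode` and `mix_at_snr` are assumptions introduced for this example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=8000.0):
    """Replace the fine structure of `speech` with noise, band by band,
    keeping only each band's slow amplitude envelope (assumed parameters)."""
    # Logarithmically spaced band edges between f_lo and f_hi.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Band-pass the speech into the current analysis band.
        sos_bp = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos_bp, speech)
        # Amplitude envelope via the Hilbert transform, smoothed below ~30 Hz.
        env = np.abs(hilbert(band))
        sos_lp = butter(4, 30.0, btype="lowpass", fs=fs, output="sos")
        env = sosfiltfilt(sos_lp, env)
        # Modulate band-limited noise with the envelope and accumulate.
        noise = sosfiltfilt(sos_bp, rng.standard_normal(len(speech)))
        noise /= np.sqrt(np.mean(noise**2)) + 1e-12
        out += env * noise
    # Match the overall RMS of the original signal.
    out *= np.sqrt(np.mean(speech**2) / (np.mean(out**2) + 1e-12))
    return out

def mix_at_snr(target, masker, snr_db=-3.0):
    """Scale `masker` so that 10*log10(P_target / P_masker) = snr_db, then sum."""
    p_t = np.mean(target**2)
    p_m = np.mean(masker**2)
    gain = np.sqrt(p_t / (p_m * 10 ** (snr_db / 10.0)))
    return target + gain * masker
```

In this sketch, holding the vocoder parameters fixed before and after training keeps the distractor's acoustic content identical, so any change in interference is attributable to the listener having learned to extract linguistic content from it, which is the manipulation the abstract describes.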