Loopholes offer an opening. Rather than comply or refuse outright, people can subvert a request through intentional misunderstanding. Such behavior exploits ambiguity and under-specification in language.
Human language comprehension is remarkably robust to ill-formed inputs (e.g., word transpositions).
How do polyglots (individuals who speak five or more languages) process their languages, and what can this population tell us about the language system? Using fMRI, we identified the language network in each of 34 polyglots (including 16 hyperpolyglots with knowledge of 10+ languages) and examined its response to the native language, non-native languages of varying proficiency, and unfamiliar languages. All language conditions engaged all areas of the language network relative to a control condition. Languages that participants rated as higher proficiency elicited stronger responses, except for the native language, which elicited a similar or lower response than a non-native language of similar proficiency.
Transformer models such as GPT generate human-like language and are predictive of human brain responses to language. Here, using fMRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of the brain response associated with each sentence. We then use the model to identify new sentences that are predicted to drive or suppress responses in the human language network.
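The abstract gives no implementation detail, so the following is only a minimal sketch of what such an encoding model might look like: regularized linear regression from GPT-derived sentence embeddings to per-sentence fMRI response magnitudes. The synthetic data, feature dimensionality, and choice of RidgeCV are all assumptions for illustration, not the authors' method.

```python
# Minimal sketch (not the published code) of a GPT-based encoding model:
# ridge regression from sentence embeddings to per-sentence fMRI response
# magnitudes. All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins: 1,000 sentences, each with a GPT-derived embedding (e.g., an
# averaged hidden state) and a scalar language-network response magnitude.
embeddings = rng.standard_normal((1000, 768))  # placeholder for GPT features
responses = embeddings @ rng.standard_normal(768) * 0.05 + rng.standard_normal(1000)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, responses, test_size=0.2, random_state=0
)

# Cross-validated ridge regression is a common choice for encoding models.
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.3f}")

# A fitted model can then score new candidate sentences by predicted
# response magnitude, e.g., to select sentences expected to drive or
# suppress the language network.
```

Under this setup, ranking a pool of candidate sentences by predicted magnitude is how one would identify sentences expected to drive or suppress network responses, as the abstract describes.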
What constitutes a language? Natural languages share features with other domains, from math to music to gesture. However, the brain mechanisms that process linguistic input are highly specialized, showing little response to diverse non-linguistic tasks. Here, we examine constructed languages (conlangs) to ask whether they draw on the same neural mechanisms as natural languages, or whether they instead pattern with domains like math and programming languages.