Publications by authors named "Jennie E Pyers"

Across sign languages, nouns can be derived from verbs through morphophonological changes in movement by (1) movement reduplication and size reduction or (2) size reduction alone. We asked whether these cross-linguistic similarities arise from cognitive biases in how humans construe objects and actions. We tested nonsigners' sensitivity to differences in noun-verb pairs in American Sign Language (ASL) by asking MTurk workers to match images of actions and objects to videos of ASL noun-verb pairs.

People frequently gesture when a word is on the tip of their tongue (TOT), yet research is mixed as to whether and why gesture aids lexical retrieval. We tested three accounts: the lexical retrieval hypothesis, which predicts that semantically related gestures facilitate successful lexical retrieval; the cognitive load account, which predicts that matching gestures facilitate lexical retrieval only when retrieval is hard, as in the case of a TOT; and the motor movement account, which predicts that any motor movements should support lexical retrieval. In Experiment 1 (a between-subjects study; N = 90), gesture inhibition, but not neck inhibition, affected TOT resolution but not overall lexical retrieval; participants in the gesture-inhibited condition resolved fewer TOTs than participants who were allowed to gesture.

Vocabulary is a critical early marker of language development. The MacArthur-Bates Communicative Development Inventory has been adapted to dozens of languages and provides a bird's-eye view of children's early vocabularies, which can be informative for both research and clinical purposes. We present an update to the American Sign Language Communicative Development Inventory (the ASL-CDI 2.0).

Lexical iconicity (signs or words that resemble their meaning) is overrepresented in children's early vocabularies. Embodied theories of language acquisition predict that symbols are more learnable when they are grounded in a child's firsthand experiences. As such, pantomimic iconic signs, which use the signer's body to represent a body, might be more readily learned than other types of iconic signs.

Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition.

Iconicity is prevalent in gesture and in sign languages, yet the degree to which children recognize and leverage iconicity for early language learning is unclear. In Experiment 1 of the current study, we presented sign-naïve 3-, 4-, and 5-year-olds (n = 87) with iconic shape gestures and no additional scaffolding to ask whether children can spontaneously map iconic gestures to their referents. Four- and five-year-olds, but not three-year-olds, recognized the referents of iconic shape gestures above chance.

Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's.

Although spatial language and spatial cognition covary over development and across languages, determining the causal direction of this relationship presents a challenge. Here we show that mature human spatial cognition depends on the acquisition of specific aspects of spatial language. We tested two cohorts of deaf signers who acquired an emerging sign language in Nicaragua at the same age but during different time periods: the first cohort of signers acquired the language in its infancy, and 10 years later the second cohort of signers acquired the language in a more complex form.

Developmental studies have identified a strong correlation in the timing of language development and false-belief understanding. However, the nature of this relationship remains unresolved. Does language promote false-belief understanding, or does it merely facilitate development that could occur independently, albeit on a delayed timescale? We examined language development and false-belief understanding in deaf learners of an emerging sign language in Nicaragua.

Bilinguals report more tip-of-the-tongue (TOT) failures than monolinguals. Three accounts of this disadvantage are that bilinguals experience between-language interference at (a) semantic and/or (b) phonological levels, or (c) that bilinguals use each language less frequently than monolinguals. Bilinguals who speak one language and sign another help adjudicate between these alternatives because their two languages lack phonological overlap.

Bilinguals often outperform monolinguals on nonverbal tasks that require resolving conflict from competing alternatives. The regular need to select a target language is argued to enhance executive control. We investigated whether this enhancement stems from a general effect of bilingualism (the representation of two languages) or from a modality constraint that forces language selection.

Bimodal bilinguals, fluent in a signed and a spoken language, provide unique insight into the nature of syntactic integration and language control. We investigated whether bimodal bilinguals who are conversing with English monolinguals produce American Sign Language (ASL) grammatical facial expressions to accompany parallel syntactic structures in spoken English. In ASL, raised eyebrows mark conditionals, and furrowed eyebrows mark wh-questions; the grammatical brow movement is synchronized with the manual onset of the clause.
