Language is unbounded in its generativity, enabling the flexible combination of words into novel sentences. Critically, these constructions are intelligible to others due to our ability to derive a sentence's compositional meaning from the semantic relationships among its components. Some animals also concatenate meaningful calls into compositional-like combinations to communicate more complex information.
This experimental study investigated whether infants use iconicity in speech and gesture cues to interpret word meanings. Specifically, we tested infants' sensitivity to size sound symbolism and iconic gesture cues and asked whether combining these cues in a multimodal fashion would enhance infants' sensitivity in a superadditive manner. Thirty-six 14-17-month-old infants participated in a preferential looking task in which they heard a spoken nonword.
Speech comprehension is crucial for human social interaction, relying on the integration of auditory and visual cues across various levels of representation. While research has extensively studied multisensory integration (MSI) using idealised, well-controlled stimuli, there is a need to understand this process in response to complex, naturalistic stimuli encountered in everyday life. This study investigated behavioural and neural MSI in neurotypical adults experiencing audio-visual speech within a naturalistic, social context.
Previous research has shown a strong positive association between right-handed gesturing and vocabulary development. However, the causal nature of this relationship remains unclear. In the current study, we tested whether gesturing with the right hand enhances linguistic processing in the left hemisphere, which is contralateral to the right hand.
A considerable body of research has documented the emergence of what appears to be instrumental helping behavior in early childhood. The current study tested the hypothesis that one basic psychological mechanism motivating this behavior is a preference for completing unfinished actions. To test this, a paradigm was implemented in which 2-year-olds (n = 34, 16 females/18 males, mostly White middle-class children) could continue an adult's action when the adult no longer wanted to complete the action.
Reports an error in "Prior experience with unlabeled actions promotes 3-year-old children's verb learning" by Suzanne Aussems, Katherine H. Mumford and Sotaro Kita (Advanced Online Publication, Jul 15, 2021, np). In the original article, acknowledgment of and formatting for Economic and Social Research Council funding was omitted.
J Exp Psychol Gen, January 2022
[Correction Notice: An Erratum for this article was reported online on Jan 6 2022 (see record 2022-20753-001). In the original article, acknowledgment of and formatting for Economic and Social Research Council funding was omitted. The author note and copyright line now reflect the standard acknowledgment of and formatting for the funding received for this article.]
Two-year-olds typically extend labels of novel objects by the objects' shape, whereas adults do so by the objects' function. Is this because shape is conceptually easier to comprehend than function? To test whether the conceptual complexity of function prevents infants from developing a function bias, we trained twelve 17-month-olds (function-training group) to focus on objects' functions when labeling the objects over a period of 7 weeks. Our training was similar to previously used methods in which 17-month-olds were successfully taught to focus on the shape of objects, resulting in a precocious shape bias.
J Exp Child Psychol, September 2021
Previous research has established that goal tracking emerges early in the first year of life and rapidly becomes increasingly sophisticated. However, it has not yet been shown whether young children continue to update their representations of others' goals over time. The current study investigated this by probing young children's (24- to 30-month-olds; N = 24) ability to differentiate between goal-directed actions that have been halted because the goal was interrupted and those that have been halted because the goal was abandoned.
This study investigated whether seeing iconic gestures depicting verb referents promotes two types of generalization. We taught 3- to 4-year-olds novel locomotion verbs. Children who saw iconic manner gestures during training generalized more verbs to novel events (first-order generalization) than children who saw interactive gestures (Experiment 1, N = 48; Experiment 2, N = 48) and path-tracing gestures (Experiment 3, N = 48).
This paper demonstrates a new quantitative approach to examine cross-linguistically shared and language-specific sound symbolism. Unlike most previous studies taking a hypothesis-testing approach, we employed a data mining approach to uncover unknown sound-symbolic correspondences in the domain of locomotion, without limiting ourselves to pre-determined sound-meaning correspondences. In the experiment, we presented 70 locomotion videos to Japanese and English speakers and asked them to create a sound symbolically matching word for each action.
An experiment with 72 three-year-olds investigated whether encoding events while seeing iconic gestures boosts children's memory representation of these events. The events, shown in videos of actors moving in an unusual manner, were presented with either iconic gestures depicting how the actors performed these actions, interactive gestures, or no gesture. In a recognition memory task, children in the iconic gesture condition remembered actors and actions better than children in the control conditions.
Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture.
This study examined spatial story representations created by speakers' cohesive gestures. Participants were presented with a three-sentence discourse with two protagonists. In the first and second sentences, gestures consistently located the two protagonists in the gesture space: one to the right and the other to the left.
People spontaneously produce gestures during speaking and thinking. The authors focus here on gestures that depict or indicate information related to the contents of concurrent speech or thought.
J Exp Psychol Learn Mem Cogn, June 2017
Research suggests that speech-accompanying gestures influence cognitive processes, but it is not clear whether the gestural benefit is specific to the gesturing hand. Two experiments tested the "(right/left) hand-specificity" hypothesis for self-oriented functions of gestures: gestures with a particular hand enhance cognitive processes involving the hemisphere contralateral to the gesturing hand. Specifically, we tested whether left-hand gestures enhance metaphor explanation, which involves right-hemispheric processing.
Previous research has shown that children aged 4-5 years, but not 2-3 years, show adult-like interference from a partner when performing a joint task (Milward, Kita, & Apperly, 2014). This raises questions about the cognitive skills involved in the development of such "corepresentation (CR)" of a partner (Sebanz, Knoblich, & Prinz, 2003). Here, individual differences data from one hundred and thirteen 4- to 5-year-olds showed theory of mind (ToM) and inhibitory control (IC) as predictors of the ability to avoid CR interference. This suggests that children with better ToM abilities are more likely to succeed in decoupling self and other representations in a joint task, while better IC likely helps children avoid interference from a partner's response when selecting their own response on the task.
J Exp Psychol Learn Mem Cogn, February 2016
People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments 1 and 2). This suggests that the 2 types of gestures are generated from the same process.
We examined whether children's ability to integrate speech and gesture follows the pattern of a broader developmental shift between 3- and 5-year-old children (Ramscar & Gitcho, 2007) regarding the ability to process two pieces of information simultaneously. In Experiment 1, 3-year-olds, 5-year-olds, and adults were presented with an iconic gesture, a spoken sentence, or a combination of the two on a computer screen, and they were instructed to select a photograph that best matched the message. The 3-year-olds did not integrate information in speech and gesture, but 5-year-olds and adults did.
Sound symbolism, or the nonarbitrary link between linguistic sound and meaning, has often been discussed in connection with language evolution, where the oral imitation of external events links phonetic forms with their referents (e.g., Ramachandran & Hubbard, 2001).
A fundamental question in language development is how infants start to assign meaning to words. Here, using three electroencephalogram (EEG)-based measures of brain activity, we establish that preverbal 11-month-old infants are sensitive to the non-arbitrary correspondences between language sounds and concepts, that is, to sound symbolism. In each trial, infant participants were presented with a visual stimulus.
Research on the neural basis of metaphor provides contradictory evidence about the roles of the right and left hemispheres. We used the mouth-opening asymmetry technique to investigate the relative involvement of the two hemispheres whilst right-handed healthy male participants explained the meaning of English phrases. This technique is based on the contralateral cortical control of the facial musculature and reflects the relative hemispheric involvement during different cognitive tasks.
Philos Trans R Soc Lond B Biol Sci, September 2014
Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language and, most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem.
Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children's learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information.
When two adults jointly perform a task, they often show interference effects whereby the other's task interferes with their own performance (Sebanz, Knoblich, & Prinz, 2003). The current study investigated whether these co-representation effects can be observed in young children. This phenomenon can be used as a criterion for adult-like joint action in children, which has been under debate in the existing literature due to the difficulty in identifying what mechanisms underlie the behaviours observed (Brownell, 2011).