Publications by authors named "Gabriella Vigliocco"

Language is acquired and processed in complex and dynamic naturalistic contexts, involving the simultaneous processing of connected speech, faces, bodies, objects, etc. How words and their associated concepts are encoded in the brain during real-world processing is still unknown. Here, the representational structure of concrete and abstract concepts was investigated during movie watching to address the extent to which brain responses dynamically change depending on visual context.

In face-to-face contexts, discourse is accompanied by various cues, like gestures and mouth movements. Here, we asked whether the presence of gestures and mouth movements benefits discourse comprehension under clear and challenging listening conditions and, if so, whether this multimodal benefit depends on the communicative environment in which interlocutors are situated. In two online experiments, participants watched videoclips of a speaker telling stories, and they answered yes-no questions about the content of each story.

Article Synopsis
  • Tulving defined semantic memory as a large storehouse of meanings crucial for language and cognition, prompting various fields to research it with unique methods and terms.
  • The varied interpretations of key concepts like "concept" across disciplines create confusion, contributing to the replication crisis in psychology and impacting communication and theory development.
  • To address these issues, a multidisciplinary semantic glossary is being developed to provide clear definitions and foster shared understanding among researchers while acknowledging the challenges of bias and prescriptiveness.
The ecology of human communication is face to face. In these contexts, speakers dynamically modify their communication across vocal (e.g. …).

Most language use is displaced, referring to past, future, or hypothetical events, posing the challenge of how children learn what words refer to when the referent is not physically available. One possibility is that iconic cues that imagistically evoke properties of absent referents support learning when referents are displaced. In an audio-visual corpus of caregiver-child dyads, English-speaking caregivers interacted with their children (N = 71, 24-58 months) in contexts in which the objects talked about were either familiar or unfamiliar to the child, and either physically present or displaced.

Article Synopsis
  • In face-to-face communication, multimodal cues like prosody, gestures, and mouth movements help both native (L1) and non-native (L2) language processing, but their effects on L2 comprehension are less understood.
  • The study measured the impact of these multimodal cues on L2 comprehenders by analyzing their brain responses to language while watching videos, finding that these cues can facilitate comprehension but are used less effectively by L2 learners than by L1 speakers.
  • Results indicated that while L2 comprehenders benefitted from meaningful gestures and informative mouth movements, they overall relied on multimodal cues to a lesser extent than L1 comprehenders, who processed all types of cues more efficiently.

Iconicity refers to a resemblance between word form and meaning. Previous work has shown that iconic words are learned earlier and processed faster. Here, we examined whether iconic words are recognized better on a recognition memory task.

Theories of embodied cognition postulate that perceptual, sensorimotor, and affective properties of concepts support language learning and processing. In this paper, we argue that language acquisition, as well as processing, is situated in addition to being embodied. In particular, first, it is the situated nature of initial language development that allows the developing system to become embodied.

Mouth and facial movements are part and parcel of face-to-face communication. The primary way of assessing their role in speech perception has been by manipulating their presence (e.g. …).

Aphasia is a language disorder that often involves speech comprehension impairments affecting communication. In face-to-face settings, speech is accompanied by mouth and facial movements, but little is known about the extent to which they benefit aphasic comprehension. This study investigated the benefit of visual information accompanying speech for word comprehension in people with aphasia (PWA) and the neuroanatomic substrates of any benefit.

Learning in humans is highly embedded in social interaction: since the very early stages of our lives, we form memories and acquire knowledge about the world from and with others. Yet, within cognitive science and neuroscience, human learning is mainly studied in isolation. The focus of past research in learning has been either exclusively on the learner or (less often) on the teacher, with the primary aim of determining developmental trajectories and/or effective teaching techniques.

Article Synopsis
  • Recent research challenges the idea that language is purely arbitrary, showing that it can iconically represent the objects it refers to, exemplified by the maluma/takete effect, which connects certain sounds to shapes.
  • The study explored whether the maluma/takete effect is driven by visual aspects of speech (unimodal) or purely auditory attributes (crossmodal) by having participants pair made-up words with shapes under different conditions.
  • Findings revealed that seeing the pronunciation of nonwords didn't enhance the effect; rather, it sometimes diminished it, suggesting that the maluma/takete effect likely stems from crossmodal associations rather than visual matching.

Child-directed language can support language learning, but how? We addressed two questions: (1) how caregivers prosodically modulated their speech as a function of word familiarity (known or unknown to the child) and accessibility of the referent (visually present or absent from the immediate environment); (2) whether such modulations affect children's unknown word learning and vocabulary development. We used data from 38 English-speaking caregivers (from the ECOLANG corpus) talking about toys (both known and unknown to their children aged 3-4 years) both when the toys are present and when absent. We analyzed prosodic dimensions (i.e. …).

Iconicity is the property whereby signs (vocal or manual) resemble their referents. Iconic signs are easy to relate to the world, facilitating learning and processing. In this study, we examined whether the benefits of iconicity would lead to its emergence and maintenance in language.

Article Synopsis
  • Scientists studied how our brains understand both concrete things (like objects) and abstract ideas (like emotions).
  • They found specific parts of the left side of the brain react differently to these categories, showing that our brains have special places for each type.
  • The results indicate that just like concrete concepts, some abstract ideas also have their own brain areas that help us understand them better.

Human face-to-face communication is multimodal: it comprises speech as well as visual cues, such as articulatory and limb gestures. In the current study, we assess how iconic gestures and mouth movements influence audiovisual word recognition. We presented video clips of an actress uttering single words accompanied, or not, by more or less informative iconic gestures.

Human learning is highly social. Advances in technology have increasingly moved learning online, and the recent coronavirus disease 2019 (COVID-19) pandemic has accelerated this trend. Online learning can vary in terms of how "socially" the material is presented (e.g. …).

In the last decade, a growing body of work has convincingly demonstrated that languages embed a certain degree of non-arbitrariness (mostly in the form of iconicity, namely the presence of imagistic links between linguistic form and meaning). Most of this previous work has been limited to assessing the degree (and role) of non-arbitrariness in the speech (for spoken languages) or manual components of signs (for sign languages). When approached in this way, non-arbitrariness is acknowledged but still considered to have little presence and purpose, showing a diachronic movement towards more arbitrary forms.

The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures and mouth movements. Yet, the multimodal context is usually stripped away in experiments as dominant paradigms focus on linguistic processing only. In two studies we presented video-clips of an actress producing naturalistic passages to participants while recording their electroencephalogram.

We investigated the neural basis of newly learned words in Spanish as a mother tongue (L1) and English as a second language (L2). Participants acquired new names for real but unfamiliar concepts in both languages over the course of two days. On day 3, they completed a semantic categorization task during fMRI scanning.

A key question in developmental research concerns how children learn associations between words and meanings in their early language development. Given a vast array of possible referents, how does the child know what a word refers to? We contend that onomatopoeia (e.g. …).

Hand gestures, imagistically related to the content of speech, are ubiquitous in face-to-face communication. Here we used lesion-symptom mapping to investigate how people with aphasia (PWA) process speech accompanied by gestures. Twenty-nine PWA and 15 matched controls were shown a picture of an object/action and then a video-clip of a speaker producing speech and/or gestures in one of the following combinations: speech-only, gesture-only, congruent speech-gesture, and incongruent speech-gesture.

A recent study by Ponari, Norbury, and Vigliocco (2018) showed that emotional valence (i.e. whether a word evokes positive, negative, or no affect) predicts age-of-acquisition ratings and that up to the age of 8-9, children know abstract emotional words better than neutral ones.
