In this manuscript we provide a commentary and a complementary analysis of Cirillo et al.'s (2022) study on conceptual alignment in a joint picture-naming task involving a social robot (Cognition, 227, 105213). In their study, Cirillo and collaborators present evidence suggesting automatic alignment by examining response proportions, which reflect adaptation to the lexical choices made by the artificial agent (i.e., providing category names instead of basic names for specific semantic categories). Here, we conducted a complementary analysis of the openly available dataset, employing a multiverse approach and focusing on response times (RTs) as a more nuanced measure of cognitive processing and automaticity. Our findings indicate that alignment in the Category condition (i.e., when the robot provided a superordinate label) is associated with longer RTs and greater variability. When the basic label is provided in the Basic condition, RTs are much shorter and variability is reduced, consistent with the basic-level advantage phenomenon. Non-alignment completely reverses this pattern in each condition. This suggests that aligning when producing a superordinate label is a strategic and effortful rather than an automatic response mechanism. Furthermore, through a comprehensive visual exploration of response proportions across potentially influential variables, we observed that category-naming alignment emerged primarily in specific semantic categories, and mostly for stimuli whose basic labels have low lexical frequency and for newly designed pictures not taken from the MultiPic database, suggesting limited generalizability of the effect. These insights were confirmed using leave-one-out robustness checks.
In conclusion, our contribution provides complementary evidence in support of strategic rather than automatic responses when aligning with Category labels in the analyzed dataset, with limited generalizability despite the balancing procedures the authors carefully implemented in the experimental materials. This pattern likely reflects individual task strategies rather than genuine alignment. Lastly, we suggest directions for future research on linguistic alignment, building on insights from both Cirillo et al.'s study and our commentary. We also briefly discuss the Open Science principles that shaped our approach to this work.
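The leave-one-out robustness checks mentioned above can be illustrated with a minimal sketch: recompute the alignment proportion while holding out each semantic category in turn, so that a large shift flags a category as driving the overall effect. The toy data and category names below are hypothetical illustrations, not the actual dataset analyzed in the commentary.

```python
# Minimal sketch of a leave-one-out robustness check over semantic
# categories. The trials below are hypothetical; the commentary used
# the openly available dataset of Cirillo et al. (2022).

from statistics import mean

# Hypothetical trials: (semantic_category, aligned_flag) pairs,
# where aligned_flag is 1 if the participant aligned on that trial.
trials = [
    ("animals", 1), ("animals", 1), ("animals", 0),
    ("tools", 0), ("tools", 0), ("tools", 1),
    ("vehicles", 1), ("vehicles", 1), ("vehicles", 1),
]

# Overall alignment proportion across all trials.
overall = mean(flag for _, flag in trials)

# Recompute the proportion with each category held out; a proportion
# that moves far from the overall value indicates the held-out
# category contributes disproportionately to the effect.
for held_out in sorted({cat for cat, _ in trials}):
    subset = [flag for cat, flag in trials if cat != held_out]
    print(f"without {held_out}: {mean(subset):.2f} (overall {overall:.2f})")
```

In this toy example, dropping a category where alignment is near-ceiling (e.g., the hypothetical "vehicles") lowers the remaining proportion noticeably, which is the signature of an effect carried by specific categories rather than a general one.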
DOI: http://dx.doi.org/10.1016/j.cognition.2025.106099