The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioral research, as its flexibility allows for a wide range of applications. This method has not been as widely adopted in psycholinguistics, however, possibly due to the assumption that language processing during human-computer interactions does not accurately reflect human-human interactions. Yet there is a growing need to study human-human language interactions in a tightly controlled context, which has not been possible with existing methods. VR offers experimental control over parameters that cannot be controlled (or not as finely) in the real world. In this study we therefore aim to show that human-computer language interaction in virtual reality is comparable to human-human language interaction. We compared participants' language behavior in a syntactic priming task with human versus computer partners: a human partner, a human-like avatar with human-like facial expressions and verbal behavior, and a computer-like avatar with this humanness removed. As predicted, priming effects were comparable for the human and the human-like avatar, suggesting that participants attributed human-like agency to the human-like avatar; when interacting with the computer-like avatar, the priming effect was significantly decreased. This suggests that sentence processing when interacting with a human-like avatar is comparable to interacting with a human partner. Our study therefore shows that VR is a valid platform for conducting language research and studying dialogue interactions in an ecologically valid manner.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5352801
DOI: http://dx.doi.org/10.3758/s13428-015-0688-7
Plast Reconstr Surg Glob Open
October 2024
Division of Plastic & Reconstructive Surgery, Yale School of Medicine, New Haven, Conn.
Background: Recent advancements in artificial intelligence (AI) have reshaped telehealth, with AI chatbots such as Chat Generative Pretrained Transformer (ChatGPT) showing promise in various medical applications. ChatGPT can offer basic patient education on procedures in plastic and reconstructive surgery (PRS), yet the preference between human AI VideoBots and traditional chatbots in PRS remains unexplored.
Methods: We developed a VideoBot by integrating ChatGPT with Synthesia, a human AI avatar video platform.
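The abstract does not give implementation details, but the integration it describes can be sketched at a high level. The following is a minimal, hypothetical Python sketch, not the authors' code: it assumes an OpenAI-style chat API for generating a patient-education script and an HTTP endpoint for the avatar-video service; the Synthesia URL, payload fields, avatar ID, and model name are illustrative assumptions only.

```python
# Minimal, hypothetical sketch of a ChatGPT-to-avatar-video pipeline.
# Not the authors' implementation; endpoint, payload fields, avatar ID,
# and model name are assumptions for illustration.
import os

import requests
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def patient_education_script(question: str) -> str:
    """Ask the chat model for a short, plain-language answer to a PRS question."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a patient-education assistant for plastic and "
                    "reconstructive surgery. Answer briefly and in plain language."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


def request_avatar_video(script: str) -> dict:
    """Submit the generated script to an avatar-video service (assumed API shape)."""
    resp = requests.post(
        "https://api.synthesia.io/v2/videos",  # assumed endpoint
        headers={"Authorization": os.environ["SYNTHESIA_API_KEY"]},
        json={
            "test": True,
            "input": [
                {
                    "scriptText": script,
                    "avatar": "anna_costume1_cameraA",  # assumed avatar ID
                }
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    script = patient_education_script("What should I expect after rhinoplasty?")
    print(request_avatar_video(script))
```

In a comparison like the one described, generating the text once and then rendering it either as plain chat output or as an avatar video would keep the informational content identical across the chatbot and VideoBot conditions.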
J Med Internet Res
July 2024
Faculty of Information Technology, Data Future Institutes, Monash University, Clayton, Australia.
Background: The rising prevalence of noncommunicable diseases (NCDs) worldwide and the high recent mortality rates (74.4%) associated with them, especially in low- and middle-income countries, are causing a substantial global burden of disease, necessitating innovative and sustainable long-term care solutions.
Objective: This scoping review aims to investigate the impact of artificial intelligence (AI)-based conversational agents (CAs), including chatbots, voicebots, and anthropomorphic digital avatars, as human-like health caregivers in the remote management of NCDs; to identify critical areas for future research; and to provide insights into how these technologies can be used effectively in health care to personalize NCD management strategies.
Digit Health
June 2024
School of Literature and Media, China Three Gorges University, Yichang, Hubei, China.
Objective: This study aimed to assess the usability of intelligent guidance chatbots (IGCs) in Chinese hospitals.
Methods: A cross-sectional study based on an expert survey was conducted between August and December 2023. The survey assessed the usability of IGCs in 590 Chinese hospitals.
PeerJ Comput Sci
March 2024
Interaction Science Laboratories, ATR, Seika-cho, Kyoto, Japan.
Recent advancements in tele-operated avatars, both on-screen and robotic, have expanded opportunities for human interaction that exceed spatial and physical limitations. While numerous studies have enhanced operator control and improved the impression left on remote users, one area remains underexplored: the experience of operators during touch interactions between an avatar and a remote interlocutor. Touch interactions have become commonplace with avatars, especially those displayed on or integrated with touchscreen interfaces.
Oxf Open Neurosci
December 2023
Centre for Brain and Cognitive Development, Birkbeck, University of London, Malet St, London, WC1E 7HX, UK.
A child's social world is complex and rich, but has traditionally been assessed with conventional experiments in which children are presented with repeated stimuli on a screen. These assessments are impoverished relative to the dynamics of social interactions in real life and can be challenging to implement with preschoolers, who struggle to comply with strict lab rules. The current work meets the need for new platforms to assess preschoolers' social development by presenting a unique virtual-reality set-up combined with wearable functional near-infrared spectroscopy (fNIRS).