The question of whether artificial intelligence (AI) can be considered conscious, and therefore should be evaluated through a moral lens, has surfaced in recent years. In this paper, we argue that whether AI is conscious matters less than the fact that users can consider AI conscious during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, how people treat AI appears to carry over into how they treat other people, because such interactions activate schemas congruent with those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, though not because AI is inherently sentient. This argument focuses on humanlike, social-actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we draw on literature from human-computer interaction, human-AI interaction, and the psychology of artificial agents to provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans. In the second part, we detail how the mechanism of schema activation allows us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, and thereby activating congruent mind schemas during interaction, drives behaviors and perceptions of AI that can carry over into how we treat humans. The fact that people can ascribe humanlike consciousness to AI is therefore worth considering, as is moral protection for AI, regardless of AI's inherent conscious or moral status.
Download full-text PDF

| Source | Link |
|---|---|
| PMC | http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11008604 |
| DOI Listing | http://dx.doi.org/10.3389/fpsyg.2024.1322781 |
NPJ Digit Med
December 2024
Institute of Health Informatics, University College London, London, UK.
Automated clinical coding (ACC) has emerged as a promising alternative to manual coding. This study proposes a novel human-in-the-loop (HITL) framework, CliniCoCo. Using deep learning capabilities, CliniCoCo focuses on how such ACC systems and human coders can work together effectively and efficiently in real-world settings.
Nat Hum Behav
December 2024
Affective Brain Lab, Department of Experimental Psychology, University College London, London, UK.
Artificial intelligence (AI) technologies are rapidly advancing, enhancing human capabilities across various fields spanning from finance to medicine. Despite their numerous advantages, AI systems can exhibit biased judgements in domains ranging from perception to emotion. Here, in a series of experiments (n = 1,401 participants), we reveal a feedback loop where human-AI interactions alter processes underlying human perceptual, emotional and social judgements, subsequently amplifying biases in humans.
J Thorac Dis
November 2024
Cardiothoracic Surgery Department, The Prince Charles Hospital, Chermside, Australia.
This paper explores the potential of artificial intelligence (AI) in lung cancer screening programs, particularly in the interpretation of computed tomography (CT) scans. The authors acknowledge the benefits of AI, including faster and potentially more accurate analysis of scans, but also raise concerns about clinician trust, transparency, and the deskilling of radiologists due to decreased scan exposure. Because AI in medicine and national lung cancer screening programs are both expanding at the same time, their overlap and interplay are certain to grow in the future.
PeerJ Comput Sci
October 2024
Information Technology Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia.
This article presents a usability evaluation and comparison of generative AI applications through the analysis of user reviews from popular digital marketplaces, specifically Apple's App Store and Google Play. The study aims to bridge the research gap in real-world usability assessments of generative AI tools. A total of 11,549 reviews were extracted and analyzed from January to March 2024 for five generative AI apps: ChatGPT, Bing AI, Microsoft Copilot, Gemini AI, and Da Vinci AI.
J Biomed Inform
December 2024
School of Medicine and Health Management, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China.
Objectives: Post-discharge follow-up stands as a critical component of post-diagnosis management, and the constraints of healthcare resources impede comprehensive manual follow-up. However, patients are less cooperative with AI follow-up calls, and may even hang up once they perceive that the caller is an AI voice robot. To improve the effectiveness of follow-up, alternative measures should be taken when patients perceive AI voice robots.