Human attention biases toward moral and emotional information are as prevalent online as they are offline. When these biases interact with content algorithms that curate social media users' news feeds to maximize attentional capture, moral and emotional information is privileged in the online information ecosystem. We review evidence for these human-algorithm interactions and argue that misinformation exploits this process to spread online. This framework suggests that combating misinformation most effectively requires a dual-pronged approach combining person-centered and design-centered interventions. We suggest several avenues for research in the psychological study of misinformation sharing under a framework of human-algorithm interaction.
DOI: http://dx.doi.org/10.1016/j.copsyc.2023.101770
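To make the mechanism in the abstract concrete, here is a minimal toy sketch (not a model from the article) of engagement-maximizing feed curation, in which a human attentional bias toward moral-emotional content translates into a ranking advantage. The `Post` structure, the scores, and `bias_weight` are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    moral_emotional_score: float  # 0..1, e.g., from a moral-emotional lexicon
    base_quality: float           # 0..1, engagement unrelated to moral-emotional content

def predicted_engagement(post: Post, bias_weight: float = 2.0) -> float:
    """Toy engagement model: attention biases make moral-emotional content
    disproportionately engaging, so an objective of maximizing attentional
    capture implicitly up-weights it."""
    return post.base_quality + bias_weight * post.moral_emotional_score

def curate_feed(posts: list[Post], k: int = 10) -> list[Post]:
    """Engagement-maximizing curation: sort by predicted engagement and
    surface the top k, privileging moral-emotional posts in the feed."""
    return sorted(posts, key=predicted_engagement, reverse=True)[:k]

# Two posts of equal intrinsic quality; the moral-emotional one ranks first.
feed = curate_feed([
    Post("Outrage at policy X!", moral_emotional_score=0.9, base_quality=0.5),
    Post("Quarterly budget report", moral_emotional_score=0.1, base_quality=0.5),
], k=2)
print([p.text for p in feed])
```

Under this toy objective, a post's moral-emotional loading can outweigh its substantive quality, which is the dynamic the abstract argues misinformation exploits.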
Forensic Sci Int
November 2024
Defence Science and Technology Group, PO Box 1500, Edinburgh, SA 5111, Australia.
Facial recognition plays a vital role in several security and law enforcement workflows, such as passport control and criminal investigations. In a typical identification process, a facial recognition system compares a probe image against a large database of faces and returns a list of probable matches, called a candidate list, for review. A human reviewer then examines the returned images to determine whether any of them is a match.
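The candidate-list step described above can be sketched as a nearest-neighbor search over face embeddings. This is a minimal illustration assuming a generic embedding model; the gallery format, embedding dimension, and function names are assumptions for illustration, not details from the article.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (higher = more alike)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def candidate_list(probe: np.ndarray,
                   gallery: dict[str, np.ndarray],
                   k: int = 10) -> list[tuple[str, float]]:
    """Compare a probe embedding against a gallery of enrolled faces and
    return the top-k most similar identities for human review."""
    scored = [(identity, cosine_similarity(probe, emb))
              for identity, emb in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Example with random vectors standing in for a real face-embedding model.
rng = np.random.default_rng(0)
gallery = {f"subject_{i}": rng.normal(size=128) for i in range(1000)}
probe = gallery["subject_42"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
for identity, score in candidate_list(probe, gallery, k=5):
    print(identity, round(score, 3))
```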
Cogn Res Princ Implic
June 2024
Psychology, Faculty of Natural Sciences, University of Stirling, Stirling, Scotland, UK.
The human face is commonly used for identity verification. While this task was once performed exclusively by humans, technological advances have seen automated facial recognition systems (AFRS) integrated into many identification scenarios. Although many state-of-the-art AFRS are exceptionally accurate, they often require human oversight or involvement, such that a human operator makes the final decision.
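As a hedged sketch of the human-oversight pattern mentioned here (the thresholds and names are illustrative assumptions, not the AFRS studied in the article), an automated system can auto-resolve clear 1:1 comparisons and defer borderline ones to a human operator, who makes the final decision:

```python
def afrs_verify(similarity: float,
                accept: float = 0.90,
                reject: float = 0.60) -> str:
    """1:1 verification with human oversight (thresholds are illustrative):
    clear cases are flagged for confirmation, borderline cases are deferred
    so that a human operator makes the final decision."""
    if similarity >= accept:
        return "match (pending operator confirmation)"
    if similarity <= reject:
        return "non-match (pending operator confirmation)"
    return "refer to human operator"

print(afrs_verify(0.95))  # clear match
print(afrs_verify(0.75))  # borderline -> human decides
```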
Trends Cogn Sci
October 2023
Princeton University, Department of Psychology, Princeton, NJ, USA; Princeton University, University Center for Human Values, Princeton, NJ, USA.
Human social learning increasingly occurs on online social platforms such as Twitter, Facebook, and TikTok. On these platforms, algorithms exploit existing social-learning biases.
Perspect Psychol Sci
September 2024
Stanford Internet Observatory, Stanford University.
Most content consumed online is curated by proprietary algorithms deployed by social media platforms and search engines. In this article, we explore the interplay between these algorithms and human agency. Specifically, we consider the extent of entanglement or coupling between humans and algorithms along a continuum from implicit to explicit demand.