Detection of deception and confirmation of truth telling with conventional polygraphy raised a host of technical and ethical issues. Recently, newer methods of recording electromagnetic signals from the brain have shown promise in permitting the detection of deception or truth telling. Some are even being promoted as more accurate than conventional polygraphy. While the new technologies raise issues of personal privacy, acceptable forensic application, and other social concerns, the focus of this paper is the technical limitations of the developing technology. Those limitations include the measurement validity of the new technologies, which remains largely unknown. Another set of questions pertains to the psychological paradigms used to model or constrain the target behavior. Finally, there is little standardization in the field, and the vulnerability of the techniques to countermeasures is unknown. Premature application of these technologies outside of research settings should be resisted, and the social conversation about the appropriate parameters of their civil, forensic, and security use should begin.


Source: http://dx.doi.org/10.1080/15265161.2010.519238


Similar Publications

Background: Correct information is an essential tool for guiding thoughts, attitudes, and daily choices, as well as more important decisions such as those regarding health. Today, a huge number of information sources and media are available. The growing ease of obtaining data also demands comprehension and discernment skills, particularly the ability to navigate the ocean of information and to choose what is best without becoming overwhelmed.


This study explores the evolving role of social media in the spread of misinformation during the Ukraine-Russia conflict, with a focus on how artificial intelligence (AI) contributes to the creation of deceptive war imagery. Specifically, the research examines the relationship between color patterns (LUTs) in war-related visuals and their perceived authenticity, highlighting the economic, political, and social ramifications of such manipulative practices. AI technologies have significantly advanced the production of highly convincing, yet artificial, war imagery, blurring the line between fact and fiction.


Online reviews significantly influence consumer purchasing decisions and serve as a vital reference for product improvement. With the surge of generative artificial intelligence (AI) technologies such as ChatGPT, some merchants may exploit them to fabricate deceptive positive reviews, and competitors may likewise fabricate negative reviews to sway the opinions of consumers and designers. Attention must therefore be paid to the trustworthiness of online reviews.


Perspective taking is a critical repertoire for navigating social relationships and consists of a variety of complex verbal skills, including socially adaptive forms of deception. Detecting and being able to use socially adaptive deception likely has many practical uses, including defending oneself against bullying, telling white lies to avoid hurting others' feelings, keeping secrets and bluffing during games, and playing friendly tricks on others. Previous research has documented that some Autistic children have challenges identifying deception and playfully deceiving others (Reinecke et al.).


Unraveling the Use of Disinformation Hashtags by Social Bots During the COVID-19 Pandemic: Social Networks Analysis.

JMIR Infodemiology

January 2025

Computational Social Science DataLab, University Institute of Research for Sustainable Social Development (INDESS), University of Cadiz, Jerez de la Frontera, Spain.

Background: During the COVID-19 pandemic, social media platforms have been a venue for the exchange of messages, including those related to fake news. There are also accounts programmed to disseminate and amplify specific messages, which can affect individual decision-making and present new challenges for public health.

Objective: This study aimed to analyze how social bots use hashtags compared to human users on topics related to misinformation during the outbreak of the COVID-19 pandemic.

