As the influence of transformer-based approaches in general and generative artificial intelligence (AI) in particular continues to expand across various domains, concerns regarding authenticity and explainability are on the rise. Here, we share our perspective on the necessity of implementing effective detection, verification, and explainability mechanisms to counteract the potential harms arising from the proliferation of AI-generated inauthentic content and science. We recognize the transformative potential of generative AI, exemplified by ChatGPT, in the scientific landscape. However, we also emphasize the urgency of addressing associated challenges, particularly in light of the risks posed by disinformation, misinformation, and unreproducible science. This perspective serves as a response to the call for concerted efforts to safeguard the authenticity of information in the age of AI. By prioritizing detection, fact-checking, and explainability policies, we aim to foster a climate of trust, uphold ethical standards, and harness the full potential of AI for the betterment of science and society.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10838945
DOI: http://dx.doi.org/10.1016/j.isci.2024.108782

Publication Analysis

Top Keywords

detection fact-checking (8)
safeguarding authenticity (4)
authenticity mitigating (4)
mitigating harms (4)
harms generative (4)
generative issues (4)
issues agenda (4)
agenda policies (4)
policies detection (4)
fact-checking ethical (4)

Similar Publications

VERA-ARAB: unveiling the Arabic tweets credibility by constructing balanced news dataset for veracity analysis.

PeerJ Comput Sci

October 2024

Chair of Cyber Security, Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia.

The proliferation of fake news on social media platforms necessitates the development of reliable datasets for effective fake news detection and veracity analysis. In this article, we introduce a veracity dataset of Arabic tweets called "VERA-ARAB", a pioneering large-scale dataset designed to enhance fake news detection in Arabic tweets. VERA-ARAB is a balanced, multi-domain, and multi-dialectal dataset, containing both fake and true news, meticulously verified by fact-checking experts from Misbar.
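Datasets such as VERA-ARAB are typically used to train and benchmark supervised fake-news classifiers. As a minimal sketch of that workflow, the toy example below trains a unigram Naive Bayes baseline on a few invented English stand-in examples (the real dataset is large-scale, Arabic, and expert-verified; the training strings and labels here are purely illustrative):

```python
from collections import Counter
import math

# Hypothetical toy examples standing in for labeled tweets; the real
# VERA-ARAB dataset is far larger, Arabic, and multi-domain.
train = [
    ("miracle cure discovered doctors hate this trick", "fake"),
    ("shocking secret the government hides from you", "fake"),
    ("ministry of health publishes annual vaccination report", "true"),
    ("university study finds modest effect in clinical trial", "true"),
]

# Per-class unigram counts for a Naive Bayes baseline.
counts = {"fake": Counter(), "true": Counter()}
totals = {"fake": 0, "true": 0}
for text, label in train:
    for w in text.split():
        counts[label][w] += 1
        totals[label] += 1

vocab = {w for text, _ in train for w in text.split()}

def classify(text):
    """Pick the class with the higher log-likelihood
    (uniform prior, add-one smoothing)."""
    scores = {}
    for label in counts:
        score = 0.0
        for w in text.split():
            score += math.log(
                (counts[label][w] + 1) / (totals[label] + len(vocab))
            )
        scores[label] = score
    return max(scores, key=scores.get)
```

Real systems replace the bag-of-words features with transformer encoders and evaluate against the fact-checker labels, but the train/classify loop is the same shape.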


Disinformation as an obstructionist strategy in climate change mitigation: a review of the scientific literature for a systemic understanding of the phenomenon.

Open Res Eur

September 2024

Grupo Ciberimaginario, XR COM LAB, Faculty of Communication Sciences, Universidad Rey Juan Carlos (ROR 01v5cv687), Madrid, Community of Madrid, 28943, Spain.

Background: This study examines the scientific misinformation about climate change, in particular obstructionist strategies. The study aims to understand their impact on public perception and climate policy and emphasises the need for a systemic understanding that includes the financial, economic and political roots.

Methods: A systematic literature review (SLR) was conducted using the PRISMA 2020 model.


Leveraging Chatbots to Combat Health Misinformation for Older Adults: Participatory Design Study.

JMIR Form Res

October 2024

Department of Media and Information, Michigan State University, East Lansing, MI, United States.

Background: Older adults, a population particularly susceptible to misinformation, may experience attempts at health-related scams or defrauding, and they may unknowingly spread misinformation. Previous research has investigated managing misinformation through media literacy education or supporting users by fact-checking information and cautioning for potential misinformation content, yet studies focusing on older adults are limited. Chatbots have the potential to educate and support older adults in misinformation management.

Article Synopsis
  • Children become better at fact-checking claims when they've previously encountered false information.
  • In experiments, kids aged 4-7 showed increased evidence sampling and verification efforts after being exposed to inaccuracies.
  • Paradoxically, when children only hear true statements, they may become less critical, indicating that exposure to some misinformation could actually help them develop skepticism and caution in the future.

Background: Misleading health information is detrimental to public health. Even physicians can be misled by biased health information; however, medical students and physicians are not taught some of the most effective techniques for identifying bias and misinformation online.

Methods: Using the stages of Kolb's experiential learning cycle as a framework, we aimed to teach 117 third-year students at a United States medical school to apply a fact-checking technique for identifying bias and misinformation called "lateral reading" through a 50-minute learning cycle in a 90-minute class.

