Why technical solutions for detecting AI-generated content in research and education are insufficient.

Patterns (N Y)

Cyprus Center for Algorithmic Transparency (CyCAT), Open University of Cyprus, Nicosia, Cyprus.

Published: July 2023

AI Article Synopsis

  • AI-generated content detectors are not completely reliable and can create additional issues.
  • Recent studies by Desaire et al. and Liang et al. highlight the limitations of these detectors.
  • Instead of combating AI with more AI technologies, the focus should be on fostering a creative and ethical academic culture around the use of generative AI.

Article Abstract

Artificial intelligence (AI)-generated content detectors are not foolproof and often introduce other problems, as shown by Desaire et al. and Liang et al. in recently published papers. Rather than "fighting" AI with more AI, we must develop an academic culture that promotes the use of generative AI in a creative, ethical manner.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10382978
DOI: http://dx.doi.org/10.1016/j.patter.2023.100796

Publication Analysis

Top Keywords

ai-generated content (8)
technical solutions (4)
solutions detecting (4)
detecting ai-generated (4)
content education (4)
education insufficient (4)
insufficient artificial (4)
artificial intelligence (4)
intelligence ai-generated (4)
content detectors (4)

Similar Publications

Background: The proliferation of generative artificial intelligence (AI), such as ChatGPT, has added complexity and richness to the virtual environment by increasing the presence of AI-generated content (AIGC). Although social media platforms such as TikTok have begun labeling AIGC to help users distinguish it from human-generated content, little research has examined the effect of these AIGC labels.

Objective: This study investigated the impact of AIGC labels on perceived accuracy, message credibility, and sharing intention for misinformation through a web-based experimental design, aiming to refine the strategic application of AIGC labels.

Strategies for integrating ChatGPT and generative AI into clinical studies.

Blood Res

December 2024

Department of Surgery, Division of HBP Surgery, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-Gu, Seoul, 03080, Republic of Korea.

Large language models, specifically ChatGPT, are revolutionizing clinical research by improving content creation and providing specific useful features. These technologies can transform clinical research, including data collection, analysis, interpretation, and results sharing. However, integrating these technologies into the academic writing workflow poses significant challenges.

Reassessing AI in Medicine: Exploring the Capabilities of AI in Academic Abstract Synthesis.

J Med Internet Res

December 2024

Department of Radiation Oncology, Qilu Hospital (Qingdao), Cheeloo College of Medicine, Shandong University, Qingdao, China.
