Generative large language models (LLMs) such as ChatGPT can quickly produce informative essays on a wide range of topics. However, the generated information cannot be fully trusted, because artificial intelligence (AI) can make factual mistakes. This poses challenges for using such tools in college classrooms. To address this, an adaptable assignment called the ChatGPT Fact-Check was developed to teach students in college science courses the benefits of using LLMs for topic exploration while emphasizing the importance of validating their claims with evidence. The assignment requires students to use ChatGPT to generate essays, evaluate AI-generated sources, and assess the validity of AI-generated scientific claims against experimental evidence in primary sources. It reinforces responsible use of AI for exploration alongside evidence-based skepticism, and it meets learning objectives around efficiently leveraging the beneficial features of AI, distinguishing types of evidence, and evaluating claims based on evidence. Its adaptable nature allows integration across diverse courses, teaching students to use AI responsibly for learning while maintaining a critical stance.

Source: http://dx.doi.org/10.1152/advan.00142.2024
