Objective: This study evaluated the effectiveness of an artificial intelligence-based tutor (AIT; ChatGPT-4) versus a human tutor (HT) in providing feedback on dental students' assignments.
Methods: A total of 194 answers to two histology questions were assessed by both tutors using the same rubric. Students compared feedback from both tutors and evaluated its accuracy against a standard rubric. Students' perceptions were collected on five dimensions of feedback quality. A subject expert also evaluated feedback provided by the two tutors for 40 randomly selected answers.
Results: No significant difference in total scores between HT and AIT was found for Question 1, but significant differences were noted for Question 2 and for overall scores. Students' perceptions showed no differences regarding understanding mistakes, promoting critical thinking, feedback comprehension, or relevance. However, students felt more comfortable with HT feedback (χ² = 9.01, P < .05). In contrast, expert evaluation indicated that AIT scored higher in identifying mistakes, with significant differences in clarity (W = 40.5, P < .001) and suggestions for improvement (W = 96.5, P < .001).
Conclusion: AIT demonstrates significant potential to complement HT by providing detailed feedback in a shorter timeframe. While students did not perceive differences in feedback quality, expert analysis identified AIT as superior in clarity and suggestions for improvement.
DOI: http://dx.doi.org/10.1016/j.identj.2024.12.022