Objective: To assess the interrater reliability (IRR) and usability of the Patient Education Materials Assessment Tool (PEMAT) and the relationship between PEMAT scores and readability levels.
Methods: One hundred ten materials (80 print, 30 audiovisual) were evaluated, each by two raters, using the PEMAT. IRR was calculated using Gwet's AC1 and summarized across items in each PEMAT domain (understandability and actionability) and by material type. A survey was conducted to solicit raters' experience using the PEMAT. Readability of each material was assessed using the SMOG Index.
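The IRR statistic named above, Gwet's AC1, corrects observed agreement for chance using category prevalence rather than rater marginals. As a minimal sketch (not the authors' analysis code), for two raters scoring binary agree/disagree items it can be computed as:

```python
def gwet_ac1(rater1, rater2):
    """Gwet's AC1 for two raters on binary (0/1) items.

    pa: observed proportion of agreement.
    pe: chance agreement, 2*pi*(1-pi), where pi is the mean
        prevalence of category 1 across both raters.
    """
    n = len(rater1)
    pa = sum(a == b for a, b in zip(rater1, rater2)) / n
    pi = (sum(rater1) + sum(rater2)) / (2 * n)
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)
```

Unlike Cohen's kappa, AC1 stays stable when one category dominates (high prevalence), which is common with checklist items such as the PEMAT's.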
Results: The median IRR was 0.92 for understandability and 0.93 for actionability across all relevant items, indicating good IRR, although eight PEMAT items had Gwet's AC1 values below 0.81. PEMAT and SMOG Index scores were weakly inversely correlated, though not significantly so, with a Spearman's rho of -0.20 (p=0.081) for understandability and -0.15 (p=0.194) for actionability. While 92% of raters agreed the PEMAT was easy to use, survey responses identified specific items needing clarification.
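The SMOG Index correlated with PEMAT scores above is a reading-grade estimate computed from the count of polysyllabic (3+ syllable) words in a sample of sentences. A sketch of McLaughlin's published formula (the counting of syllables and sentences is assumed to happen upstream):

```python
import math

def smog_grade(polysyllable_count, sentence_count):
    """SMOG reading grade (McLaughlin, 1969).

    polysyllable_count: number of words with 3+ syllables
    sentence_count: number of sentences sampled (conventionally 30)
    """
    return 1.0430 * math.sqrt(polysyllable_count * (30 / sentence_count)) + 3.1291
```

For example, 30 polysyllabic words across 30 sentences yields a grade near 8.8, i.e., roughly ninth-grade material.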
Conclusion: While the PEMAT demonstrates moderate to excellent IRR overall, amendments to items with lower IRR may increase the usefulness of the tool.
Practice Implications: The PEMAT is a useful supplement to reading level alone in the assessment of educational materials.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5839932
DOI: http://dx.doi.org/10.1016/j.pec.2017.09.003
Background: Primary care physicians (PCPs) and nurse practitioners play a key role in guiding caregivers on early peanut protein (PP) introduction, yet many lack adequate knowledge.
Aim Statement: This quality improvement study aimed to enhance understanding among PCPs and caregivers about evidence-based guidelines for early PP introduction in infants' diets.
Methods: Using the Stetler Model, PCP knowledge was evaluated through a pre-test, an educational video, and a post-test.
Int J Obstet Anesth
December 2024
Department of Anesthesiology, 8700 Beverly Blvd #4209, Cedars-Sinai Medical Center, Los Angeles, CA 90064, United States. Electronic address:
Introduction: Over 90% of pregnant women and 76% of expectant fathers search for pregnancy health information. We examined the readability, accuracy, and quality of answers to common obstetric anesthesia questions from the popular generative artificial intelligence (AI) chatbots ChatGPT and Bard.
Methods: Twenty questions for generative AI chatbots were derived from frequently asked questions based on professional society, hospital and consumer websites.
Int J Med Inform
December 2024
Obstetrics and Gynecology Unit, Department of Woman, Child and General and Specialized Surgery, University of Campania "Luigi Vanvitelli", Naples, Italy. Electronic address:
Background: The usefulness of hysteroscopic metroplasty for improving reproductive outcomes is controversial and debated among reproductive specialists and, consequently, among patients.
Methods: We performed a cross-sectional analysis to assess the quality, reliability, and level of misinformation in YouTube, Instagram, and TikTok videos about hysteroscopic metroplasty. Videos on each social network, retrieved using "hysteroscopy" and "septate uterus" or "uterine septum" as keywords, were assessed using the Patient Education Materials Assessment Tool for audiovisual content (PEMAT A/V), the modified DISCERN (mDISCERN), the Global Quality Scale (GQS), the Video Information and Quality Index (VIQI), and a misinformation assessment.
Colorectal Dis
January 2025
Concord Institute of Academic Surgery (CIAS), Concord Repatriation General Hospital, Concord, New South Wales, Australia.
Aim: Artificial intelligence (AI) chatbots such as Chat Generative Pretrained Transformer-4 (ChatGPT-4) have made significant strides in generating human-like responses. Trained on an extensive corpus of medical literature, ChatGPT-4 has the potential to augment patient education materials. These chatbots may benefit patients facing a diagnosis of colorectal cancer (CRC).
Eye (Lond)
December 2024
Department of Ophthalmology, Harvey and Bernice Jones Eye Institute, University of Arkansas for Medical Sciences, Little Rock, AR, USA.
Background/objectives: Dry eye disease (DED) is an exceedingly common diagnosis, yet recent analyses have found patient education materials (PEMs) on DED to be of low quality and readability. Our study evaluated the utility and performance of three large language models (LLMs) in enhancing and generating new PEMs on DED.
Subjects/methods: We evaluated PEMs generated by ChatGPT-3.