
Simulated Patient Role-Plays with Consumers with Lived Experience of Mental Illness Post-Mental Health First Aid Training: Interrater and Test Re-Test Reliability of an Observed Behavioral Assessment Rubric.

Mental Health First Aid (MHFA) training teaches participants how to assist people experiencing mental health problems and crises. Observed behavioral assessments post-training are lacking, and the literature largely focuses on self-reported measurement of behaviors and confidence. This study explores the reliability of an observed behavioral assessment rubric used to assess pharmacy students during simulated patient (SP) role-play assessments with mental health consumers. Post-MHFA training, pharmacy students (n = 528) participated in SP role-play assessments (n = 96) of six mental health cases enacted by consumers with lived experience of mental illness. Each assessment was marked by the tutor, participating student, and consumer (three raters). Non-parametric tests were used to compare raters' mean scores and pass/fail categories. Interrater reliability analyses were conducted for overall scores, as well as pass/fail categories, using the intra-class correlation coefficient (ICC) and Fleiss' Kappa, respectively. Test re-test reliability analyses were conducted using Pearson's correlation. For interrater reliability analyses, the intra-class correlation coefficient varied from poor-to-good to moderate-to-excellent for individual cases but was moderate-to-excellent for combined cases (0.70; CI 0.58-0.80). Fleiss' Kappa varied across cases but was fair-to-good for combined cases (0.57, p < 0.001). For test re-test reliability analyses, Pearson's correlation was strong for individual and combined cases (0.87; p < 0.001). Recommended modifications to the rubric, including the addition of barrier items, scoring guides, and specific examples, as well as the creation of new case-specific rubric versions, may improve reliability. The rubric can be used to facilitate the measurement of actual, observed behaviors post-MHFA training in pharmacy and other health care curricula.
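The abstract names three reliability statistics: the intra-class correlation coefficient for overall scores, Fleiss' Kappa for pass/fail categories, and Pearson's correlation for test re-test scores. The sketch below illustrates how these could be computed in Python; it is not the authors' analysis code. The simulated rating data, the pass mark of 65, and the package choices (pandas, pingouin, statsmodels, scipy) are assumptions made for demonstration only.

# Illustrative sketch only -- not the authors' analysis code. Rating data,
# the pass mark of 65, and the package choices are assumptions.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(42)
n_assessments = 96                          # role-play assessments, per the abstract
raters = ["tutor", "student", "consumer"]   # the three raters per assessment

# Hypothetical overall rubric scores (0-100) in long format.
scores = pd.DataFrame({
    "assessment": np.repeat(np.arange(n_assessments), len(raters)),
    "rater": np.tile(raters, n_assessments),
    "score": rng.normal(70, 10, n_assessments * len(raters)),
})

# Interrater reliability of overall scores: intra-class correlation (ICC).
icc = pg.intraclass_corr(data=scores, targets="assessment",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

# Interrater agreement on pass/fail categories: Fleiss' Kappa.
wide = scores.pivot(index="assessment", columns="rater", values="score")
passfail = (wide >= 65).astype(int).to_numpy()      # 1 = pass, 0 = fail
counts, _ = aggregate_raters(passfail)              # raters per category
print("Fleiss' kappa:", fleiss_kappa(counts, method="fleiss"))

# Test re-test reliability: Pearson's correlation between two occasions.
test = rng.normal(70, 10, n_assessments)
retest = test + rng.normal(0, 5, n_assessments)
r, p = pearsonr(test, retest)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")

Note that Fleiss' Kappa operates on categorical ratings, which is why the continuous scores are thresholded into pass/fail before aggregation; the study similarly reports Kappa for pass/fail categories and ICC for overall scores, both per case and for combined cases.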


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7838905
DOI: http://dx.doi.org/10.3390/pharmacy9010028

Publication Analysis

Top Keywords

mental health (16)
reliability analyses (16)
test re-test (12)
re-test reliability (12)
observed behavioral (12)
combined cases (12)
simulated patient (8)
consumers lived (8)
lived experience (8)
experience mental (8)

