Evidence to support test interpretation: Evaluating the content validity of a new item set for the Transition Pragmatics Interview.

AI Article Synopsis

  • This study focuses on establishing evidence-based practices for selecting criterion-referenced assessments by evaluating content validity of a new social communication assessment tool.
  • Two studies were conducted: the first involved experts rating the relevance of 25 new assessment items, and the second assessed how participants engaged with these items.
  • Results showed that 23 out of 25 items were valid according to experts and prompted appropriate thinking processes in examinees, suggesting that expert evaluations combined with participant feedback enhance the understanding of content validity in assessments.

Article Abstract

Purpose: There is little consensus on evidence-based practice guidelines for the selection of criterion-referenced assessments. Having confidence in scores from criterion-referenced assessments requires evidence that items align with their intended constructs. The purposes of these studies were to demonstrate evidence of content validity for the revised item set of a developing social communication assessment and to provide clinicians with a model of content validity evaluations that can be generalised to the review of other assessments.

Method: In Study 1, 10 experts rated 25 newly-developed items for how well they represented the intended construct. In Study 2, seven participants aged 14-20 were administered the Three Step Test Interview to assess their cognitive processes while responding to the new items. Examinee responses were coded for construct-relevant and construct-irrelevant factors.

Result: Twenty-three of the 25 newly-developed items were deemed representative of the intended construct by experts and elicited construct-relevant response processes from examinees.

Conclusion: The integration of expert review and examinee cognitive interviewing provides a more complete evaluation of the alignment of the items to their intended construct. Transparent reports of the methods and findings of content validity studies strengthen the ability of clinicians to select criterion-referenced assessments that support valid decisions.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11316841
DOI: http://dx.doi.org/10.1080/17549507.2023.2287424

Publication Analysis

Top Keywords

content validity (16)
criterion-referenced assessments (12)
intended construct (12)
item set (8)
newly-developed items (8)
items (5)
evidence support (4)
support test (4)
test interpretation (4)
interpretation evaluating (4)
