Purpose: In this study, we investigated the agreement between the 175-item Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) and a 30-item computer adaptive PNT (PNT-CAT; Fergadiotis, Kellough, & Hula, 2015; Hula, Kellough, & Fergadiotis, 2015) created using item response theory (IRT) methods.

Method: The full PNT and the PNT-CAT were administered to 47 participants with aphasia in counterbalanced order. Latent trait naming ability estimates for the 2 PNT versions were analyzed in a Bayesian framework, and the agreement between them was evaluated using correlation and measures of constant, variable, and total error. We also evaluated the extent to which individual pairwise differences were credibly greater than 0 and whether the IRT measurement model provided an adequate indication of the precision of individual score estimates.

Results: Agreement between the PNT and the PNT-CAT was strong, as indicated by a high correlation (r = .95, 95% CI [.92, .97]), negligible bias, and low variable and total error. The number of statistically robust pairwise score differences did not credibly exceed the Type I error rate, and the precision of individual score estimates was reasonably well predicted by the IRT model.

Discussion: The strong agreement between the full PNT and the PNT-CAT suggests that the latter is a suitable measure of anomia in group studies. The relatively robust estimates of score precision also suggest that the PNT-CAT can be useful for the clinical assessment of anomia in individual cases. Finally, the IRT methods used to construct the PNT-CAT provide a framework for additional development to further reduce measurement error.

Supplemental Material: https://doi.org/10.23641/asha.8202176
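The constant, variable, and total error measures named in the Method can be made concrete with a short sketch. As a rough illustration (the study analyzed posterior score distributions in a Bayesian framework, so the exact computations differ), constant error is the mean signed difference between the two sets of ability estimates, variable error is the standard deviation of those differences, and total error is their root-mean-square:

```python
import numpy as np
from scipy import stats

def agreement_metrics(theta_full, theta_cat):
    """Summarize agreement between two sets of ability point estimates.

    theta_full, theta_cat: ability estimates from the full test and the
    CAT for the same participants. This is a point-estimate sketch, not
    the Bayesian analysis reported in the article.
    """
    theta_full = np.asarray(theta_full, dtype=float)
    theta_cat = np.asarray(theta_cat, dtype=float)
    diff = theta_cat - theta_full

    constant_error = diff.mean()                  # bias (mean signed difference)
    variable_error = diff.std(ddof=1)             # SD of the differences
    total_error = np.sqrt(np.mean(diff ** 2))     # root-mean-square difference
    r, _ = stats.pearsonr(theta_full, theta_cat)  # correlation between versions

    return {"r": r,
            "constant_error": constant_error,
            "variable_error": variable_error,
            "total_error": total_error}
```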

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6808378
DOI: http://dx.doi.org/10.1044/2018_JSLHR-L-18-0344

Publication Analysis

Top Keywords

pnt pnt-cat (16), computer adaptive (8), irt methods (8), full pnt (8), variable total (8), total error (8), differences credibly (8), precision individual (8), individual score (8), score estimates (8)

Similar Publications

Purpose: The purpose of this study was to evaluate whether a short-form computerized adaptive testing (CAT) version of the Philadelphia Naming Test (PNT) provides error profiles and model-based estimates of semantic and phonological processing that agree with the full test.

Method: Twenty-four persons with aphasia took the PNT-CAT and the full version of the PNT (hereinafter referred to as the "full PNT") at least 2 weeks apart. The PNT-CAT proceeded in two stages: (a) the PNT-CAT30, in which 30 items were selected to match the evolving ability estimate with the goal of producing a 50% error rate, and (b) the PNT-CAT60, in which an additional 30 items were selected to produce a 75% error rate.
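Under a 1-parameter logistic (Rasch-type) model, targeting a particular error rate amounts to selecting items whose difficulty sits at a fixed offset from the current ability estimate: a 50% error rate is obtained from items whose difficulty equals the ability estimate, and a 75% error rate from items roughly 1.1 logits harder. The sketch below illustrates that selection rule; the function name, the fixed item pool, and the absence of exposure control are assumptions for illustration, not the published algorithm:

```python
import math

def select_next_item(theta_hat, item_difficulties, administered, target_p_correct=0.5):
    """Pick the unadministered item whose difficulty best matches the target.

    Under the 1PL model, P(correct) = 1 / (1 + exp(-(theta - b))), so the
    difficulty yielding a target probability p is b = theta - log(p / (1 - p)).
    A 50% error rate corresponds to p = .5 (b = theta_hat); a 75% error rate
    corresponds to p = .25 (b about 1.1 logits above theta_hat).
    """
    b_target = theta_hat - math.log(target_p_correct / (1 - target_p_correct))
    candidates = [i for i in range(len(item_difficulties)) if i not in administered]
    return min(candidates, key=lambda i: abs(item_difficulties[i] - b_target))
```

For the PNT-CAT60 stage described above, the same rule would be called with target_p_correct=0.25 while keeping the first 30 items in the administered set.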


Purpose: The purpose of this study was to verify the equivalence of 2 alternate test forms with nonoverlapping content generated by an item response theory (IRT)-based computer-adaptive test (CAT). The Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) was utilized as an item bank in a prospective, independent sample of persons with aphasia.

Method: Two alternate CAT short forms of the PNT were administered to a sample of 25 persons with aphasia who were at least 6 months postonset and received no treatment for 2 weeks before or during the study.
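One straightforward way to obtain alternate CAT forms with nonoverlapping content is to run the adaptive selection twice, excluding the first form's items from the second run's candidate pool. The sketch below is only an illustration of that idea, not the procedure used in the study: get_response() is a hypothetical scoring callback, and the shrinking-step ability update is a crude stand-in for proper maximum-likelihood or Bayesian updating:

```python
def build_nonoverlapping_forms(bank, get_response, form_length=30):
    """Sketch: build two CAT short forms that share no items.

    bank: list of 1PL item difficulties (logits).
    get_response(i): hypothetical callback returning 1 (correct) or 0
    (incorrect) for item i.
    """
    used, forms = set(), []
    for _ in range(2):                      # form 1, then form 2
        theta_hat, items = 0.0, []
        for k in range(form_length):
            # choose the unused item closest in difficulty to theta_hat
            # (targets a 50% error rate under the 1PL model)
            item = min((i for i in range(len(bank)) if i not in used),
                       key=lambda i: abs(bank[i] - theta_hat))
            used.add(item)                  # item cannot reappear in form 2
            items.append(item)
            step = 1.0 / (k + 1)            # crude, shrinking ability update
            theta_hat += step if get_response(item) else -step
        forms.append(items)
    return forms
```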



Purpose: The purpose of this study was to develop a computerized adaptive test (CAT) version of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996), to reduce test length while maximizing measurement precision. This article is a direct extension of a companion article (Fergadiotis, Kellough, & Hula, 2015), in which we fitted the PNT to a 1-parameter logistic item-response-theory model and examined the validity and precision of the resulting item parameter and ability score estimates.

Method: Using archival data collected from participants with aphasia, we simulated two PNT-CAT versions and two previously published static PNT short forms, and compared the resulting ability score estimates to estimates obtained from the full 175-item PNT.
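The 1-parameter logistic model referenced above gives the probability of a correct response as a logistic function of the difference between a person's ability and an item's difficulty, and a simulation of the kind described in the Method rescores archival responses to the adaptively selected items and re-estimates ability as it goes. The sketch below shows the model and a bounded maximum-likelihood ability estimate; it assumes known item difficulties and uses generic scipy optimization rather than the dedicated IRT software used in the published work:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, b):
    """1PL (Rasch-type) probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def estimate_ability(difficulties, responses):
    """Maximum-likelihood ability estimate given known item difficulties.

    difficulties: item difficulties (logits) for the items administered.
    responses: 0/1 scores for those items.
    """
    b = np.asarray(difficulties, dtype=float)
    x = np.asarray(responses, dtype=float)

    def neg_loglik(theta):
        p = p_correct(theta, b)
        return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

    # Bound the search so the estimate stays finite when all responses
    # are correct or all are incorrect.
    result = minimize_scalar(neg_loglik, bounds=(-5.0, 5.0), method="bounded")
    return result.x
```

In a real-data simulation, each participant's final CAT ability estimate from a routine like this would then be compared with the estimate obtained from their full 175-item record, for example with the agreement metrics sketched earlier.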

