Aims: In neuropsychological evaluations, it is often difficult to ascertain whether poor performance on performance validity measures reflects poor effort or malingering, or genuine cognitive impairment. Dunham and Denney created an algorithm to address this question using the Medical Symptom Validity Test (MSVT). We assessed the ability of their algorithm to distinguish poor validity from probable impairment, and the concordance of MSVT failure with other freestanding performance validity tests.
Methods: Two previously published datasets (n = 153 and n = 641) from outpatient neuropsychological evaluations were used to test Dunham and Denney's algorithm and to assess concordance of failure rates with the Test of Memory Malingering and the forced-choice measure of the California Verbal Learning Test, two commonly used performance validity tests.
Results: In both datasets, none of the four cutoff scores for failure on the MSVT (70%, 75%, 80%, or 85%) identified a poor validity group with proportionally aligned failure rates on other freestanding measures of performance validity. Additionally, the protocols with probable impairment did not differ from those with poor validity on cognitive measures.
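For readers interested in the form of this concordance analysis, the sketch below illustrates one way to compute it. It is a minimal, hypothetical example, not the authors' actual procedure: the data are simulated, the variable names (msvt_pct, tomm_fail, cvlt_fc_fail) are invented for illustration, and the classification into poor validity versus probable impairment under Dunham and Denney's algorithm is not reproduced here.

```python
import numpy as np

# Hypothetical data for illustration only: percent correct on the easy
# MSVT subtests, plus pass/fail flags on two freestanding performance
# validity tests (TOMM and the CVLT forced-choice trial).
rng = np.random.default_rng(0)
n = 200
msvt_pct = rng.uniform(50, 100, n)    # percent correct on easy MSVT subtests
tomm_fail = rng.random(n) < 0.15      # simulated failure on the TOMM
cvlt_fc_fail = rng.random(n) < 0.12   # simulated failure on CVLT forced choice

# The four cutoff scores examined in the abstract.
for cutoff in (70, 75, 80, 85):
    flagged = msvt_pct < cutoff       # protocols failing the MSVT at this cutoff
    if flagged.sum() == 0:
        continue
    # Concordance: of protocols failing the MSVT at this cutoff, what
    # proportion also fail each freestanding validity test?
    tomm_rate = tomm_fail[flagged].mean()
    cvlt_rate = cvlt_fc_fail[flagged].mean()
    print(f"cutoff {cutoff}%: n flagged = {flagged.sum()}, "
          f"TOMM failure rate = {tomm_rate:.2f}, "
          f"CVLT-FC failure rate = {cvlt_rate:.2f}")
```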
Conclusions: Although Dunham and Denney's algorithm appeared to be a promising approach to evaluating failure on the easy MSVT subtests when clinical data are unavailable, the current findings indicate that the MSVT's advanced interpretation (AI) program remains the gold standard for doing so. Future research should build on this effort to address shortcomings in measures of effort in neuropsychological evaluations.
DOI: http://dx.doi.org/10.1080/00207454.2018.1526800