Our study compared the impact of administering Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in person versus remotely and assessed different approaches to combining validity test results. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT, we assessed 164 adults, half instructed to feign mild traumatic brain injury (mTBI) and half instructed to respond honestly. Within each subgroup, half completed the tests in person and the other half completed them online via videoconferencing. Results from 2 × 2 analyses of variance showed no significant effects of administration format on SVT and PVT scores. When comparing feigners to controls, the MMPI-2-RF Response Bias Scale (RBS) exhibited the largest effect size (d = 3.05) among all examined measures. Accordingly, we conducted a series of two-step hierarchical logistic regression models, entering the MMPI-2-RF RBS first and each other SVT or PVT second. The IOP-29 and IOP-M were the only measures that yielded incremental validity beyond the MMPI-2-RF RBS in predicting group membership. Taken together, these findings suggest that administering these SVTs and PVTs in person or remotely yields similar results, and that combining MMPI-2-RF and IOP indexes may be particularly effective in identifying feigned mTBI.
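To illustrate the analytic approach described above, the minimal sketch below shows one way a two-step hierarchical logistic regression of this kind could be run in Python: the MMPI-2-RF RBS is entered alone at Step 1, a second validity index is added at Step 2, and a likelihood-ratio test evaluates its incremental contribution to predicting group membership (feigner vs. honest control). The file name, column names, and the choice of statsmodels are illustrative assumptions, not details taken from the study.

# Hypothetical sketch of the two-step hierarchical logistic regression:
# Step 1 enters the MMPI-2-RF RBS alone, Step 2 adds another validity index
# (here an IOP-29 score), and a likelihood-ratio test checks whether the
# added index improves prediction of group membership.
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("validity_scores.csv")  # hypothetical data file
y = df["feigner"]                        # 1 = instructed feigner, 0 = honest control

# Step 1: MMPI-2-RF RBS only
X1 = sm.add_constant(df[["rbs"]])
step1 = sm.Logit(y, X1).fit(disp=False)

# Step 2: RBS plus the additional validity index (hypothetical column name)
X2 = sm.add_constant(df[["rbs", "iop29_score"]])
step2 = sm.Logit(y, X2).fit(disp=False)

# Likelihood-ratio chi-square for the incremental contribution of the added index
lr = 2 * (step2.llf - step1.llf)
p_value = stats.chi2.sf(lr, df=1)
print(f"LR chi2(1) = {lr:.2f}, p = {p_value:.4f}")

In this framework, an added measure shows incremental validity when the Step 2 model fits significantly better than the Step 1 (RBS-only) model.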


Source: http://dx.doi.org/10.1177/10731911241235465

Publication Analysis

Top Keywords

iop-29 iop-m (12)
mmpi-2-rf rbs (12)
mmpi-2-rf iop-29 (8)
iop-m fit (8)
feigned mtbi (8)
validity tests (8)
pvts in-person (8)
half completed (8)
svt pvt (8)
mmpi-2-rf (5)

Similar Publications


The present study aims to evaluate the classification accuracy and resistance to coaching of the Inventory of Problems-29 (IOP-29) and the IOP-Memory (IOP-M) with a Spanish sample of patients diagnosed with mild traumatic brain injury (mTBI) and healthy participants instructed to feign. Using a simulation design, 37 outpatients with mTBI (clinical control group) and 213 non-clinical instructed feigners under several coaching conditions completed the Spanish versions of the IOP-29, IOP-M, Structured Inventory of Malingered Symptomatology, and Rivermead Post Concussion Symptoms Questionnaire. The IOP-29 discriminated well between clinical patients and instructed feigners, with an excellent classification accuracy for the recommended cutoff score (FDS ≥ .


This study was designed to compare the validity of the Inventory of Problems-29 (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived based on established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared to that of the Negative Impression Management scale of the Personality Assessment Inventory (NIM), and the criterion-related validity of the IOP-M was compared to that of Trial 1 of the Test of Memory Malingering (TOMM-1).


Assessing the credibility of symptoms is critical to neuropsychological assessment in both clinical and forensic settings. To this end, the Inventory of Problems-29 (IOP-29) and its recently added memory module (Inventory of Problems-Memory; IOP-M) appear to be particularly useful, as they provide a rapid and cost-effective measure of both symptom and performance validity. While numerous studies have already supported the effectiveness of the IOP-29, research on its newly developed module, the IOP-M, is much sparser.


We investigated the classification accuracy of the Inventory of Problems-29 (IOP-29), its newly developed memory module (IOP-M), and the Fifteen Item Test (FIT) in an Australian community sample (N = 275). One third of the participants (n = 93) were asked to respond honestly, and two thirds were instructed to feign mild TBI. Half of the feigners (n = 90) were coached to avoid detection by not exaggerating; half were not (n = 92).

