Evaluators of examinees in forensic contexts must consider the potential for falsified or exaggerated psychiatric symptoms and/or cognitive deficits. A number of validated assessment tools assist evaluators in identifying those examinees who feign impairment; however, no comprehensive method has been established for consolidating data from multiple tests, interviews, behavioral observations, and collateral sources. The current pilot study preliminarily examined the interrater reliability and validity of a new forensic assessment tool, the Feigning Evaluation INtegrating Sources (FEINS), developed to guide evaluators in the comprehensive assessment of feigning by adding structure to the collection of relevant data. Fifty-eight male pretrial defendants undergoing restoration of competency to stand trial at a state forensic psychiatric center participated in the study. Results provided preliminary support for reliability in scoring the FEINS, construct validity, and predictive validity. FEINS items that assessed clinical presentation, and those that guided the use of test data, were more useful than items capturing historical/demographic data. In hierarchical multiple regressions, structured professional judgments developed using the FEINS predicted competency evaluators' perceptions of feigning more accurately than either unstructured clinical judgment (i.e., the referring psychologist's perception of feigning) alone or test data alone. Findings suggest that the FEINS may have practical utility in guiding clinical opinions regarding feigning across psychiatric, cognitive, and psycholegal/functional domains. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
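The incremental-validity comparison described above can be sketched as a hierarchical regression: fit the outcome on unstructured judgment alone, then add the structured score and inspect the change in R². This is an illustrative sketch with synthetic data; the variable names (`judgment`, `feins`, `outcome`) and effect sizes are hypothetical, not taken from the study.

```python
import numpy as np

def r_squared(X, y):
    """Ordinary least squares R^2 with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 58  # sample size matching the pilot study
judgment = rng.integers(0, 2, n).astype(float)   # unstructured clinical judgment (0/1), simulated
feins = rng.normal(10.0, 3.0, n)                 # hypothetical structured FEINS total score
outcome = 0.3 * judgment + 0.5 * feins + rng.normal(0.0, 1.0, n)  # evaluator rating, simulated

# Step 1: unstructured judgment only; Step 2: judgment plus structured score
r2_step1 = r_squared(judgment[:, None], outcome)
r2_step2 = r_squared(np.column_stack([judgment, feins]), outcome)
print(f"R2 step 1 = {r2_step1:.3f}, step 2 = {r2_step2:.3f}, delta = {r2_step2 - r2_step1:.3f}")
```

A positive ΔR² at step 2 is what "more accurate than unstructured judgment alone" amounts to statistically; for nested OLS models fit to the same data, ΔR² is never negative, so significance testing of the increment is what carries the claim.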
DOI: http://dx.doi.org/10.1037/ser0000513
Background: Alzheimer's disease (AD) presents unique challenges in clinical trials involving small molecules. Multifaceted issues plague such trials, emphasizing susceptibility to fraud from clinical sites and "professional patients". The relative ease of simulating Alzheimer's diagnosis, coupled with inadequate oversight by Contract Research Organizations (CROs), creates fertile ground for deceptive practices.
Clin Neuropsychol, December 2024
Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA.
Advanced algorithmic methods may improve the assessment of performance validity during neuropsychological testing. This study investigated whether unsupervised machine learning (ML) could serve as one such method. Participants were 359 adult outpatients who underwent a neuropsychological evaluation for various referral reasons.
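One common unsupervised approach to the problem above is clustering examinees on validity-relevant indicators and interpreting the resulting groups. The sketch below uses k-means on synthetic data; the two features (a PVT accuracy percentage and a consistency index), the group sizes, and the separation are all assumptions for illustration, not the study's actual method or data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Simulate two latent groups (sizes chosen so n = 359, matching the study's sample):
# credible performers vs. lower-validity performers on two hypothetical indicators.
credible = rng.normal([95.0, 0.9], [5.0, 0.05], size=(300, 2))
lower_validity = rng.normal([70.0, 0.6], [10.0, 0.1], size=(59, 2))
scores = np.vstack([credible, lower_validity])  # columns: PVT accuracy %, consistency index

# Standardize features, then cluster without using any group labels
X = StandardScaler().fit_transform(scores)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Interpret clusters post hoc by comparing their mean indicator values
for k in (0, 1):
    members = scores[labels == k]
    print(f"cluster {k}: n={len(members)}, mean accuracy={members[:, 0].mean():.1f}")
```

Because the algorithm never sees validity labels, cluster membership has to be interpreted after the fact (e.g., the cluster with lower mean PVT accuracy is read as the lower-validity group), which is the step where clinical judgment re-enters.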
Clin Neuropsychol, November 2024
Regional Assessment & Resource Centre, Queen's University, Kingston, Ontario, Canada.
Intractable Rare Dis Res, August 2024
Department of Gastroenterological Surgery, Nippon Medical School, Tokyo, Japan.
Extrahepatic portal vein obstruction (EHPVO) is a rare disease, with myeloproliferative neoplasm (MPN) as its most common cause. We report that hypersplenic hematologic changes in EHPVO might be eliminated by MPN. Through our experience with splenectomy for variceal control in EHPVO, we suspected that the spleen might mask MPN-induced thrombocytosis, and that MPN might have a significant influence on excessive thrombocytosis after splenectomy.
Clin Neuropsychol, July 2024
Department of Neurology, The University of Texas Health Science Center at San Antonio, TX, USA.
Objective: We examined the performance validity test (PVT) security risk presented by artificial intelligence (AI) chatbots by asking questions about neuropsychological evaluation and PVTs on two popular generative AI sites.

Method: In 2023 and 2024, multiple questions were posed to ChatGPT-3 and Bard (now Gemini). One set of questions started generally, with follow-up questions refined based on the AI responses.