The Loyola Language Study.

J Clin Psychol

Published: July 1957

Source: http://dx.doi.org/10.1002/1097-4679(195707)13:3<258::aid-jclp2270130307>3.0.co;2-1

Publication Analysis

Top Keywords

loyola language: 4
language study: 4
loyola: 1
study: 1

Similar Publications

Background: Stigma is a pervasive and distressing problem experienced frequently by lung cancer patients, and there is a lack of psychosocial interventions that target the reduction of lung cancer stigma. Mindful self-compassion (MSC) is an empirically supported intervention demonstrated to increase self-compassion and reduce feelings of shame and distress in non-cancer populations. However, there are several anticipated challenges for delivering MSC to lung cancer patients, and modifications may be needed to improve acceptability, appropriateness, and feasibility.

Previous research has shown that students employ intuitive thinking when understanding scientific concepts. Three types of intuitive thinking (essentialist, teleological, and anthropic thinking) are used in biology learning and can lead to misconceptions. However, it is unknown how commonly these types of intuitive thinking, or cognitive construals, are used spontaneously in students' explanations across biological concepts and whether this usage is related to endorsement of construal-consistent misconceptions.

Objective: To evaluate large language models (LLMs) for pre-test diagnostic probability estimation and compare their uncertainty estimation performance with a traditional machine learning classifier.

Materials and Methods: We assessed 2 instruction-tuned LLMs, Mistral-7B-Instruct and Llama3-70B-chat-hf, on predicting binary outcomes for sepsis, arrhythmia, and congestive heart failure (CHF) using electronic health record (EHR) data from 660 patients. Three uncertainty estimation methods (Verbalized Confidence, Token Logits, and LLM Embedding+XGB) were compared against an eXtreme Gradient Boosting (XGB) classifier trained on raw EHR data.
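The "LLM Embedding+XGB" comparison described above lends itself to a short illustration. The sketch below is not the study's code: the data are synthetic, the feature dimensions are assumed, and the random matrix standing in for LLM embeddings would in practice come from an encoder pass over each patient's record (the study used Mistral-7B-Instruct and Llama3-70B-chat-hf, not reproduced here). It only shows the shape of the comparison: fit the same XGBoost classifier once on raw EHR features and once on embedding features, then compare discrimination.

```python
# Hedged sketch, not the study's implementation. All data are synthetic;
# X_embed is a placeholder for embeddings produced by an instruction-tuned LLM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_patients = 660               # cohort size reported in the abstract
raw_dim, embed_dim = 40, 256   # assumed feature sizes, not from the paper

# Synthetic stand-ins: raw structured EHR features and LLM embeddings
# for the same patients.
X_raw = rng.normal(size=(n_patients, raw_dim))
X_embed = rng.normal(size=(n_patients, embed_dim))
# Synthetic binary outcome (e.g., sepsis vs. no sepsis).
y = (X_raw[:, 0] + 0.5 * rng.normal(size=n_patients) > 0).astype(int)

def evaluate(X, y, label):
    """Fit XGBoost on one feature representation and report AUROC."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y
    )
    clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]   # predicted pre-test probability
    print(f"{label}: AUROC = {roc_auc_score(y_te, proba):.3f}")

evaluate(X_raw, y, "XGB on raw EHR features")
evaluate(X_embed, y, "XGB on LLM embeddings (placeholder)")
```

Because the placeholder embeddings carry no signal, the second AUROC hovers near 0.5; with real LLM embeddings it is the relative performance of the two representations, and the calibration of the predicted probabilities, that the study compares.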

The TRIPOD-LLM reporting guideline for studies using large language models.

Nat Med

January 2025

Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA.

Large language models (LLMs) are rapidly being adopted in healthcare, necessitating standardized reporting guidelines. We present TRIPOD-LLM, an extension of the TRIPOD+AI (transparent reporting of a multivariable model for individual prognosis or diagnosis + artificial intelligence) statement, addressing the unique challenges of LLMs in biomedical applications. TRIPOD-LLM provides a comprehensive checklist of 19 main items and 50 subitems, covering key aspects from title to discussion.

Objectives: Substance use disorder (SUD) continues to be one of the most stigmatized and under-treated conditions in the United States. Stigmatizing language used by healthcare workers can transmit bias to others within healthcare, including medical trainees. This study investigates how stigmatizing language and undergraduate medical education (UME) curricula may influence trainees' clinical decision-making for patients with SUD.
