This brief editorial describes an emerging area of machine learning technology: large language models (LLMs). LLMs, such as ChatGPT, are the technological disruptor of this decade. They will be integrated into search engines (Bing and Google) and into Microsoft products in the coming months, and will therefore fundamentally change the way patients and clinicians access and receive information. It is essential that telehealth clinicians are aware of LLMs and appreciate their capabilities and limitations.


Source: http://dx.doi.org/10.1177/1357633X231169055


Similar Publications

Background: The large language model ChatGPT can now accept image input with the GPT4-vision (GPT4V) version. We aimed to compare the performance of GPT4V to pretrained U-Net and vision transformer (ViT) models for the identification of the progression of multiple sclerosis (MS) on magnetic resonance imaging (MRI).

Methods: Paired coregistered MR images with and without progression were provided as input to ChatGPT4V in a zero-shot experiment to identify radiologic progression.
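
The zero-shot setup described above can be illustrated with a short sketch: two coregistered slices are encoded and sent to a vision-capable chat model in a single prompt. The model name, prompt wording, and file paths below are illustrative assumptions, not the study's actual protocol.

```python
# Minimal sketch of zero-shot image-pair prompting with a vision-capable
# chat model via the OpenAI Python SDK. Model name, prompt text, and file
# paths are illustrative assumptions, not the study's protocol.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def encode_image(path: str) -> str:
    """Return a base64 data URL for a local image file."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

baseline = encode_image("mri_baseline.png")    # hypothetical coregistered slice
follow_up = encode_image("mri_follow_up.png")  # hypothetical follow-up slice

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("These are two coregistered brain MRI slices from the same "
                      "patient at baseline and follow-up. Answer 'progression' or "
                      "'no progression' for new or enlarging lesions.")},
            {"type": "image_url", "image_url": {"url": baseline}},
            {"type": "image_url", "image_url": {"url": follow_up}},
        ],
    }],
)
print(response.choices[0].message.content)
```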


Objective: This study aimed to explore the utilization of a fine-tuned language model to extract expressions related to the Age-Friendly Health Systems 4M Framework (What Matters, Medication, Mentation, and Mobility) from nursing home worker text messages, deploy automated mapping of these expressions to a taxonomy, and explore the created expressions and relationships.

Materials And Methods: The dataset included 21 357 text messages from healthcare workers in 12 Missouri nursing homes. A sample of 860 messages was annotated by clinical experts to form a "Gold Standard" dataset.
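
The study fine-tuned a language model; purely to illustrate the shape of the mapping task, the sketch below trains a much simpler TF-IDF and logistic-regression baseline on a few invented messages and predicts a 4M category for a new one. All example messages and labels are made up.

```python
# Illustrative stand-in for the mapping task described above: classify
# free-text messages into the 4M categories. The study fine-tuned a
# language model; this sketch uses a simple TF-IDF + logistic-regression
# baseline on made-up example messages purely to show the task shape.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated examples (message, 4M category).
train = [
    ("Resident says spending time with family is what matters most", "What Matters"),
    ("Please review the new blood pressure medication dose", "Medication"),
    ("He seemed confused and disoriented this morning", "Mentation"),
    ("She walked to the dining room with her walker today", "Mobility"),
]
texts, labels = zip(*train)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["Resident walked to the dining room without assistance"]))
# expected: ['Mobility']
```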


Objective: The objectives of this study are to synthesize findings from recent research on retrieval-augmented generation (RAG) and large language models (LLMs) in biomedicine and to provide clinical development guidelines to improve effectiveness.

Materials And Methods: We conducted a systematic literature review and meta-analysis. The report was prepared in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement.
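
For readers unfamiliar with the pattern under review, the sketch below shows RAG in its simplest form: retrieve the passages most similar to a question and prepend them to the prompt before generation. The toy corpus, TF-IDF retriever, and stubbed generate() call are assumptions for illustration only.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern:
# retrieve the passages most similar to the question and prepend them to
# the prompt before generation. Corpus, retriever, and generate() stub are
# illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Metformin is a first-line therapy for type 2 diabetes.",
    "ACE inhibitors are commonly used to treat hypertension.",
    "Statins lower LDL cholesterol and cardiovascular risk.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the question."""
    vec = TfidfVectorizer().fit(corpus + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(corpus))[0]
    return [corpus[i] for i in sims.argsort()[::-1][:k]]

def generate(prompt: str) -> str:
    """Stub for an LLM call (e.g. a hosted chat-completion API)."""
    return f"[LLM answer conditioned on a prompt of {len(prompt)} characters]"

question = "What is the first-line drug for type 2 diabetes?"
context = "\n".join(retrieve(question))
answer = generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
print(answer)
```

Production systems typically swap the TF-IDF retriever for dense embeddings and a vector store, but the prompt-assembly step stays the same.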


CXR-LLaVA: a multimodal large language model for interpreting chest X-ray images.

Eur Radiol

January 2025

Department of Radiology, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea.

Objective: This study aimed to develop an open-source multimodal large language model (CXR-LLaVA) for interpreting chest X-ray images (CXRs), leveraging recent advances in large language models (LLMs) to potentially replicate the image interpretation skills of human radiologists.

Materials And Methods: For training, we collected 592,580 publicly available CXRs, of which 374,881 had labels for certain radiographic abnormalities (Dataset 1) and 217,699 provided free-text radiology reports (Dataset 2). After pre-training a vision transformer with Dataset 1, we integrated it with an LLM influenced by the LLaVA network.
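
The LLaVA-style design mentioned above, a vision encoder whose features are projected into the language model's embedding space, can be sketched as follows. All module sizes and components are toy stand-ins, not CXR-LLaVA's actual architecture.

```python
# Minimal sketch of the LLaVA-style pattern: a vision encoder produces
# patch features, a linear projector maps them into the language model's
# embedding space, and the projected "visual tokens" are prepended to the
# text embeddings. Everything here is a toy stand-in, not CXR-LLaVA itself.
import torch
import torch.nn as nn

class ToyVisionEncoder(nn.Module):
    """Stand-in for a pretrained ViT: image -> sequence of patch features."""
    def __init__(self, num_patches=16, feat_dim=256):
        super().__init__()
        self.num_patches = num_patches
        self.proj = nn.Linear(3 * 32 * 32, feat_dim)  # toy patchify: 32x32 patches

    def forward(self, images):  # images: (B, 3, 128, 128)
        b = images.size(0)
        patches = images.unfold(2, 32, 32).unfold(3, 32, 32)           # (B, 3, 4, 4, 32, 32)
        patches = patches.contiguous().view(b, self.num_patches, -1)   # (B, 16, 3*32*32)
        return self.proj(patches)                                      # (B, 16, feat_dim)

class ToyMultimodalLM(nn.Module):
    """Stand-in for the LLM plus the vision-to-text projection layer."""
    def __init__(self, vocab=1000, d_model=512, feat_dim=256):
        super().__init__()
        self.vision = ToyVisionEncoder(feat_dim=feat_dim)
        self.projector = nn.Linear(feat_dim, d_model)   # the LLaVA-style connector
        self.embed = nn.Embedding(vocab, d_model)
        # Causal masking omitted for brevity; a plain transformer stands in for the LLM.
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab)

    def forward(self, images, input_ids):
        visual_tokens = self.projector(self.vision(images))            # (B, 16, d_model)
        text_tokens = self.embed(input_ids)                            # (B, T, d_model)
        hidden = self.transformer(torch.cat([visual_tokens, text_tokens], dim=1))
        return self.lm_head(hidden)                                    # next-token logits

model = ToyMultimodalLM()
logits = model(torch.randn(1, 3, 128, 128), torch.randint(0, 1000, (1, 8)))
print(logits.shape)  # torch.Size([1, 24, 1000])
```

In LLaVA-style pipelines, the projector is typically trained first to align visual features with the language model before further instruction fine-tuning.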


Various large language models (LLMs) can provide human-level medical discussions, but they have not been compared regarding rhinoplasty knowledge. This study compared the leading LLMs in answering complex rhinoplasty consultation questions, as evaluated by plastic surgeons. Ten open-ended rhinoplasty consultation questions were presented to ChatGPT-4o, Google Gemini, Claude, and Meta-AI LLMs.

