AI Article Synopsis

  • The text introduces Latent Meaning Cells (LMC), a new deep learning model designed to create contextualized word representations by integrating local word context with metadata like section types and document IDs.
  • The model is particularly useful in the clinical field, where the text is often semi-structured and covers a wide range of topics.
  • In tests for zero-shot clinical acronym expansion on three datasets, the LMC outperformed various baseline models while requiring less pre-training, showcasing the importance of both metadata and the LMC's inference algorithm.

Article Abstract

We introduce Latent Meaning Cells, a deep latent variable model which learns contextualized representations of words by combining local lexical context and metadata. Metadata can refer to granular context, such as section type, or to more global context, such as unique document ids. Reliance on metadata for contextualized representation learning is apropos in the clinical domain where text is semi-structured and expresses high variation in topics. We evaluate the LMC model on the task of zero-shot clinical acronym expansion across three datasets. The LMC significantly outperforms a diverse set of baselines at a fraction of the pre-training cost and learns clinically coherent representations. We demonstrate that not only is metadata itself very helpful for the task, but that the LMC inference algorithm provides an additional large benefit.
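
The abstract describes choosing among candidate acronym expansions using a representation that combines local lexical context with metadata such as section type. As a rough illustration only, and not the authors' LMC model or inference algorithm, the sketch below mixes an averaged context embedding with a section-type embedding and scores candidate expansions by cosine similarity; all embeddings, names, and the mixing weight are made-up assumptions.

```python
# Illustrative sketch (not the LMC implementation): zero-shot acronym expansion
# by scoring candidate expansions against a representation that mixes local
# lexical context with a metadata (section-type) embedding.
# All vectors, vocabulary, and the mixing weight alpha are toy assumptions.

import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# Toy embedding tables; a real system would learn these during pre-training.
word_emb = {w: rng.normal(size=DIM) for w in
            ["patient", "blood", "pressure", "elevated", "history", "cardiac"]}
metadata_emb = {"discharge_summary": rng.normal(size=DIM),
                "radiology_report": rng.normal(size=DIM)}
expansion_emb = {"blood pressure": rng.normal(size=DIM),
                 "bodily pain": rng.normal(size=DIM)}

def represent(context_words, section_type, alpha=0.5):
    """Mix the averaged local context with the metadata vector (illustrative)."""
    ctx = np.mean([word_emb[w] for w in context_words if w in word_emb], axis=0)
    return alpha * ctx + (1 - alpha) * metadata_emb[section_type]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand(acronym_context, section_type, candidates):
    """Pick the candidate expansion closest to the combined representation."""
    rep = represent(acronym_context, section_type)
    return max(candidates, key=lambda c: cosine(rep, expansion_emb[c]))

print(expand(["patient", "elevated", "blood", "pressure"],
             "discharge_summary", ["blood pressure", "bodily pain"]))
```

Because the embeddings here are random, the printed choice is arbitrary; the point is only the shape of the scoring step, where metadata enters as a second embedding rather than extra context tokens.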


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8594244

Publication Analysis

Top Keywords

zero-shot clinical (8)
clinical acronym (8)
acronym expansion (8)
latent meaning (8)
meaning cells (8)
expansion latent (4)
cells introduce (4)
introduce latent (4)
cells deep (4)
deep latent (4)

Similar Publications

Natural products have long been a rich source of diverse and clinically effective drug candidates. Non-ribosomal peptides (NRPs), polyketides (PKs), and NRP-PK hybrids are three classes of natural products that display a broad range of bioactivities, including antibiotic, antifungal, anticancer, and immunosuppressant activities. However, discovering these compounds through traditional bioactivity-guided techniques is costly and time-consuming, often resulting in the rediscovery of known molecules.


Background: The application of natural language processing in medicine has increased significantly, including tasks such as information extraction and classification. Natural language processing plays a crucial role in structuring free-form radiology reports, facilitating the interpretation of textual content, and enhancing data utility through clustering techniques. Clustering allows for the identification of similar lesions and disease patterns across a broad dataset, making it useful for aggregating information and discovering new insights in medical imaging.
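
As a rough illustration of the report-clustering idea mentioned above, and not this publication's actual pipeline, the following sketch groups a few made-up radiology report snippets using TF-IDF features and k-means; the sample texts and the number of clusters are assumptions.

```python
# Illustrative sketch only: clustering free-text radiology reports with
# TF-IDF features and k-means. Sample reports and cluster count are assumed.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "No acute intracranial hemorrhage. Ventricles are normal in size.",
    "Small left lower lobe nodule, stable compared with prior exam.",
    "Acute subdural hematoma along the right convexity.",
    "Right upper lobe spiculated nodule, suspicious for malignancy.",
]

# Convert each report to a sparse TF-IDF vector, then group into two clusters.
features = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for report, label in zip(reports, labels):
    print(label, report)
```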


Assessing the performance of Microsoft Copilot, GPT-4 and Google Gemini in ophthalmology.

Can J Ophthalmol

January 2025

Faculty of Medicine, University of Montreal, Montreal, QC, Canada; Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada.

Objective: To evaluate the performance of large language models (LLMs), specifically Microsoft Copilot, GPT-4 (GPT-4o and GPT-4o mini), and Google Gemini (Gemini and Gemini Advanced), in answering ophthalmological questions and assessing the impact of prompting techniques on their accuracy.

Design: Prospective qualitative study.

Participants: Microsoft Copilot, GPT-4 (GPT-4o and GPT-4o mini), and Google Gemini (Gemini and Gemini Advanced).


Purpose: The potential of Large Language Models (LLMs) in enhancing a variety of natural language tasks in clinical fields includes medical imaging reporting. This pilot study examines the efficacy of a retrieval-augmented generation (RAG) LLM system, which draws on the zero-shot learning capability of LLMs and is integrated with a comprehensive database of PET reading reports, in improving reference to prior reports and decision making.

Methods: We developed a custom LLM framework with retrieval capabilities, leveraging a database of over 10 years of PET imaging reports from a single center.
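
As a hedged sketch of the general retrieval-augmented pattern described here, and not the study's custom framework, the snippet below retrieves the most similar prior reports by TF-IDF cosine similarity and assembles them into a prompt; the sample reports, the query, and the placeholder LLM hand-off are assumptions for illustration.

```python
# Minimal retrieval-augmented generation sketch (not the study's system):
# retrieve the prior PET reports most similar to the current finding and
# prepend them to the prompt handed to an LLM.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prior_reports = [
    "FDG PET/CT: hypermetabolic right hilar lymph node, SUVmax 6.2.",
    "FDG PET/CT: no abnormal uptake; previously noted hilar node resolved.",
    "FDG PET/CT: diffuse marrow uptake consistent with reactive change.",
]

def retrieve(query, corpus, k=2):
    """Return the k prior reports most similar to the query text."""
    vec = TfidfVectorizer().fit(corpus + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(corpus))[0]
    ranked = sorted(zip(sims, corpus), reverse=True)
    return [text for _, text in ranked[:k]]

query = "New FDG-avid right hilar lymph node on current PET/CT."
context = "\n".join(retrieve(query, prior_reports))
prompt = f"Prior reports:\n{context}\n\nCurrent finding:\n{query}\n\nSummarize change over time."

# The hand-off to whichever LLM the system uses would happen here, e.g. call_llm(prompt).
print(prompt)
```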


The Segment Anything model (SAM) is a powerful vision foundation model that is revolutionizing the traditional paradigm of segmentation. Despite this, a reliance on prompting each frame and large computational cost limit its usage in robotically assisted surgery. Applications, such as augmented reality guidance, require little user intervention along with efficient inference to be usable clinically.

