Publications by authors named "Zhiyong Lu"

In this study, CoNiO with a tunable and hierarchical distribution of oxygen vacancies was synthesized via Ce doping and NaBH4 reduction to enhance its electrochemical performance. Ce doping through a hydrothermal method gave rise to lattice distortions and uniform oxygen vacancies at asymmetric sites, thereby improving the mobility and concentration of carriers within CoNiO. Moreover, the NaBH4 reduction process introduced a considerable number of oxygen vacancies and surface-active sites, both of which contributed to the increased conductivity and specific capacitance.

This study reports a comprehensive environmental scan of the generative AI (GenAI) infrastructure in the national network for clinical and translational science across 36 institutions supported by the CTSA Program, led by the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH) in the United States. Key findings indicate a diverse range of institutional strategies, with most organizations in the experimental phase of GenAI deployment. The results underscore the need for a more coordinated approach to GenAI governance, emphasizing collaboration among senior leaders, clinicians, information technology staff, and researchers.

Objective: To propose Deep-RPD-Net, a 3-dimensional deep learning network with semisupervised learning (SSL) for the detection of reticular pseudodrusen (RPD) on spectral-domain OCT scans, explain its decision-making, and compare it with baseline methods.

Design: Deep learning model development.

Participants: Three hundred fifteen participants from the Age-Related Eye Disease Study 2 Ancillary OCT Study (AREDS2) and 161 participants from the Dark Adaptation in Age-related Macular Degeneration Study (DAAMD).
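
The semisupervised element is the distinguishing part of Deep-RPD-Net: unlabeled OCT volumes contribute to training the RPD detector. One common SSL recipe is pseudo-labeling; the sketch below illustrates only that general idea, with scikit-learn-style interfaces standing in for the actual 3-dimensional network, and is not Deep-RPD-Net's published training procedure.

```python
# A minimal pseudo-labeling sketch, assuming scikit-learn-style interfaces;
# illustrative only, not Deep-RPD-Net's actual training procedure.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_rounds(model, X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
    """Iteratively absorb confidently predicted unlabeled scans into training."""
    X_train, y_train = X_lab, y_lab
    for _ in range(rounds):
        model.fit(X_train, y_train)
        proba = model.predict_proba(X_unlab)[:, 1]
        confident = (proba > threshold) | (proba < 1 - threshold)
        if not confident.any():
            break  # nothing confident enough to pseudo-label
        X_train = np.vstack([X_train, X_unlab[confident]])
        y_train = np.concatenate([y_train, (proba[confident] > 0.5).astype(int)])
        X_unlab = X_unlab[~confident]
    return model

# Toy usage: random features stand in for embeddings of 3-D OCT volumes.
rng = np.random.default_rng(0)
clf = pseudo_label_rounds(LogisticRegression(),
                          rng.normal(size=(40, 8)), rng.integers(0, 2, size=40),
                          rng.normal(size=(100, 8)))
```

The confidence threshold trades label noise against the amount of extra training data drawn from the unlabeled pool.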

Although large language models (LLMs) have been assessed for general medical knowledge using medical licensing exams, their ability to effectively support clinical decision-making tasks, such as selecting and using medical calculators, remains uncertain. Here, we evaluate the capability of both medical trainees and LLMs to recommend medical calculators in response to various multiple-choice clinical scenarios such as risk stratification, prognosis, and disease diagnosis. We assessed eight LLMs, including open-source, proprietary, and domain-specific models, with 1,009 question-answer pairs across 35 clinical calculators and measured human performance on a subset of 100 questions.
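
Scoring such a benchmark reduces to exact-match accuracy over the multiple-choice answers, optionally broken down by calculator. The sketch below shows that bookkeeping; the record fields and the two calculator names are hypothetical examples, not the paper's schema.

```python
# Exact-match scoring sketch; field names and calculator names are
# hypothetical examples, not the benchmark's actual schema.
from collections import defaultdict

def accuracy_by_calculator(items):
    """Per-calculator accuracy over (calculator, predicted, gold) records."""
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        total[item["calculator"]] += 1
        correct[item["calculator"]] += item["predicted"] == item["gold"]
    return {calc: correct[calc] / total[calc] for calc in total}

items = [
    {"calculator": "CHA2DS2-VASc", "predicted": "B", "gold": "B"},
    {"calculator": "CHA2DS2-VASc", "predicted": "A", "gold": "C"},
    {"calculator": "MELD", "predicted": "D", "gold": "D"},
]
print(accuracy_by_calculator(items))  # {'CHA2DS2-VASc': 0.5, 'MELD': 1.0}
```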

Large language models (LLMs) represent a transformative class of AI tools capable of revolutionizing various aspects of healthcare by generating human-like responses across diverse contexts and adapting to novel tasks following human instructions. Their potential application spans a broad range of medical tasks, such as clinical documentation, matching patients to clinical trials, and answering medical questions. In this primer paper, we propose an actionable guideline to help healthcare professionals more efficiently utilize LLMs in their work, along with a set of best practices.

Objectives: The National Library of Medicine (NLM) currently indexes close to a million articles each year from more than 5300 medicine and life sciences journals. Of these, a significant number of articles contain critical information about the structure, genetics, and function of genes and proteins in normal and disease states. These articles are identified by NLM curators, who manually link them to the corresponding gene records in the NCBI Gene database.

In radiology, Artificial Intelligence (AI) has significantly advanced report generation, but automatic evaluation of these AI-produced reports remains challenging. Current metrics, such as Conventional Natural Language Generation (NLG) and Clinical Efficacy (CE), often fall short in capturing the semantic intricacies of clinical contexts or overemphasize clinical details, undermining report clarity. To overcome these issues, our proposed method synergizes the expertise of professional radiologists with Large Language Models (LLMs), like GPT-3.

Article Synopsis
  • Large language models (LLMs) depend on high-quality biomedical annotations for training, which are usually created through costly and slow human efforts.
  • LLMs can streamline the curation process, creating a feedback loop where improvements in one area aid the other.
  • The workshop will explore both the benefits and challenges of using LLMs in biomedical annotation and curation, highlighting the current landscape and implications for the future.

The emergent abilities of large language models (LLMs) have demonstrated great potential in solving medical questions. They possess considerable medical knowledge but may still hallucinate, and their knowledge is difficult to update. While Retrieval-Augmented Generation (RAG) has been proposed to enhance the medical question-answering capabilities of LLMs with external knowledge bases, it may still fail in complex cases where multiple rounds of information-seeking are required.
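
A natural response to that failure mode is to let the model drive several retrieval rounds instead of one. The sketch below shows the generic loop under that assumption; `retrieve` and `llm` are hypothetical callables, and the reply fields are invented for illustration rather than taken from any specific system in the paper.

```python
# Generic multi-round retrieval loop; `retrieve` and `llm` are hypothetical
# callables, and the reply fields are invented for illustration.
def iterative_rag(question, retrieve, llm, max_rounds=3):
    """Let the model request follow-up evidence until it can answer."""
    context, query = [], question
    for _ in range(max_rounds):
        context.extend(retrieve(query))                  # grow the evidence pool
        reply = llm(question=question, context=context)
        if reply["answerable"]:                          # evidence judged sufficient
            return reply["answer"]
        query = reply["followup_query"]                  # ask for what is missing
    return llm(question=question, context=context, force_answer=True)["answer"]
```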

Patient recruitment is challenging for clinical trials. We introduce TrialGPT, an end-to-end framework for zero-shot patient-to-trial matching with large language models. TrialGPT comprises three modules: it first performs large-scale filtering to retrieve candidate trials (TrialGPT-Retrieval); then predicts criterion-level patient eligibility (TrialGPT-Matching); and finally generates trial-level scores (TrialGPT-Ranking).
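
Functionally, the three modules compose into a single funnel from the full trial registry down to a ranked shortlist. The sketch below shows that composition; the function signatures, the `criteria` field, and the scoring details are illustrative stand-ins, not the released TrialGPT code.

```python
# Illustrative composition of the three TrialGPT stages; signatures and the
# `criteria` field are stand-ins, not the released implementation.
def match_patient(patient_note, all_trials, retrieval, matching, ranking, top_k=10):
    candidates = retrieval(patient_note, all_trials)           # TrialGPT-Retrieval
    judged = [(trial, matching(patient_note, trial.criteria))  # TrialGPT-Matching:
              for trial in candidates]                         # criterion-level verdicts
    scored = [(trial, ranking(verdicts)) for trial, verdicts in judged]  # TrialGPT-Ranking
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]
```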

Summary: Over 55% of author names in PubMed are ambiguous: the same name is shared by different individual researchers. This poses significant challenges for precise literature retrieval with author name queries, a common behavior in biomedical literature search. In response, we present a comprehensive dataset of disambiguated authors.

Training a neural network-based biomedical named entity recognition (BioNER) model usually requires extensive and costly human annotations. While several studies have employed multi-task learning with multiple BioNER datasets to reduce human effort, this approach does not consistently yield performance improvements and may introduce label ambiguity in different biomedical corpora. We aim to tackle those challenges through transfer learning from easily accessible resources with fewer concept overlaps with biomedical datasets.
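
One way to picture the proposed transfer is to initialize the BioNER model from a general-domain NER checkpoint and replace only the label head. The sketch below does this with the Hugging Face transformers API; the checkpoint and label set are placeholder choices, not necessarily the resources used in the study.

```python
# Initialize from a general-domain NER checkpoint and swap in a fresh
# biomedical label head; checkpoint and labels are placeholder choices.
from transformers import AutoModelForTokenClassification, AutoTokenizer

SOURCE = "dslim/bert-base-NER"  # general-domain (newswire) NER checkpoint
BIO_LABELS = ["O", "B-Disease", "I-Disease", "B-Chemical", "I-Chemical"]

tokenizer = AutoTokenizer.from_pretrained(SOURCE)
model = AutoModelForTokenClassification.from_pretrained(
    SOURCE,
    num_labels=len(BIO_LABELS),
    ignore_mismatched_sizes=True,  # discard the old head, keep the encoder
)
# The encoder weights transfer; only the new head starts from scratch, so
# fine-tuning on the biomedical corpus needs fewer annotated examples.
```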

Article Synopsis
  • The study explores the integration of Large Language Models (LLMs) in healthcare, noting their potential for improving diagnostics and patient care while also highlighting their vulnerability to adversarial attacks.
  • It reveals that both open-source and proprietary LLMs can be manipulated during medical tasks using real-world patient data, with more advanced models requiring specific adversarial data for effective attacks.
  • The findings emphasize the importance of developing strong security measures and defensive strategies to protect LLMs in healthcare, ensuring their safe implementation.

The summarization capabilities of pretrained and large language models (LLMs) have been widely validated in general domains, but their use on scientific corpora, which involve complex sentences and specialized knowledge, has been less assessed. This paper presents conceptual and experimental analyses of scientific summarization, highlighting the inadequacies of traditional evaluation methods, such as n-gram overlap, embedding comparison, and QA-based metrics, particularly in providing explanations, grasping scientific concepts, or identifying key content. Subsequently, we introduce the Facet-aware Metric (FM), employing LLMs for advanced semantic matching to evaluate summaries based on different facets.
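
At its core, facet-aware evaluation decomposes the reference into discrete points and asks whether the candidate covers each one. The sketch below shows a minimal version of that scheme; `ask_llm` is a hypothetical yes/no judge, and the prompt wording is invented, not the FM paper's.

```python
# Minimal facet-coverage scoring; `ask_llm` is a hypothetical yes/no judge
# and the prompt wording is invented, not the FM paper's.
def facet_aware_score(candidate_summary, reference_facets, ask_llm):
    covered = [
        ask_llm(f"Does the summary state the following point?\n"
                f"Point: {facet}\nSummary: {candidate_summary}")  # -> True/False
        for facet in reference_facets
    ]
    return sum(covered) / len(reference_facets)  # fraction of facets covered
```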

Article Synopsis
  • Large language models (LLMs) show potential in summarizing medical evidence, but using proprietary models can lead to issues like lack of transparency and reliance on specific vendors.
  • This study focused on enhancing the performance of open-source LLMs by fine-tuning three models—PRIMERA, LongT5, and Llama-2—using a dataset of 8,161 systematic reviews and summaries.
  • Fine-tuning resulted in significant performance improvements, with LongT5 performing similarly to GPT-3.5 in certain settings, indicating that smaller models can outperform larger models in specific tasks, like summarizing medical evidence.

Deep learning has enabled breakthroughs in automated diagnosis from medical imaging, with many successful applications in ophthalmology. However, standard medical image classification approaches only assess disease presence at the time of acquisition, neglecting the common clinical setting of longitudinal imaging. For slow, progressive eye diseases like age-related macular degeneration (AMD) and primary open-angle glaucoma (POAG), patients undergo repeated imaging over time to track disease progression, and forecasting the future risk of developing a disease is critical to properly plan treatment.
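
The shift from a single scan to a visit history changes the prediction interface: the input becomes a sequence. The sketch below shows the simplest possible version, mean-pooling per-visit embeddings into a logistic risk score; it is an illustrative baseline, not the architecture studied here.

```python
# Illustrative longitudinal baseline: pool per-visit embeddings, then score
# future risk with a logistic head. Not the architecture studied in the paper.
import numpy as np

def risk_from_visits(visit_embeddings, weights, bias):
    """visit_embeddings: (n_visits, dim) array, one row per imaging visit."""
    pooled = np.asarray(visit_embeddings).mean(axis=0)       # aggregate the history
    return 1.0 / (1.0 + np.exp(-(pooled @ weights + bias)))  # risk in [0, 1]
```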

Article Synopsis
  • Medical texts are difficult to manage and time-consuming to curate manually, prompting the development of NLP algorithms to automate this process for improved efficiency in the biomedical field.
  • The study introduces Ascle, a user-friendly tool designed for biomedical researchers that offers generative functions like question-answering and text summarization, along with 12 essential NLP functions and search capabilities.
  • After fine-tuning 32 language models and validating through physician assessments, results showed significant improvements in text generation tasks, with notable increases in machine translation and question-answering accuracy.

Background: Large language models like GPT-3.5-turbo and GPT-4 hold promise for healthcare professionals, but they may inadvertently inherit biases during their training, potentially affecting their utility in medical applications. Despite a few past attempts, the precise impact and extent of these biases remain uncertain.

Expert curation is essential to capture knowledge of enzyme functions from the scientific literature in FAIR open knowledgebases, but it cannot keep pace with the rate of new discoveries and new publications. In this work we present EnzChemRED (Enzyme Chemistry Relation Extraction Dataset), a new training and benchmarking dataset to support the development of Natural Language Processing (NLP) methods, such as (large) language models, that can assist enzyme curation. EnzChemRED consists of 1,210 expert-curated PubMed abstracts in which enzymes and the chemical reactions they catalyze are annotated using identifiers from the protein knowledgebase UniProtKB and the chemical ontology ChEBI.
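
Concretely, an EnzChemRED-style example pairs entity spans grounded to UniProtKB and ChEBI with a relation describing the catalyzed conversion. The record below is an invented illustration of that shape (the PMID is a placeholder and the dict layout is guessed, not the released schema), though the identifiers themselves are real: yeast alcohol dehydrogenase, ethanol, and acetaldehyde.

```python
# Illustrative EnzChemRED-style record (layout guessed, PMID a placeholder);
# the identifiers are real: P00330 is yeast alcohol dehydrogenase,
# CHEBI:16236 is ethanol, CHEBI:15343 is acetaldehyde.
example_annotation = {
    "pmid": "0000000",
    "entities": [
        {"span": [10, 27], "type": "Protein",  "id": "UniProtKB:P00330"},
        {"span": [45, 52], "type": "Chemical", "id": "CHEBI:16236"},
        {"span": [70, 82], "type": "Chemical", "id": "CHEBI:15343"},
    ],
    "relations": [
        {"type": "catalyzes_conversion",
         "enzyme": "UniProtKB:P00330",
         "substrate": "CHEBI:16236",
         "product": "CHEBI:15343"},
    ],
}
```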

The automatic recognition of biomedical relationships is an important step in the semantic understanding of the information contained in the unstructured text of the published literature. The BioRED track at BioCreative VIII aimed to foster the development of such methods by providing participants with the BioRED-BC8 corpus, a collection of 1000 PubMed documents manually curated for diseases, genes/proteins, chemicals, cell lines, gene variants, and species, as well as pairwise relationships between them: disease-gene, chemical-gene, disease-variant, gene-gene, chemical-disease, chemical-chemical, chemical-variant, and variant-variant. Furthermore, relationships are categorized into the following semantic categories: positive correlation, negative correlation, binding, conversion, drug interaction, comparison, cotreatment, and association.
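
With those entity and relation types fixed, document-level relation extraction is typically framed as classifying candidate entity pairs. The sketch below enumerates only the pair types the corpus annotates; the names and the setup are illustrative, not the track's evaluation code.

```python
# Enumerate only the pair types BioRED-BC8 annotates; illustrative setup,
# not the track's evaluation code.
from itertools import combinations

VALID_PAIRS = {frozenset(p) for p in [
    ("disease", "gene"), ("chemical", "gene"), ("disease", "variant"),
    ("gene", "gene"), ("chemical", "disease"), ("chemical", "chemical"),
    ("chemical", "variant"), ("variant", "variant"),
]}

def candidate_pairs(entities):
    """entities: list of (mention, type) tuples from one document."""
    for a, b in combinations(entities, 2):
        if frozenset((a[1], b[1])) in VALID_PAIRS:
            yield a, b  # each surviving pair goes to the relation classifier
```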
