Background: Large language models (LLMs), such as ChatGPT, excel at interpreting unstructured data from public sources, yet are limited when responding to queries on private repositories, such as electronic health records (EHRs). We hypothesized that prompt engineering could enhance the accuracy of LLMs for interpreting EHR data without requiring domain knowledge, thus expanding their utility for patients and personalized diagnostics.
Methods: We designed and systematically tested prompt engineering techniques to improve the ability of LLMs to interpret EHRs for nuanced diagnostic questions, with a panel of medical experts as the reference standard.
Domain Adapt Represent Transf Afford Healthc AI Resour Divers Glob Health (2021)
September 2021
Transfer learning from supervised ImageNet models has been frequently used in medical image analysis. Yet, no large-scale evaluation has been conducted to benchmark the efficacy of newly-developed pre-training techniques for medical image analysis, leaving several important questions unanswered. As the first step in this direction, we conduct a systematic study on the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset, and 14 top self-supervised ImageNet models on 7 diverse medical tasks in comparison with the supervised ImageNet model.
Atrial fibrillation (AF) is a major cause of heart failure and stroke. The early maintenance of sinus rhythm has been shown to reduce major cardiovascular endpoints, yet is difficult to achieve. For instance, it is unclear how discoveries at the genetic and cellular level can be used to tailor pharmacotherapy.
IEEE Trans Neural Netw Learn Syst
August 2023
The objective of compressive sampling is to determine a sparse vector from an observation vector. This brief describes an analog neural method to achieve the objective. Unlike previous analog neural models, which either resort to the l1-norm approximation or offer local convergence only, the proposed method avoids any approximation of the l0-norm term and is provably capable of leading to the optimum solution.
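For context on what the paper departs from, here is a rough sketch of the conventional l1-relaxation baseline (ISTA); the function name and parameters are assumptions for illustration, not the paper's analog neural method, which operates on the l0 term directly:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding: minimize 0.5*||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                  # gradient of the quadratic term
        z = x - g / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)   # observation matrix
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]            # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y)                                # recovers the sparse support
```

The soft-threshold step is exactly the l1 approximation the brief's method avoids; with enough measurements, ISTA still recovers the support of the sparse vector.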
Domain Adapt Represent Transf Distrib Collab Learn (2020)
October 2020
Contrastive representation learning is the state of the art in computer vision, but it requires huge mini-batch sizes, special network designs, or memory banks, making it unappealing for 3D medical imaging. In 3D medical imaging, reconstruction-based self-supervised learning reaches a new height in performance but lacks a mechanism for learning contrastive representations. This paper therefore proposes a new framework for self-supervised contrastive learning via reconstruction, called Parts2Whole, because it exploits the part-whole relationship to learn contrastive representations without using a contrastive loss: reconstructing an image (whole) from its own parts compels the model to learn similar latent features for all of those parts, while reconstructing different images (wholes) from their respective parts forces the model to push parts belonging to different wholes farther apart in the latent space; the trained model is thereby capable of distinguishing images. We evaluated Parts2Whole on five distinct imaging tasks covering both classification and segmentation, comparing it with four competing publicly available 3D pretrained models: Parts2Whole significantly outperforms them on two of the five tasks and achieves competitive performance on the remaining three. This superior performance is attributable to the contrastive representations learned with Parts2Whole.
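A minimal data-side sketch of the part-whole idea described above (the function names, part sizes, and pairing scheme are illustrative assumptions, not the paper's exact pipeline): each whole volume is cut into random parts, and every part is paired with the whole it came from as its reconstruction target, so no explicit contrastive loss is needed.

```python
import numpy as np

def extract_parts(volume, n_parts=4, size=8, rng=None):
    """Cut random cubic parts out of a 3D volume."""
    rng = rng or np.random.default_rng()
    d, h, w = volume.shape
    parts = []
    for _ in range(n_parts):
        z, y, x = (rng.integers(0, s - size + 1) for s in (d, h, w))
        parts.append(volume[z:z+size, y:y+size, x:x+size])
    return parts

rng = np.random.default_rng(1)
wholes = [rng.standard_normal((16, 16, 16)) for _ in range(2)]

# Training pairs: (part, index of the whole it must reconstruct). Parts of the
# same whole share a target, which implicitly pulls their latents together;
# parts of different wholes have different targets, which pushes them apart.
pairs = [(p, i) for i, v in enumerate(wholes) for p in extract_parts(v, rng=rng)]
```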
Med Image Comput Comput Assist Interv
October 2019
Transfer learning from natural image to medical image has been established as one of the most practical paradigms in deep learning for medical image analysis. However, to fit this paradigm, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information and inevitably compromising the performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created (with no manual labeling), self-taught (learned by self-supervision), and generic (served as source models for generating application-specific target models).
Generative adversarial networks (GANs) have ushered in a revolution in image-to-image translation. The development and proliferation of GANs raises an interesting question: can we train a GAN to remove an object, if present, from an image while otherwise preserving the image? Specifically, can a GAN "virtually heal" anyone by turning his medical image, with an unknown health status (diseased or healthy), into a healthy one, so that diseased regions could be revealed by subtracting those two images? Such a task requires a GAN to identify a minimal subset of target pixels for domain translation, an ability that we call fixed-point translation, which no GAN is equipped with yet. Therefore, we propose a new GAN, called Fixed-Point GAN, trained by (1) supervising same-domain translation through a conditional identity loss, and (2) regularizing cross-domain translation through revised adversarial, domain classification, and cycle consistency loss.
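The conditional identity loss mentioned above can be sketched with a toy generator (the names and the trivial generator are assumptions for illustration only): when asked to translate an image into its own domain, the generator should act as a fixed point and return the image unchanged.

```python
import numpy as np

def identity_loss(generator, x, domain):
    """L1 penalty on same-domain translation: G(x, c_x) should equal x."""
    return np.abs(generator(x, domain) - x).mean()

def toy_generator(x, domain, source_domain=0):
    """A trivial stand-in generator: identity on same-domain requests,
    a crude constant shift on cross-domain requests."""
    return x if domain == source_domain else x + 1.0

x = np.zeros((4, 4))
same = identity_loss(toy_generator, x, domain=0)   # same-domain: loss is 0
cross = np.abs(toy_generator(x, 1) - x).mean()     # cross-domain changes pixels
```

Minimizing this loss drives the generator toward changing only the minimal subset of pixels needed for cross-domain translation.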
Cardiovascular disease (CVD) is the number one killer in the USA, yet it is largely preventable (World Health Organization 2011). To prevent CVD, carotid intima-media thickness (CIMT) imaging, a noninvasive ultrasonography method, has proven to be clinically valuable in identifying at-risk persons before adverse events. Researchers are developing systems to automate CIMT video interpretation based on deep learning, but such efforts are impeded by the lack of large annotated CIMT video datasets.
IEEE Trans Neural Netw Learn Syst
August 2018
In the training stage of radial basis function (RBF) networks, we need to select some suitable RBF centers first. However, many existing center selection algorithms were designed for the fault-free situation. This brief develops a fault-tolerant algorithm that trains an RBF network and selects the RBF centers simultaneously.
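For contrast with the brief's fault-tolerant approach, here is a sketch of the conventional fault-free baseline it improves on (the subsampling-based center selection and parameter values are illustrative assumptions): pick a subset of training points as RBF centers, then solve a regularized least-squares problem for the output weights.

```python
import numpy as np

def rbf_design(x, centers, width=0.5):
    """Gaussian RBF design matrix: one column per center."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def train_rbf(x, y, n_centers=10, reg=1e-6):
    # Naive fault-free center selection: evenly subsample the training inputs.
    centers = x[:: max(1, len(x) // n_centers)][:n_centers]
    Phi = rbf_design(x, centers)
    # Regularized least squares for the output weights.
    w = np.linalg.solve(Phi.T @ Phi + reg * np.eye(len(centers)), Phi.T @ y)
    return centers, w

x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
centers, w = train_rbf(x, y)
y_hat = rbf_design(x, centers) @ w     # close fit to sin on the training range
```

A fault-tolerant variant would instead account for weight failures when choosing the centers and weights, rather than assuming the trained network remains intact.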
IEEE Trans Neural Netw Learn Syst
April 2018
This paper studies the effects of uniform input noise and Gaussian input noise on the dual neural network-based WTA (DNN-WTA) model. We show that the state of the network (under either uniform or Gaussian input noise) converges to one of the equilibrium points. We then derive a formula to check whether the network produces the correct output.
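An empirical check in the spirit of the question the paper answers analytically (this simulation is an illustrative assumption, not the paper's derived formula): does the winner-take-all output stay correct when uniform noise is added to the inputs?

```python
import numpy as np

def wta(u):
    """Ideal winner-take-all: output 1 for the largest input, 0 elsewhere."""
    out = np.zeros_like(u)
    out[np.argmax(u)] = 1.0
    return out

rng = np.random.default_rng(2)
u = np.array([0.1, 0.9, 0.4, 0.3])        # clean inputs; index 1 should win
trials = 1000
correct = sum(
    np.argmax(u + rng.uniform(-0.2, 0.2, size=u.shape)) == 1
    for _ in range(trials)
)
rate = correct / trials
```

Here the gap between the largest and second-largest input (0.5) exceeds the worst-case noise difference (0.4), so the winner is always preserved; a closed-form correctness condition of this kind is what the paper derives.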
IEEE Trans Neural Netw Learn Syst
June 2017
Many existing results on fault-tolerant algorithms focus on the single fault source situation, where a trained network is affected by one kind of weight failure. In fact, a trained network may be affected by multiple kinds of weight failure. This paper first studies how the open weight fault and the multiplicative weight noise degrade the performance of radial basis function (RBF) networks.
IEEE Trans Neural Netw Learn Syst
October 2017
The major limitation of the Lagrange programming neural network (LPNN) approach is that the objective function and the constraints should be twice differentiable. Since sparse approximation involves nondifferentiable functions, the original LPNN approach is not suitable for recovering sparse signals. This paper proposes a new formulation of the LPNN approach based on the concept of the locally competitive algorithm (LCA).
IEEE Trans Neural Netw Learn Syst
April 2016
Fault tolerance is an interesting property of artificial neural networks. However, the existing fault models are able to describe only a limited set of node fault situations, such as stuck-at-zero and stuck-at-one. There is no general model that is able to describe a large class of node fault situations.
IEEE Trans Vis Comput Graph
August 2015
Many existing pre-computed radiance transfer (PRT) approaches for all-frequency lighting store the information of a 3D object in a per-vertex manner. To preserve the fidelity of high-frequency effects, the 3D object must be tessellated densely; otherwise, rendering artifacts due to interpolation may appear.
IEEE Trans Neural Netw Learn Syst
September 2015
The dual neural network (DNN)-based k-winner-take-all (kWTA) model is an effective approach for finding the k largest inputs from n inputs. Its major assumption is that the threshold logic units (TLUs) can be implemented in a perfect way. However, when differential bipolar pairs are used for implementing TLUs, the transfer function of TLUs is a logistic function.
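The ideal behavior assumed above can be sketched as follows (the threshold-finding shortcut and the gain value are illustrative assumptions, not the DNN-based dynamics): a threshold t is chosen so that exactly k inputs exceed it, and each ideal TLU outputs a hard step at t, whereas a differential-pair implementation yields a logistic curve instead.

```python
import numpy as np

def kwta(u, k):
    """Ideal k-winner-take-all: 1 for the k largest inputs, 0 elsewhere."""
    t = np.sort(u)[-k]                 # k-th largest value as the threshold
    return (u >= t).astype(float)

def logistic_tlu(u, t, gain=50.0):
    """Non-ideal TLU: a steep logistic in place of the hard step at t."""
    return 1.0 / (1.0 + np.exp(-gain * (u - t)))

u = np.array([0.2, 0.8, 0.5, 0.9, 0.1])
out = kwta(u, k=2)                     # selects the inputs 0.9 and 0.8
soft = logistic_tlu(u, t=0.8)          # logistic outputs are only near 0 or 1
```

With a logistic TLU the outputs are no longer exactly binary, which is why the model's behavior under this non-ideality needs separate analysis.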