The adoption of large language models (LLMs) in healthcare demands a careful analysis of their potential to spread false medical knowledge. Because LLMs ingest massive volumes of data from the open Internet during training, they are potentially exposed to unverified medical knowledge that may include deliberately planted misinformation. Here, we perform a threat assessment that simulates a data-poisoning attack against The Pile, a popular dataset used for LLM development.
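The threat assessment described above can be illustrated with a toy sketch of corpus poisoning: replacing a small, attacker-chosen fraction of training documents with planted misinformation. This is a minimal illustration only; the function name, the toy corpus, and the sample misinformation strings are invented here and do not reflect the study's actual methodology or scale.

```python
import random

def poison_corpus(documents, misinformation, fraction, seed=0):
    """Replace a random fraction of corpus documents with planted misinformation.

    documents and misinformation are lists of strings; fraction is the share
    of the corpus to replace (e.g. 0.001 for 0.1% of documents).
    """
    rng = random.Random(seed)
    poisoned = list(documents)
    n_poison = int(len(poisoned) * fraction)
    # Pick distinct document slots at random and overwrite each with one
    # of the attacker's misinformation passages.
    for idx in rng.sample(range(len(poisoned)), n_poison):
        poisoned[idx] = rng.choice(misinformation)
    return poisoned, n_poison

# Toy usage: poison 10% of a 100-document corpus.
corpus = [f"clean document {i}" for i in range(100)]
fake = ["false claim about drug X", "fabricated vaccine statistic"]
poisoned, n = poison_corpus(corpus, fake, 0.10)
```

A real attack would operate on web-scale data (e.g. pages later scraped into a corpus like The Pile) rather than on an in-memory list, but the core quantity of interest is the same: the fraction of training text the attacker controls.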
Background And Objectives: Classical biomedical data science models are trained on a single modality and aimed at one specific task. However, the exponential growth in the size and capabilities of foundation models inside and outside medicine reflects a shift toward task-agnostic models trained on large-scale, often internet-sourced, data. Recent research on smaller foundation models trained on curated literature, such as programming textbooks, has shown that they can match or exceed the capabilities of large generalist models, suggesting a potential middle ground between small task-specific models and large foundation models.
Objective: The objective of this study was to investigate the intraoperative use of indocyanine green videoangiography with FLOW 800 hemodynamic parameters during superficial temporal artery-middle cerebral artery (STA-MCA) bypass surgery to predict anastomosis patency before the anastomosis is performed.
Methods: A retrospective and exploratory data analysis was conducted using FLOW 800 software prior to anastomosis to assess four regions of interest (ROIs; proximal and distal recipients and adjacent and remote gyri) for four hemodynamic parameters (speed, delay, rise time, and time to peak). Medical records were used to classify patients into flow and no-flow groups based on immediate or perioperative anastomosis patency.
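The grouping step in the methods above can be sketched as a small exploratory analysis: bucket measurements by flow group and ROI, then average each hemodynamic parameter within a bucket. The record fields and numeric values below are hypothetical stand-ins, not data or export formats from the FLOW 800 software.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical measurements: one record per patient/ROI pair with the four
# hemodynamic parameters and the chart-derived patency outcome used to
# assign patients to flow vs. no-flow groups. All values are invented.
records = [
    {"patient": 1, "roi": "proximal_recipient", "speed": 180.0, "delay": 2.1,
     "rise_time": 1.4, "time_to_peak": 3.6, "patent": True},
    {"patient": 1, "roi": "distal_recipient", "speed": 150.0, "delay": 2.4,
     "rise_time": 1.6, "time_to_peak": 4.0, "patent": True},
    {"patient": 2, "roi": "proximal_recipient", "speed": 60.0, "delay": 4.8,
     "rise_time": 2.9, "time_to_peak": 7.5, "patent": False},
    {"patient": 2, "roi": "distal_recipient", "speed": 55.0, "delay": 5.1,
     "rise_time": 3.1, "time_to_peak": 7.9, "patent": False},
]

PARAMS = ("speed", "delay", "rise_time", "time_to_peak")

# Group measurements by (flow group, ROI).
buckets = defaultdict(list)
for rec in records:
    group = "flow" if rec["patent"] else "no_flow"
    buckets[(group, rec["roi"])].append(rec)

# Average each hemodynamic parameter within each (group, ROI) bucket.
summary = {
    key: {p: mean(r[p] for r in rows) for p in PARAMS}
    for key, rows in buckets.items()
}
```

Comparing the per-ROI averages between the flow and no-flow groups is one simple way to ask whether any parameter separates the two outcomes, which mirrors the exploratory framing of the study.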