Publications by authors named "M Livne"

Large Language Models (LLMs) have substantially driven scientific progress in various domains, and many papers have demonstrated their ability to tackle complex problems with creative solutions. Our paper introduces a new foundation model, nach0, capable of solving various chemical and biological tasks: biomedical question answering, named entity recognition, molecular generation, molecular synthesis, attribute prediction, and others. nach0 is a multi-domain and multi-task encoder-decoder LLM pre-trained on unlabeled text from scientific literature, patents, and molecule strings to incorporate a range of chemical and linguistic knowledge.
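To illustrate the text-to-text, multi-task framing described above, here is a minimal sketch that treats each task (question answering, entity extraction, molecular generation over SMILES strings) as a prompted sequence-to-sequence call. It uses a generic T5 checkpoint from Hugging Face transformers as a stand-in; the checkpoint name and prompt templates are assumptions, not the published nach0 weights or prompt format.

```python
# Minimal sketch: one encoder-decoder LLM handling heterogeneous tasks as text-to-text.
# The checkpoint and prompt prefixes below are illustrative stand-ins, not nach0 itself.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "t5-base"  # placeholder checkpoint, assumed for the sketch
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

prompts = [
    "answer the question: What enzyme does aspirin inhibit?",        # biomedical QA
    "extract entities: Imatinib inhibits BCR-ABL tyrosine kinase.",  # named entity recognition
    "generate a molecule similar to: CC(=O)Oc1ccccc1C(=O)O",         # molecular generation (SMILES)
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The point the sketch illustrates is the design choice named in the abstract: a single encoder-decoder model serves multiple chemical and linguistic tasks by casting each of them as conditional text generation.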


Background: Adequate pain control following lung transplantation (LTx) surgery is paramount. Thoracic epidural analgesia (TEA) is the gold standard; however, the potential use of extracorporeal membrane oxygenation (ECMO) and consequent anticoagulation therapy raises safety concerns, prompting clinicians to seek safer alternatives. The utility of thoracic wall blocks in general thoracic surgery is well established; however, their role in the context of LTx has been poorly investigated.

Article Synopsis
  • The study examines the use of a deep learning (DL) autosegmentation model to speed up organ-at-risk segmentation in radiation therapy for head and neck cancer, aiming to improve treatment access without sacrificing accuracy.
  • Expert radiation oncologists (ROs) created gold standard (GS) contours on CT images, and a custom 3D U-Net DL model was trained to generate contour predictions, which were then compared to contours made by medical dosimetry assistants (MDAs) in a randomized trial (a minimal architectural sketch follows this list).
  • Results showed that using the DL model significantly reduced contouring time by 76% overall and by 35% in RO revisions specifically, while DL-generated contours were as accurate as, or more accurate than, those revised by MDAs, with 76%
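The following is a minimal PyTorch sketch of a 3D U-Net of the kind referenced in the synopsis above. The channel counts, network depth, patch size, and number of organ-at-risk classes are illustrative assumptions, not the study's actual architecture or training configuration.

```python
# Minimal 3D U-Net sketch for per-voxel organ-at-risk segmentation of CT volumes.
# All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_channels=1, num_classes=8, base_ch=16):
        super().__init__()
        self.enc1 = conv_block(in_channels, base_ch)
        self.enc2 = conv_block(base_ch, base_ch * 2)
        self.enc3 = conv_block(base_ch * 2, base_ch * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base_ch * 4, base_ch * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base_ch * 4, base_ch * 2)
        self.up1 = nn.ConvTranspose3d(base_ch * 2, base_ch, kernel_size=2, stride=2)
        self.dec1 = conv_block(base_ch * 2, base_ch)
        self.head = nn.Conv3d(base_ch, num_classes, kernel_size=1)  # per-voxel class logits

    def forward(self, x):
        e1 = self.enc1(x)              # full resolution
        e2 = self.enc2(self.pool(e1))  # 1/2 resolution
        e3 = self.enc3(self.pool(e2))  # 1/4 resolution (bottleneck)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# One single-channel CT patch of 64^3 voxels.
logits = UNet3D()(torch.randn(1, 1, 64, 64, 64))
print(logits.shape)  # torch.Size([1, 8, 64, 64, 64])
```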

Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper reviews the key arguments for and against explainability in AI-powered Clinical Decision Support Systems (CDSSs), applied to a concrete use case: an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs in this concrete use case, while allowing for abstraction to a more general level.


Brain arteries are routinely imaged in the clinical setting by various modalities, e.g., time-of-flight magnetic resonance angiography (TOF-MRA).
