Background: Accurate projections of procedural case durations are complex but critical to the planning of perioperative staffing, operating room resources, and patient communication. Nonlinear prediction models using machine learning methods may provide opportunities for hospitals to improve upon current estimates of procedure duration.
Objective: The aim of this study was to determine whether a machine learning algorithm, scalable across multiple centers, could estimate case duration within a specified tolerance limit, given the substantial operating room resources whose allocation depends on case duration.
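As a hedged illustration of what "within a tolerance limit" could mean operationally, the sketch below computes the share of case-duration predictions falling inside a combined percentage/absolute tolerance band; the 15% and 15-minute thresholds and the function name are assumptions for illustration, not the study's actual criterion.

```python
# Illustrative sketch (not the study's metric): fraction of case-duration
# predictions within a tolerance band. Thresholds are assumed values.
import numpy as np

def within_tolerance(actual_min, predicted_min, pct_tol=0.15, abs_tol_min=15):
    """Share of cases whose prediction is within pct_tol of the actual
    duration or within abs_tol_min minutes of it, whichever is looser."""
    actual = np.asarray(actual_min, dtype=float)
    pred = np.asarray(predicted_min, dtype=float)
    err = np.abs(pred - actual)
    ok = (err <= pct_tol * actual) | (err <= abs_tol_min)
    return ok.mean()

# Example: two of three predictions fall inside the tolerance band.
print(within_tolerance([60, 120, 240], [70, 125, 300]))  # -> 0.666...
```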
Acute kidney injury (AKI) is a major postoperative complication that lacks established intraoperative predictors. Our objective was to develop a prediction model for postoperative AKI using preoperative and high-frequency intraoperative data. In this retrospective cohort study, we evaluated 77,428 operative cases at a single academic center between 2016 and 2022.
Background: The field of machine learning is increasingly employed in medicine. However, studies have shown that published work frequently lacks completeness and adherence to published reporting guidelines. Such an assessment has not yet been performed in the subspecialty of anesthesiology.
Early detection plays a key role in improving outcomes in coronary artery disease (CAD). We used a big data analytics platform covering approximately 32,000 patients to trace patients from their first encounter to CAD treatment. In patients younger than 60, there are significant gender-based differences in the time from first encounter to coronary artery bypass grafting, with a p-value=0.
Machine learning (ML) and artificial intelligence (AI) algorithms have the potential to derive insights from clinical data and improve patient outcomes. However, these highly complex systems are sensitive to changes in the environment and liable to performance decay. Even after their successful integration into clinical practice, ML/AI algorithms should be continuously monitored and updated to ensure their long-term safety and effectiveness.
Background: The Centers for Disease Control and Prevention's (CDC) March 2016 opioid prescribing guideline did not include prescribing recommendations for surgical pain. Although opioid over-prescription for surgical patients has been well documented, the potential effects of the CDC guideline on providers' opioid prescribing practices for surgical patients in the United States remain unclear.
Methods: We conducted an interrupted time series analysis (ITSA) of 37,009 opioid-naïve adult patients undergoing inpatient surgery from 2013 to 2019 at an academic medical center.
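To make the method concrete, the following is a minimal sketch of the segmented-regression form an ITSA commonly takes, fit on synthetic monthly prescribing data; the interruption point, variable names, and values are illustrative assumptions, not the study's actual specification.

```python
# Hedged ITSA sketch: outcome ~ pre-existing trend + level change +
# slope change after the guideline. Data are synthetic; column names
# and the interruption month are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(72)                          # e.g., monthly points over 2013-2019
post = (months >= 39).astype(int)               # assumed guideline "interruption" month
time_since = np.where(post == 1, months - 39, 0)
rate = 50 + 0.1 * months - 4 * post - 0.3 * time_since + rng.normal(0, 1, 72)

df = pd.DataFrame({"rate": rate, "time": months,
                   "post": post, "time_since": time_since})
model = smf.ols("rate ~ time + post + time_since", data=df).fit()
print(model.params)  # 'post' = level change, 'time_since' = slope change
```

In practice an ITSA would also account for autocorrelation (e.g., robust or Newey-West standard errors); the sketch omits that for brevity.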
Introduction: Management of patients in the acute care setting requires accurate diagnosis and rapid initiation of validated treatments; this setting is therefore a likely environment for cognitive augmentation of the clinician's provision of care with technology rooted in artificial intelligence, such as machine learning (ML).
Sources Of Data: The primary sources for this report were PubMed and Google Scholar, searched with terms that included ML, intensive/critical care unit, electronic health records (EHR), anesthesia information management systems, and clinical decision support.
Areas Of Agreement: Different categories of learning are applied to large clinical datasets, often contained in EHRs, to train ML models.
Background: Accurate, pragmatic risk stratification for postoperative delirium (POD) is necessary to target preventative resources toward high-risk patients. Machine learning (ML) offers a novel approach to leveraging electronic health record (EHR) data for POD prediction. We sought to develop and internally validate an ML-derived POD risk prediction model using preoperative risk features, and to compare its performance with models developed using traditional logistic regression.
Opal is the first published example of full-stack platform infrastructure for implementation science designed for ML in anesthesia, addressing the problem of leveraging ML for clinical decision support. Users interact with a secure online Opal web application to select a desired operating room (OR) case cohort for data extraction, visualize datasets with built-in graphing techniques, and run in-client ML or extract data for external use. Opal was used to obtain data from 29,004 unique OR cases from a single academic institution for preoperative prediction of postoperative acute kidney injury (AKI) based on creatinine KDIGO criteria, using predictors that included preoperative demographics, past medical history, medications, and flowsheet information.
Early prediction of whether a liver allograft will be utilized for transplantation may allow better resource deployment during donor management and improve organ allocation. The national donor management goals (DMG) registry contains critical care data collected during donor management. We developed a machine learning model to predict transplantation of a liver graft based on data from the DMG registry.
Although coronary angiography is the gold standard diagnostic tool for coronary artery disease (CAD), it carries procedural risk: it is an invasive technique requiring arterial puncture and exposes the patient to radiation and iodinated contrast. Artificial intelligence (AI) can provide a pretest probability of disease that can be used to triage patients for angiography. This review comprehensively investigates published papers in the domain of CAD detection using different AI techniques from 1991 to 2020, in order to discern broad trends and geographical differences.
Background: Postoperative gastrointestinal leak and venous thromboembolism (VTE) are devastating complications of bariatric surgery. The performance of currently available predictive models for these complications remains wanting, while machine learning has shown promise to improve on traditional modeling approaches. The purpose of this study was to compare the ability of two machine learning strategies, artificial neural networks (ANNs) and gradient boosting machines (XGBs), with that of conventional models using logistic regression (LR) in predicting leak and VTE after bariatric surgery.
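As a hedged illustration of this kind of modeling comparison (a linear baseline versus a boosted-tree learner), the sketch below uses scikit-learn on synthetic, imbalanced data and compares cross-validated ROC AUC; it stands in for, and is not, the study's actual features, models, or results.

```python
# Illustrative comparison of logistic regression vs. a gradient boosting
# classifier on synthetic data with a rare outcome (e.g., leak or VTE).
# Everything here is an assumption made for the sketch.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=30,
                           weights=[0.98], random_state=0)  # ~2% positive class
for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("GBM", GradientBoostingClassifier())]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")
```

On imbalanced outcomes like these, discrimination metrics such as ROC AUC are typically supplemented with calibration and precision-recall measures; the sketch reports only AUC for brevity.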
Until recently, astronaut blood samples were collected in-flight, transported to Earth on the Space Shuttle, and analyzed in terrestrial laboratories. If humans are to travel beyond low Earth orbit, a transition toward space-ready, point-of-care (POC) testing is required. Such testing needs to be comprehensive, easy to perform in a reduced-gravity environment, and unaffected by the stresses of launch and spaceflight.