Publications by authors named "Sara Gerke"

The rapid advancement of artificial intelligence and machine learning (AI/ML) technologies in healthcare presents significant opportunities for enhancing patient care through innovative diagnostic tools, monitoring systems, and personalized treatment plans. However, these advancements may create regulatory challenges, given recent Supreme Court decisions that affect the authority of regulatory agencies such as the Food and Drug Administration (FDA). This paper explores the implications of this regulatory uncertainty for the healthcare industry as it seeks to balance innovation in biotechnology and biocomputing with regulatory uniformity and patient safety.

Artificial intelligence (AI) and machine learning (ML) tools are now proliferating in biomedical contexts, and there is no sign this will slow down any time soon. AI/ML and related technologies promise to improve scientific understanding of health and disease and have the potential to spur the development of innovative and effective diagnostics, treatments, cures, and medical technologies. Concerns about AI/ML are prominent, but two specific aspects have so far received little research attention: synthetic data and computational checklists that might promote not only the reproducibility of AI/ML tools but also increased attention to their ethical, legal, and social implications (ELSI).

Researchers and practitioners are increasingly using machine-generated synthetic data as a tool for advancing health science and practice, by expanding access to health data while potentially mitigating privacy and related ethical concerns around data sharing. While using synthetic data in this way holds promise, we argue that it also raises significant ethical, legal, and policy concerns, including persistent privacy and security problems, accuracy and reliability issues, worries about fairness and bias, and new regulatory challenges. The virtue of synthetic data is often understood to be its detachment from the data subjects whose measurement data is used to generate it.

While the literature on putting a "human in the loop" in artificial intelligence (AI) and machine learning (ML) has grown significantly, limited attention has been paid to how human expertise ought to be combined with AI/ML judgments. This design question arises because of the ubiquity and quantity of algorithmic decisions being made today in the face of widespread public reluctance to forgo human expert judgment. To resolve this conflict, we propose that human expert judges be included via appeals processes for review of algorithmic decisions.

Background: Operating-room audiovisual recording is increasingly proposed, although its ethical implications need elucidation. The aim of this systematic review was to examine the published literature on ethical aspects regarding operating-room recording.

Methods: MEDLINE (via PubMed), Embase, and Cochrane databases were systematically searched for articles describing ethical aspects regarding surgical (both intracorporeal and operating room) recording from database inception to the present (the last search was undertaken in July 2022).

Commercial software based on artificial intelligence (AI) is entering clinical practice in neuroradiology. Consequently, medico-legal aspects of using Software as a Medical Device (SaMD) become increasingly important. These medico-legal issues warrant an interdisciplinary approach and may affect the way we work in daily practice.

Two newly proposed Directives impact liability for artificial intelligence in the EU: a Product Liability Directive (PLD) and an AI Liability Directive (AILD). While these proposed Directives provide some uniform liability rules for AI-caused harm, they fail to fully accomplish the EU's goal of providing clarity and uniformity for liability for injuries caused by AI-driven goods and services. Instead, the Directives leave potential liability gaps for injuries caused by some black-box medical AI systems, which use opaque and complex reasoning to provide medical decisions and/or recommendations.

Article Synopsis
  • "Expert panels" are created to help assess devices for certification, and the role of "notified bodies" is expanded to ensure manufacturers comply with the new standards.
  • All existing medical devices must be recertified by 2027 or 2028, creating uncertainty for manufacturers and highlighting the need for better collaboration between industry and healthcare professionals to avoid device shortages and support the development of new devices.

Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper reviews the key arguments for and against explainability for AI-powered Clinical Decision Support Systems (CDSSs), applied to a concrete use case: an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs in this concrete use case, allowing for abstraction to a more general level.

As the use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. Less attention has fallen to addressing potential bias among AI/ML's human users or the factors that influence user reliance. We argue for a systematic approach to identifying user biases and their impacts when AI/ML tools are in use, and call for the development of embedded interface design features, drawing on insights from decision science and behavioral economics, that nudge users toward more critical and reflective decision making with AI/ML.

Digital applications (apps) are commonly used across the research ecosystem. While apps are frequently updated in the course of clinical and behavioral research, there is limited guidance as to when an app update should trigger action related to human research participant protections and who should be responsible for monitoring and reviewing these updates. We term this the "update problem" and argue that, while it is the principal investigator's duty to track all relevant updates, the level of involvement and re-review by the institutional review board (IRB) of an approved research protocol should vary depending on whether the update may be classified as minor, not minor, or significant.

Once-daily oral tenofovir/emtricitabine is highly effective as pre-exposure prophylaxis (PrEP) against HIV but is dependent on adherence, which may be challenging for men who have sex with men (MSM) who use substances. Digital pill systems (DPS) permit the direct, real-time measurement of adherence, though user perceptions of data privacy in this context are unknown. Thirty prospective DPS users (HIV-negative MSM with non-alcohol substance use) completed in-depth qualitative interviews exploring preferences around privacy, access, and sharing of DPS adherence data.

When applied in the health sector, AI-based applications raise not only ethical but also legal and safety concerns: algorithms trained on data from majority populations can generate less accurate or reliable results for minorities and other disadvantaged groups.

Employers and governments are interested in the use of serological (antibody) testing to allow people to return to work before there is a vaccine for SARS-CoV-2. We articulate the preconditions needed for the implementation of antibody testing, including the role of the U.S.

To control pharmaceutical spending and improve access, the United States could adopt strategies similar to those introduced in Germany by the 2011 German Pharmaceutical Market Reorganization Act. In Germany, manufacturers sell new drugs immediately upon receiving marketing approval. During the first year, the German Federal Joint Committee assesses new drugs to determine their added medical benefit.

Policy Points: With increasing integration of artificial intelligence and machine learning in medicine, there are concerns that algorithm inaccuracy could lead to patient injury and medical liability. While prior work has focused on medical malpractice, the artificial intelligence ecosystem consists of multiple stakeholders beyond clinicians. Current liability frameworks are inadequate to encourage both safe clinical implementation and disruptive innovation of artificial intelligence.
