
Source: http://dx.doi.org/10.1007/s00381-015-2714-6

Publication Analysis

Top Keywords

ethical fallacies (4)
fallacies tricky (4)
tricky ambiguities (4)
ambiguities misinterpretation (4)
misinterpretation outcomes (4)
outcomes cranioplasty (4)
cranioplasty mild (4)
mild trigonocephaly (4)
ethical (1)
tricky (1)

Similar Publications

Objectives: We report our experience implementing an algorithm for the detection of large vessel occlusion (LVO) in suspected stroke in the emergency setting, describe its performance, and explain why it was poorly received by radiologists.

Materials And Methods: An algorithm for the detection of LVO on CT angiography (CTA) was deployed in the emergency room of a single tertiary care hospital between September 1st and 27th, 2021. A retrospective analysis of the algorithm's accuracy was performed.
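As a rough illustration only (not taken from the study), a retrospective accuracy analysis of a detection algorithm like this typically reduces to comparing the algorithm's flags against a radiologist-adjudicated reference standard and reporting sensitivity, specificity, and predictive values from the resulting 2x2 confusion matrix. The function and counts below are hypothetical, a minimal sketch of that kind of calculation.

```python
# Hypothetical sketch: standard diagnostic accuracy metrics from a 2x2
# confusion matrix (algorithm output vs. radiologist ground truth).
# All variable names and example counts are illustrative, not from the study.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return common accuracy metrics; returns NaN when a denominator is zero."""
    nan = float("nan")
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else nan,
        "specificity": tn / (tn + fp) if (tn + fp) else nan,
        "ppv": tp / (tp + fp) if (tp + fp) else nan,
        "npv": tn / (tn + fn) if (tn + fn) else nan,
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Example with made-up counts.
print(diagnostic_metrics(tp=18, fp=7, tn=140, fn=4))
```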


The influence of Instagram, as a social media platform, in shaping perceptions of aesthetic surgery cannot be overstated. The idea of a more "aesthetic" self cultivates a desire for cosmetic enhancements. This article underscores the profound impact of Instagram on aesthetic surgery, shedding light on both its fantasies and fallacies.


This study critically examines the biases and methodological shortcomings in studies comparing deaf and hearing populations, demonstrating their implications for both the reliability and ethics of research in deaf education. Upon reviewing the 20 most-cited deaf-hearing comparison studies, we identified recurring fallacies such as the presumption of hearing ideological biases, the use of small, heterogeneous samples, and the misinterpretation of critical variables. Our research reveals a propensity toward conclusions biased by the norms of white, hearing, monolingual English speakers.


When time is of the essence: ethical reconsideration of XAI in time-sensitive environments.

J Med Ethics

September 2024

National Center for Tumor Diseases (NCT) Heidelberg, a partnership between DKFZ and Heidelberg University Hospital; Heidelberg University, Medical Faculty Heidelberg; Heidelberg University Hospital, Department of Medical Oncology, Section Translational Medical Ethics, Heidelberg, Germany.

The objective of explainable artificial intelligence systems designed for clinical decision support (XAI-CDSS) is to enhance physicians' diagnostic performance, confidence and trust through the implementation of interpretable methods, thereby providing a superior epistemic position, a robust foundation for critical reflection and trustworthiness in times of heightened technological dependence. However, recent studies have revealed shortcomings in achieving these goals, questioning the widespread endorsement of XAI by medical professionals, ethicists and policy-makers alike. Based on a surgical use case, this article challenges generalising calls for XAI-CDSS and emphasises the significance of time-sensitive clinical environments, which frequently preclude adequate consideration of system explanations.

