One of the challenges of AI technologies is their "black box" nature, that is, the lack of explainability and interpretability of these systems. This chapter explores whether AI systems in healthcare generally, and in neurosurgery specifically, should be explainable, for what purposes, and whether current XAI ("explainable AI") approaches and techniques can achieve those purposes. The chapter concludes that XAI techniques, at least at present, are neither the only nor necessarily the best way to build trust in AI and to safeguard patient autonomy or improve clinical decision-making, and that they are of limited significance in determining liability.
Adv Exp Med Biol, November 2024
The introduction of novel medical technology, such as artificial intelligence (AI), into traditional clinical practice presents legal liability challenges that litigants and courts must squarely address when something goes wrong. Some of the most promising applications of AI in medicine will raise vexed liability questions. Because AI in health care is still in its relative infancy, there is a paucity of case law globally upon which to draw.