The integration of AI into radiology raises significant legal questions about responsibility for errors. Despite AI's potential to enhance diagnostic accuracy, radiologists fear it may introduce new legal challenges: AI tools are not perfect, even those with FDA approval or a CE mark, so failures remain possible. A key issue is how AI is deployed, whether as a stand-alone diagnostic tool or as an aid to the radiologist; the latter approach could reduce undesired side effects. It remains unclear, however, who should be held liable when AI fails, with candidates ranging from the engineers and radiologists involved in developing a tool to the companies that market it and the department heads who integrate it into clinical practice.

Regulators have begun to respond to these risks. The EU's AI Act categorizes applications by risk level, and many radiology-related AI tools fall into the high-risk category. Legal precedent from autonomous vehicles offers some guidance on assigning responsibility. At the same time, the legal challenges radiology already faces, such as diagnostic errors, persist.

AI's potential to improve diagnostics also raises the converse question: what are the legal implications of not using an available AI tool? An AI tool that improves the detection of pediatric fractures, for instance, could reduce legal risk, and declining to use it could create exposure. The situation parallels earlier safety innovations such as the car turn signal, where ignoring an available safeguard could itself lead to legal problems. The debate underscores the need for further research and regulation to clarify AI's role in radiology, balancing innovation against legal and ethical considerations.
DOI: http://dx.doi.org/10.1016/j.ejrad.2024.111462