Background: The multimorbidity problem involves identifying and mitigating the adverse interactions that occur when multiple computer interpretable guidelines are applied concurrently to develop a treatment plan for a patient diagnosed with multiple diseases. Solving this problem requires decision support approaches whose reasoning is difficult for physicians to comprehend. As such, the rationale for treatment plans generated by these approaches needs to be provided.

Objective: To develop an explainability component for an automated planning-based approach to the multimorbidity problem, and to assess the fidelity and interpretability of generated explanations using a clinical case study.

Methods: The explainability component leverages the task-network model for representing computer interpretable guidelines. It generates post-hoc explanations composed of three aspects that answer why specific clinical actions are in a treatment plan, why specific revisions were applied, and how factors such as medication cost and patient adherence influence the selection of specific actions. The explainability component is implemented as part of MitPlan, our planning-based approach, which we revised to support explainability. We developed an evaluation instrument based on the System Causability Scale and other vetted surveys to evaluate the fidelity and interpretability of its explanations using a two-dimensional comparison study design.
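The abstract does not include implementation details, but the three-aspect explanation structure it describes can be made concrete. The following Python sketch is an illustration only, assuming hypothetical record types for attaching a post-hoc explanation to each planned action; none of the names (PlannedAction, Explanation, render) come from MitPlan.

```python
from dataclasses import dataclass, field


@dataclass
class Explanation:
    """Three aspects of a post-hoc explanation for one planned action."""
    # Aspect 1: the guideline path (task-network decomposition) that led to the action.
    why_in_plan: str
    # Aspect 2: the revision applied to mitigate an adverse interaction, if any.
    why_revised: str | None = None
    # Aspect 3: preference factors (e.g., medication cost, expected adherence)
    # and their weights in selecting this action over alternatives.
    preference_factors: dict[str, float] = field(default_factory=dict)


@dataclass
class PlannedAction:
    name: str
    guideline: str  # the computer interpretable guideline the action comes from
    explanation: Explanation


def render(action: PlannedAction) -> str:
    """Format one action's three-aspect explanation for physician review."""
    lines = [
        f"Action: {action.name} (from {action.guideline})",
        f"  In plan because: {action.explanation.why_in_plan}",
    ]
    if action.explanation.why_revised:
        lines.append(f"  Revised because: {action.explanation.why_revised}")
    for factor, weight in action.explanation.preference_factors.items():
        lines.append(f"  Selection factor: {factor} (weight {weight})")
    return "\n".join(lines)


# Example usage with invented data:
plan_step = PlannedAction(
    name="prescribe low-dose aspirin",
    guideline="CIG for ischemic heart disease",
    explanation=Explanation(
        why_in_plan="antiplatelet task in the secondary-prevention subnetwork",
        why_revised="dose lowered to mitigate an interaction with an anticoagulant "
                    "prescribed by a concurrently applied CIG",
        preference_factors={"medication cost": 0.2, "patient adherence": 0.8},
    ),
)
print(render(plan_step))
```

In an actual planner these records would be populated from the task-network decomposition trace and from the revision operators applied during mitigation; here everything is hard-coded purely for illustration.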

Results: The explainability component was implemented for MitPlan and tested in the context of a clinical case study. The fidelity and interpretability of the generated explanations were assessed using a physician-focused evaluation study involving 21 participants from two different specialties and two levels of experience. Results show that explanations provided by the explainability component in MitPlan are of acceptable fidelity and interpretability, and that the clinical justification of the actions in a treatment plan is important to physicians.

Conclusion: We created an explainability component that enriches an automated planning-based approach to solving the multimorbidity problem with meaningful explanations for the actions in a treatment plan. Because this component relies on the task-network model to represent computer interpretable guidelines, it can be ported to other approaches that use the same representation. Our evaluation study demonstrated that explanations supporting a physician's understanding of the clinical rationale for the actions in a treatment plan are both useful and important.

Source: http://dx.doi.org/10.1016/j.jbi.2024.104681
