Recent years have seen a boom in interest in interpretable machine learning systems built on models that can be understood, at least to some degree, by domain experts. However, exactly what kinds of models are truly human-interpretable remains poorly understood. This work advances our understanding of precisely which factors make models interpretable in the context of decision sets, a specific class of logic-based model. We conduct carefully controlled human-subject experiments in two domains across three tasks based on human simulatability, through which we identify specific types of complexity that affect performance more heavily than others; these trends are consistent across tasks and domains. These results can inform the choice of regularizers during optimization to learn more interpretable models, and their consistency suggests that there may exist common design principles for interpretable machine learning systems.
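As context for the abstract, a decision set is an unordered collection of independent if-then rules, each a conjunction of predicates that maps to a class label. The sketch below is purely illustrative (the rule contents, function names, and the two complexity measures are assumptions, not taken from the paper): it shows how such a model classifies an example and how rule-level complexity might be quantified so that a regularizer can penalize it.

```python
# Illustrative decision set: each rule is (list of (feature, value) predicates, label).
# These toy rules and names are hypothetical, not from the paper.
rules = [
    ([("color", "red"), ("shape", "round")], "apple"),
    ([("color", "yellow")], "banana"),
]

DEFAULT_LABEL = "unknown"  # fallback label when no rule applies


def predict(example, rules, default=DEFAULT_LABEL):
    """Return the label of the first rule whose predicates all hold."""
    for predicates, label in rules:
        if all(example.get(feature) == value for feature, value in predicates):
            return label
    return default


def complexity(rules):
    """Two candidate complexity measures a regularizer might penalize
    (the specific types studied are detailed in the full paper)."""
    return {
        "num_rules": len(rules),
        "total_predicates": sum(len(preds) for preds, _ in rules),
    }


print(predict({"color": "red", "shape": "round"}, rules))  # apple
print(complexity(rules))  # {'num_rules': 2, 'total_predicates': 3}
```

Because each rule can be read and checked in isolation, a human can "simulate" the model's prediction by hand, which is what makes complexity measures like these natural targets for interpretability regularization.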

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7899148

Publication Analysis

Top Keywords

interpretable machine (8), machine learning (8), learning systems (8), models (5), human evaluation (4), evaluation models (4), models built (4), built interpretability (4), interpretability years (4), years boom (4)

Similar Publications

Background: Addressing language barriers through accurate interpretation is crucial for providing quality care and establishing trust. While the ability of artificial intelligence (AI) to translate medical documentation has been studied, its role in patient-provider communication is less explored. This review evaluates AI's effectiveness in clinical translation by assessing accuracy, usability, satisfaction, and feedback on its use.


CRISPR-Cas-based lateral flow assays (LFAs) have emerged as a promising diagnostic tool for ultrasensitive detection of nucleic acids, offering improved speed, simplicity and cost-effectiveness compared to polymerase chain reaction (PCR)-based assays. However, visual interpretation of CRISPR-Cas-based LFA test results is prone to human error, potentially leading to false-positive or false-negative outcomes when analyzing test/control lines. To address this limitation, we have developed two neural network models: one based on a fully convolutional neural network and the other on a lightweight mobile-optimized neural network for automated interpretation of CRISPR-Cas-based LFA test results.


Objectives: Posttonsillectomy hemorrhage (PTH) is a common and potentially life-threatening complication in pediatric tonsillectomy. Early identification and prediction of PTH are of great significance. Currently, there are very few tools available for clinicians to accurately assess the risk of PTH.


Background: A minority of patients receiving stereotactic body radiation therapy (SBRT) for non-small cell lung cancer (NSCLC) are not good responders. Radiomic features can be used to generate predictive algorithms and biomarkers that can determine treatment outcomes and stratify patients to their therapeutic options. This study investigated and attempted to validate the radiomic and clinical features obtained from early-stage and oligometastatic NSCLC patients who underwent SBRT, to predict local response.


Microorganisms, though crucial for environmental equilibrium, can be destructive, causing detrimental pathophysiology in the human host. Moreover, with the emergence of antibiotic resistance (ABR), microbial communities pose one of the century's largest public health challenges in terms of effective treatment strategies. Furthermore, given the large diversity and number of known bacterial strains, determining treatment choices for infected patients using experimental methodologies is time-consuming.

