DeepConsensus: Consensus-based Interpretable Deep Neural Networks with Application to Mortality Prediction.

AI Article Synopsis

  • Deep neural networks are very effective for challenging tasks but their "black-box" nature raises concerns in critical fields like healthcare due to risks from adversarial examples and poor generalization to unfamiliar inputs.
  • The paper introduces a new consensus algorithm designed to be robust against adversarial examples, improve classification accuracy, and enhance interpretability by clustering linear approximations of different models.
  • Experimental results, particularly from an ICU dataset, demonstrate that this method not only maintains interpretability similar to simpler models like logistic regression but also significantly boosts prediction accuracy for one-year patient mortality.

Article Abstract

Deep neural networks have achieved remarkable success in various challenging tasks. However, the black-box nature of such networks is not acceptable to critical applications, such as healthcare. In particular, the existence of adversarial examples and their overgeneralization to irrelevant, out-of-distribution inputs with high confidence makes it difficult, if not impossible, to explain decisions by such networks. In this paper, we analyze the underlying mechanism of generalization of deep neural networks and propose an (, ) consensus algorithm which is insensitive to adversarial examples and can reliably reject out-of-distribution samples. Furthermore, the consensus algorithm is able to improve classification accuracy by using multiple trained deep neural networks. To handle the complexity of deep neural networks, we cluster linear approximations of individual models and identify highly correlated clusters among different models to capture feature importance robustly, resulting in improved interpretability. Motivated by the importance of building accurate and interpretable prediction models for healthcare, our experimental results on an ICU dataset show the effectiveness of our algorithm in enhancing both the prediction accuracy and the interpretability of deep neural network models on one-year patient mortality prediction. In particular, while the proposed method maintains similar interpretability as conventional shallow models such as logistic regression, it improves the prediction accuracy significantly.
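
To make the clustering-and-consensus idea in the abstract concrete, below is a minimal Python sketch of one way such a scheme could look: each trained model contributes a local linear approximation of its decision at the input (estimated here with finite-difference gradients), agreement across models is measured by correlating these importance vectors, low-agreement inputs are rejected, and the remaining importance vectors are clustered to obtain a robust consensus explanation. The model interface, the correlation threshold, and the use of k-means are illustrative assumptions, not the authors' actual algorithm.

# Hedged sketch of a consensus-style ensemble, inspired by the abstract above.
# Assumptions (not from the paper): models are callables returning a scalar
# score in [0, 1], agreement is mean pairwise correlation of gradients, and
# k-means picks the dominant importance cluster.
import numpy as np
from sklearn.cluster import KMeans


def local_linear_approx(model, x, eps=1e-3):
    """Finite-difference gradient of the model's score at x (a crude
    stand-in for the paper's linear approximations)."""
    grad = np.zeros_like(x)
    base = model(x)
    for i in range(x.size):
        x_step = x.copy()
        x_step[i] += eps
        grad[i] = (model(x_step) - base) / eps
    return grad


def consensus_predict(models, x, corr_threshold=0.8):
    """Return (label, importance) when the models agree, or (None, None)
    to reject the input as out-of-distribution / no consensus."""
    grads = np.array([local_linear_approx(m, x) for m in models])

    # Agreement: mean pairwise correlation of the per-model importance vectors.
    corr = np.corrcoef(grads)
    n = len(models)
    mean_corr = (corr.sum() - n) / (n * (n - 1))
    if mean_corr < corr_threshold:
        return None, None

    # Consensus importance: cluster the per-model gradients and take the
    # centroid of the largest cluster as a robust feature-importance estimate.
    k = min(2, n)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(grads)
    largest = np.bincount(km.labels_).argmax()
    importance = km.cluster_centers_[largest]

    scores = np.array([m(x) for m in models])
    label = int(scores.mean() > 0.5)
    return label, importance


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in "trained models": logistic scorers with similar weights.
    weights = [rng.normal(1.0, 0.1, size=5) for _ in range(3)]
    models = [lambda x, w=w: 1.0 / (1.0 + np.exp(-x @ w)) for w in weights]

    x = rng.normal(size=5)
    label, importance = consensus_predict(models, x)
    print("label:", label, "importance:", importance)

In this toy run the three models have near-identical weights, so their gradients correlate strongly and a label is returned; replacing one scorer with an unrelated function drives the mean correlation down and the input is rejected, which is the behaviour the abstract attributes to the consensus step.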

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7583142
DOI: http://dx.doi.org/10.1109/ijcnn48605.2020.9206678

Publication Analysis

Top Keywords
deep neural: 24
neural networks: 20
mortality prediction: 8
adversarial examples: 8
consensus algorithm: 8
prediction accuracy: 8
networks: 7
deep: 6
neural: 6
prediction: 5
