
Explainable classifier for improving the accountability in decision-making for colorectal cancer diagnosis from histopathological images.

Pathologists are responsible for diagnosing cancer types from histopathological cancer tissues. However, microscopic examination is known to be tedious and time-consuming. In recent years, a long list of machine learning approaches to image classification and whole-slide segmentation has been developed to support pathologists. Although many achieve exceptional performance, most are unable to rationalize their decisions. In this study, we developed an explainable classifier to support decision-making for medical diagnoses. The proposed model does not explain the causality between the input and the decision, but offers a human-friendly explanation of the decision's plausibility. The Cumulative Fuzzy Class Membership Criterion (CFCMC) explains its decisions in three ways: through a semantic explanation of the possibility of misclassification, by showing the training sample responsible for a given prediction, and by showing training samples from conflicting classes. In this paper, we describe the mathematical structure of the classifier, which is designed not as a fully automated diagnosis tool but as a support system for medical experts. We also report the classifier's accuracy on real-world histopathological data for colorectal cancer, and we tested the system's acceptability in clinical trials with 14 pathologists. We show that the proposed classifier is comparable in accuracy to state-of-the-art neural networks but, more importantly, is more acceptable to human experts as a diagnosis tool in the medical domain.
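The abstract names CFCMC's three explanation channels but does not reproduce the membership formula. The sketch below is a hypothetical Python illustration, assuming a generic inverse-distance fuzzy score (in the style of fuzzy k-NN) in place of the actual CFCMC criterion; the `FuzzyMembershipClassifier` API, the class names, and the toy feature vectors are all invented for the example.

```python
import numpy as np

class FuzzyMembershipClassifier:
    """Toy prototype-based classifier with three explanation outputs,
    mirroring the ones described in the abstract (not the paper's method)."""

    def __init__(self, k=3, eps=1e-9):
        self.k = k      # neighbours pooled into each class score
        self.eps = eps  # guards against division by zero on exact matches

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)
        self.classes_ = np.unique(self.y)
        return self

    def _memberships(self, x):
        # Inverse-distance "cumulative" score per class, normalised to sum to 1.
        d = np.linalg.norm(self.X - x, axis=1)
        raw = {c: float(np.sum(1.0 / (np.sort(d[self.y == c])[: self.k] + self.eps)))
               for c in self.classes_}
        total = sum(raw.values())
        return {c: s / total for c, s in raw.items()}, d

    def explain(self, x):
        x = np.asarray(x, dtype=float)
        m, d = self._memberships(x)
        ranked = sorted(m, key=m.get, reverse=True)
        pred, rival = ranked[0], ranked[1]
        return {
            "prediction": pred,
            # 1) semantic cue: how close the runner-up class came (1.0 = tie)
            "misclassification_risk": m[rival] / m[pred],
            # 2) index of the training sample most responsible for the prediction
            "supporting_sample": int(np.argmin(np.where(self.y == pred, d, np.inf))),
            # 3) index of the nearest sample from the strongest conflicting class
            "conflicting_sample": int(np.argmin(np.where(self.y == rival, d, np.inf))),
        }

# Usage on a toy 2-D dataset standing in for image feature vectors:
clf = FuzzyMembershipClassifier(k=2).fit(
    [[0, 0], [0, 1], [5, 5], [5, 6]],
    ["normal", "normal", "tumour", "tumour"])
print(clf.explain([4.5, 5.2]))
```

Returning indices of the supporting and conflicting training samples is what would let a pathologist inspect the tissue images behind a prediction, which is the acceptability argument the abstract makes.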

Source: http://dx.doi.org/10.1016/j.jbi.2020.103523

Publication Analysis

Top Keywords

explainable classifier (8)
colorectal cancer (8)
showing training (8)
diagnosis tool (8)
classifier improving (4)
improving accountability (4)
accountability decision-making (4)
decision-making colorectal (4)
cancer (4)
cancer diagnosis (4)
