Explainable multi-module semantic guided attention based network for medical image segmentation.

Comput Biol Med

Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan. Electronic address:

Published: December 2022

Automated segmentation of medical images is crucial for disease diagnosis and treatment planning. Medical image segmentation has improved with convolutional neural network (CNN) models. Unfortunately, they remain limited in scenarios where the segmentation target varies widely in size, boundary, position, and shape. Moreover, current CNNs have low explainability, restricting their use in clinical decisions. In this paper, we make substantial use of attention mechanisms in a CNN model and present an explainable multi-module semantic guided attention based network (MSGA-Net) for explainable and highly accurate medical image segmentation, which attends to the most significant spatial regions, boundaries, scales, and channels. Specifically, we present a multi-scale attention module (MSA) to extract the most salient features at various scales from medical images. We then propose a semantic region-guided attention mechanism (SRGA), comprising location attention (LAM), channel-wise attention (CWA), and edge attention (EA) modules, to extract the most important spatial, channel-wise, and boundary-related features for regions of interest. Moreover, we present a sequence of fine-tuning steps with the SRGA module that gradually weights the significance of regions of interest while simultaneously reducing noise. In this work, we experimented with three types of medical images: dermoscopic images (HAM10000 dataset), multi-organ CT images (CHAOS 2019 dataset), and brain tumor MRI images (BraTS 2020 dataset). Extensive experiments on all three image types showed that the proposed MSGA-Net substantially improves performance on all metrics over existing models. Moreover, visualizing the attention feature maps offers more explainability than state-of-the-art models.
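To give a concrete picture of the channel-wise attention idea mentioned in the abstract, below is a minimal PyTorch sketch of a generic squeeze-and-excitation-style channel attention block. It is an illustration only: the class name ChannelWiseAttention and the reduction ratio are assumptions, and the paper's actual CWA, LAM, EA, and MSA modules are not reproduced here.

```python
import torch
import torch.nn as nn

class ChannelWiseAttention(nn.Module):
    """Generic squeeze-and-excitation-style channel attention.

    Hypothetical sketch: the paper's CWA module may differ; the
    reduction ratio and gating design here are assumptions.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global spatial context per channel
        self.fc = nn.Sequential(               # excitation: learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                            # re-weight feature channels


# Usage: re-weight a feature map produced by a CNN encoder stage
feats = torch.randn(2, 64, 128, 128)
attn = ChannelWiseAttention(64)
out = attn(feats)   # same shape, channels re-weighted
```

In a design like MSGA-Net's, such a block would typically sit after an encoder or decoder stage so that informative feature channels are amplified and less relevant ones suppressed before the segmentation head.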

Source
http://dx.doi.org/10.1016/j.compbiomed.2022.106231

Publication Analysis

Top Keywords

medical images (16); medical image (12); image segmentation (12); explainable multi-module (8); multi-module semantic (8); semantic guided (8); attention (8); guided attention (8); attention based (8); based network (8)
