Deep Vision for Breast Cancer Classification and Segmentation.

Cancers (Basel)

United States Air Force, Palmdale, CA 93551, USA.

Published: October 2021

(1) Background: The odds of a female breast cancer diagnosis have increased from 11:1 in 1975 to 8:1 today. Mammography false positive rates (FPR) are associated with overdiagnosis and overtreatment, while false negative rates (FNR) increase morbidity and mortality. (2) Methods: Deep vision supervised learning classifies 299 × 299 pixel de-noised mammography images as negative or non-negative, using models built on 55,890 pre-processed training images and applied to 15,364 unseen test images. A small image representation from the fitted training model is returned to evaluate the portion of the loss-function gradient with respect to the image that maximizes the classification probability. This gradient is then mapped back to the original images, highlighting the areas most influential for classification (perhaps masses or boundary areas). (3) Results: Initial classification results were 97% accurate, 99% specific, and 83% sensitive. Gradient techniques for unsupervised region of interest mapping clearly identified the areas most associated with the classification on positive mammograms and might be used to support clinician analysis. (4) Conclusions: Deep vision techniques hold promise for addressing overdiagnosis and overtreatment, reducing underdiagnosis, and automating region of interest identification on mammography.
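
The gradient-based region of interest mapping described above can be sketched with a standard saliency-map computation: take the gradient of the predicted class score with respect to the input pixels and map its magnitude back onto the image. The sketch below is a minimal illustration only, assuming a PyTorch Inception-v3 backbone (consistent with the 299 × 299 inputs) and a hypothetical file name; it is not the authors' exact pipeline.

```python
# Minimal saliency-map sketch: gradient of the predicted class score
# with respect to the input pixels, mapped back onto the image.
# Backbone, preprocessing, and file name are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),               # abstract reports 299 x 299 inputs
    transforms.Grayscale(num_output_channels=3), # mammograms are single-channel
    transforms.ToTensor(),                       # normalization omitted for brevity
])

img = Image.open("mammogram.png")                # hypothetical file name
x = preprocess(img).unsqueeze(0)
x.requires_grad_(True)

logits = model(x)
score = logits[0, logits.argmax()]               # score of the predicted class
score.backward()                                 # d(score) / d(pixel)

# Collapse channels and rescale to [0, 1] to get a 2-D saliency map,
# then resize it back onto the original mammogram.
saliency = x.grad.abs().max(dim=1)[0].squeeze()
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
heatmap = F.interpolate(saliency[None, None], size=img.size[::-1], mode="bilinear")
```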

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8582536
DOI: http://dx.doi.org/10.3390/cancers13215384

Similar Publications

Multi-class Classification of Retinal Eye Diseases from Ophthalmoscopy Images Using Transfer Learning-Based Vision Transformers.

J Imaging Inform Med

January 2025

College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.

Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated. This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying these retinal diseases from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, the method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging.
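
As a rough illustration of this kind of transfer learning setup, the sketch below fine-tunes only the classification head of an ImageNet-pretrained ViT for a small number of retinal disease classes. The backbone choice (torchvision's vit_b_16), the 4-class head, and the dummy batch are assumptions; the study's ophthalmology-specific pretrained backbones and data pipeline are not reproduced here.

```python
# Transfer-learning sketch: replace and fine-tune the classification head
# of a pretrained ViT. Class count and data are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g. cataract, diabetic retinopathy, glaucoma, normal (assumed)

model = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)

# Freeze the backbone; train only the new classification head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("heads")

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224 x 224 images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```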

To address the challenges of high computational complexity and poor real-time performance in binocular vision-based Unmanned Aerial Vehicle (UAV) formation flight, this paper introduces a UAV localization algorithm based on a lightweight object detection model. Firstly, we optimized the YOLOv5s model using lightweight design principles, resulting in Yolo-SGN. This model achieves a 65.

The Sharp-van der Heijde score (SvH) is crucial for assessing joint damage in rheumatoid arthritis (RA) through radiographic images. However, manual scoring is time-consuming and subject to variability. This study proposes a multistage deep learning model to predict the Overall Sharp Score (OSS) from hand X-ray images.

Background: Malaria is a critical and potentially fatal disease caused by the Plasmodium parasite and is responsible for more than 600,000 deaths globally. Early and accurate detection of malaria parasites is crucial for effective treatment, yet conventional microscopy faces limitations in variability and efficiency.

Methods: We propose a novel computer-aided detection framework based on deep learning and attention mechanisms, extending the YOLO-SPAM and YOLO-PAM models.
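
As a generic illustration of YOLO-style detection on blood-smear images (not the YOLO-SPAM/YOLO-PAM extensions themselves), the sketch below runs an off-the-shelf Ultralytics YOLO model and reads back boxes, confidences, and class labels. The weights file and image path are placeholders.

```python
# Generic YOLO inference sketch using the Ultralytics API; the weights
# ("yolov8n.pt") and image path are placeholders, not the paper's models.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                     # placeholder pretrained weights
results = model("blood_smear.jpg", conf=0.25)  # hypothetical input image

for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()      # bounding-box corners
    print(f"class={int(box.cls)} conf={float(box.conf):.2f} "
          f"box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```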

Objective: To design a deep learning-based model for early screening of diabetic retinopathy, predict the condition, and provide interpretable justifications.

Methods: The model structure is based on the Vision Transformer architecture. The project was initiated in March 2023, and the first version was produced in July 2023 at the Affiliated Hospital of Hangzhou Normal University. We use the publicly available EyePACS dataset to train the model.
