Tissue/region segmentation of pathology images is essential for quantitative analysis in digital pathology. Previous studies usually require full supervision (e.g., pixel-level annotations), which is challenging to acquire. In this paper, we propose a weakly-supervised model using joint Fully convolutional and Graph convolutional Networks (FGNet) for automated segmentation of pathology images. Instead of using pixel-wise annotations as supervision, we employ an image-level label (i.e., the foreground proportion) as weakly-supervised information for training a unified convolutional model. Our FGNet consists of a feature extraction module (with a fully convolutional network) and a classification module (with a graph convolutional network). These two modules are connected via a dynamic superpixel operation, making joint training possible. To achieve robust segmentation performance, we propose to use mutable numbers of superpixels for both training and inference. In addition, to enforce stricter supervision, we employ an uncertainty range constraint in FGNet to reduce the negative effect of inaccurate image-level annotations. Compared with fully-supervised methods, the proposed FGNet achieves competitive segmentation results on three pathology image datasets (i.e., HER2, KI67, and H&E) for cancer region segmentation, suggesting the effectiveness of our method. The code is made publicly available at https://github.com/zhangjun001/FGNet.
DOI: http://dx.doi.org/10.1016/j.media.2021.102183
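As an illustration of the proportion-based weak supervision and the uncertainty range constraint described in the abstract above, the sketch below penalizes the predicted foreground proportion only when it falls outside a tolerance band around the image-level label. This is a minimal sketch under that assumption; the function name, the tolerance parameter, and the hinge-style formulation are illustrative and are not taken from the authors' released code.

```python
import torch
import torch.nn.functional as F

def proportion_range_loss(pixel_logits, target_proportion, tolerance=0.05):
    """Weakly-supervised loss on the predicted foreground proportion.

    pixel_logits: (B, 1, H, W) raw foreground scores from the segmentation head.
    target_proportion: (B,) image-level foreground proportion labels in [0, 1].
    tolerance: half-width of the accepted range around the (possibly noisy) label.
    Only predictions outside [label - tolerance, label + tolerance] are penalized,
    which softens the effect of inaccurate image-level annotations.
    """
    probs = torch.sigmoid(pixel_logits)            # per-pixel foreground probability
    pred_proportion = probs.mean(dim=(1, 2, 3))    # predicted foreground fraction per image
    lower = (target_proportion - tolerance).clamp(0.0, 1.0)
    upper = (target_proportion + tolerance).clamp(0.0, 1.0)
    # Hinge-style penalty: zero inside the uncertainty range, linear outside it.
    loss = F.relu(lower - pred_proportion) + F.relu(pred_proportion - upper)
    return loss.mean()
```

Leaving a zero-loss band around the label is what reduces the influence of inaccurate proportion annotations: mildly noisy labels exert no gradient pressure on the network.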
Front Artif Intell
December 2024
School of Industrial Engineering and Management, Oklahoma State University, Stillwater, OK, United States.
The ability to accurately predict the yields of different crop genotypes in response to weather variability is crucial for developing climate-resilient crop cultivars. Genotype-environment interactions introduce large variations in crop-climate responses and are hard to factor into breeding programs. Data-driven approaches, particularly those based on machine learning, can help guide breeding efforts by accounting for genotype-environment interactions when making yield predictions.
Front Radiol
December 2024
Computer Vision and Machine Intelligence Group, Department of Computer Science, University of the Philippines-Diliman, Quezon City, Philippines.
Pneumothorax, a life-threatening condition characterized by air accumulation in the pleural cavity, requires early and accurate detection for optimal patient outcomes. Chest X-ray radiographs are a common diagnostic tool due to their speed and affordability. However, detecting pneumothorax can be challenging for radiologists because the sole visual indicator is often a thin displaced pleural line.
Neural Netw
December 2024
Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha 410083, PR China.
Correctly diagnosing Alzheimer's disease (AD) and identifying pathogenic brain regions and genes play a vital role in understanding AD and developing effective prevention and treatment strategies. Recent works combine imaging and genetic data and leverage the strengths of both modalities to achieve better classification results. In this work, we propose MCA-GCN, a Multi-stream Cross-Attention and Graph Convolutional Network-based classification method for AD patients.
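To make the cross-modal fusion idea concrete, the following is a minimal sketch of one feature stream attending to another before a downstream graph-based classifier; the module name, dimensions, and residual fusion are assumptions for illustration, not the MCA-GCN implementation.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Illustrative cross-attention block fusing two feature streams.

    One stream (e.g., imaging-derived region features) attends to the other
    (e.g., gene features), so each modality can borrow context from the other
    before classification. All sizes and names are illustrative assumptions.
    """
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats, context_feats):
        # query_feats:   (B, Nq, dim)  e.g., brain-region imaging features
        # context_feats: (B, Nc, dim)  e.g., gene features
        attended, _ = self.attn(query_feats, context_feats, context_feats)
        return self.norm(query_feats + attended)  # residual fusion

# Usage sketch: imaging attends to genetics; the fused features feed a classifier.
img = torch.randn(2, 90, 128)    # 90 brain regions (illustrative)
gene = torch.randn(2, 50, 128)   # 50 gene features (illustrative)
img_fused = CrossModalAttention()(img, gene)
```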
Front Neurosci
December 2024
Department of Tuina, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China.
Introduction: Emotion recognition using electroencephalography (EEG) is a key aspect of brain-computer interface research. Achieving high accuracy requires effectively extracting and integrating both spatial and temporal features. However, many studies focus on a single dimension, neglecting the interplay and complementarity of multiple feature types and the importance of fully integrating spatial and temporal dynamics to enhance performance.
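As a rough illustration of combining spatial and temporal EEG features rather than relying on a single dimension, the sketch below mixes information across electrodes with a 1-D convolution and then models its evolution over time with a GRU; the architecture, layer sizes, and names are assumptions, not taken from the cited study.

```python
import torch
import torch.nn as nn

class SpatialTemporalEEGNet(nn.Module):
    """Minimal sketch of joint spatial-temporal EEG feature extraction."""
    def __init__(self, n_channels=32, n_classes=3, hidden=64):
        super().__init__()
        self.spatial = nn.Conv1d(n_channels, hidden, kernel_size=1)  # mix across electrodes
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)     # model time dynamics
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, eeg):                 # eeg: (B, n_channels, T)
        x = torch.relu(self.spatial(eeg))   # (B, hidden, T) spatially mixed features
        x = x.transpose(1, 2)               # (B, T, hidden) for the GRU
        _, h = self.temporal(x)             # h: (1, B, hidden) final hidden state
        return self.classifier(h.squeeze(0))

logits = SpatialTemporalEEGNet()(torch.randn(4, 32, 256))  # 4 trials, 32 channels, 256 samples
```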
Cogn Neurodyn
December 2024
School of Automation, Hangzhou Dianzi University, Hangzhou, 310018 Zhejiang China.
Brain-computer interfaces (BCIs) based on the motor imagery paradigm typically use multi-channel electroencephalography (EEG) to ensure accurate capture of the underlying physiological phenomena. However, excessive channels often contain redundant information and noise, which can significantly degrade BCI performance. Although numerous studies have addressed EEG channel selection, most rely on manual feature extraction, and such hand-crafted features struggle to fully capture the informative content of EEG signals.
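For contrast with manual feature engineering, here is a small, hypothetical data-driven channel-selection sketch that scores channels by the mutual information between a per-epoch log-variance statistic and the class labels and keeps the top-k; it is not the cited method, and the statistic and parameters are illustrative.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_channels(epochs, labels, k=8):
    """Illustrative data-driven EEG channel selection (not the cited method).

    epochs: (n_trials, n_channels, n_samples) array of EEG segments.
    labels: (n_trials,) class labels.
    Returns the indices of the k highest-scoring channels and the reduced data.
    """
    log_var = np.log(epochs.var(axis=2) + 1e-12)      # (n_trials, n_channels) per-epoch log-variance
    scores = mutual_info_classif(log_var, labels)      # one relevance score per channel
    keep = np.sort(np.argsort(scores)[::-1][:k])       # top-k channels, kept in original order
    return keep, epochs[:, keep, :]
```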