IEEE Trans Neural Netw Learn Syst
January 2024
Capsule networks (CapsNets) aim to parse images into a hierarchy of objects, parts, and their relationships using a two-step process of part-whole transformation and hierarchical component routing. However, this hierarchical relationship modeling is computationally expensive, which has limited the wider adoption of CapsNets despite their potential advantages. Current CapsNet models are evaluated mainly against other capsule baselines and still fall short of deep convolutional neural network (CNN) variants on complex tasks.
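For readers unfamiliar with the two-step capsule computation, the sketch below shows the generic routing-by-agreement step that combines part-capsule "votes" (produced by learned part-to-whole transformations) into whole capsules. The tensor names, shapes, and the three-iteration default are illustrative assumptions and do not reproduce the specific routing scheme studied in this paper.

import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Squashing non-linearity: keeps the vector direction, maps its length into [0, 1).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def routing_by_agreement(u_hat, num_iters=3):
    # u_hat: part-capsule "votes" for each whole capsule, obtained from learned
    # part-to-whole transformation matrices; shape (batch, parts, wholes, dim).
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits
    for _ in range(num_iters):
        c = F.softmax(b, dim=2)                       # coupling coefficients over wholes
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)      # weighted sum over parts
        v = squash(s)                                 # whole-capsule outputs
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)  # agreement update
    return v

The iterative loop is the source of the computational cost mentioned above: every routing iteration touches all part-whole vote pairs.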
Image-to-image (I2I) translation has become a key application of generative adversarial networks (GANs). Convolutional neural networks (CNNs), despite their strong performance, cannot capture the spatial relationships among different parts of an object and thus are not ideal representation models for image translation tasks. As a remedy, capsule networks have been proposed to represent the patterns of a visual object in a way that preserves hierarchical spatial relationships.
Weakly-supervised learning (WSL) has recently attracted substantial interest because it mitigates the lack of pixel-wise annotations. Given only global image labels, WSL methods yield pixel-level predictions (segmentations), which make class predictions interpretable. Despite their recent success, mostly with natural images, such methods can face important challenges when the foreground and background regions share similar visual cues, yielding high false-positive rates in the segmentations, as is the case in challenging histology images.
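As an illustration of how image-level labels can yield pixel-level maps, below is a minimal class-activation-mapping (CAM) style sketch. CAM is one common WSL mechanism and is used here purely as an assumption, not as the method of this paper; the function name, arguments, and normalisation choices are illustrative.

import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx, out_size):
    # features: conv feature maps before global average pooling, (batch, C, h, w)
    # fc_weight: weights of the final linear classifier, (num_classes, C)
    # class_idx: class whose evidence we want to localise
    # out_size: (H, W) of the input image, for upsampling the coarse map
    cam = torch.einsum('c,bchw->bhw', fc_weight[class_idx], features)
    cam = F.relu(cam).unsqueeze(1)                      # keep positive evidence only
    cam = F.interpolate(cam, size=out_size, mode='bilinear', align_corners=False)
    cam = cam.squeeze(1)
    # Normalise each map to [0, 1] so it can serve as a soft segmentation mask.
    flat = cam.flatten(1)
    lo, hi = flat.min(dim=1, keepdim=True)[0], flat.max(dim=1, keepdim=True)[0]
    return ((flat - lo) / (hi - lo + 1e-8)).view_as(cam)

When foreground and background share visual cues, such maps tend to spread activation over the background as well, which is the false-positive behaviour discussed above.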
Widely used loss functions for CNN segmentation, e.g., Dice or cross-entropy, are based on integrals over the segmentation regions.
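As a concrete illustration, the soft Dice loss below is written as sums over the image domain, i.e., the discrete counterpart of the region integrals mentioned above. The binary single-foreground setting and tensor shapes are simplifying assumptions.

import torch

def soft_dice_loss(probs, target, eps=1e-6):
    # probs: predicted foreground probabilities, shape (batch, H, W)
    # target: binary ground-truth masks, same shape
    # Both terms below are sums ("integrals") over the segmentation regions.
    inter = (probs * target).sum(dim=(1, 2))
    union = probs.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    dice = (2.0 * inter + eps) / (union + eps)
    return 1.0 - dice.mean()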
Weakly-supervised learning based on, e.g., partially labelled images or image tags, is currently attracting significant attention in CNN segmentation as it can mitigate the need for full and laborious pixel/voxel annotations.
Purpose: Precise segmentation of the bladder walls and tumor regions is an essential step toward noninvasive identification of tumor stage and grade, which is critical for treatment decisions and prognosis in patients with bladder cancer (BC). However, automatic delineation of the bladder walls and tumor in magnetic resonance images (MRI) is a challenging task, due to large variations in bladder shape, strong intensity inhomogeneity in urine, and very high variability across the population, particularly in tumor appearance. To tackle these issues, we propose to leverage the representation capacity of deep fully convolutional neural networks.
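For orientation only, the sketch below shows what a fully convolutional segmentation network for the three regions mentioned (background, bladder wall, tumor) can look like. The layer sizes, class layout, and single-channel MRI input are illustrative assumptions, not the architecture proposed in the paper.

import torch.nn as nn

class TinyFCN(nn.Module):
    # A deliberately small fully convolutional net that maps an MRI slice to
    # per-pixel logits for three classes: background, bladder wall, tumor.
    def __init__(self, in_ch=1, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(32, num_classes, 1)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

    def forward(self, x):
        return self.up(self.head(self.encoder(x)))  # logits at input resolution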
IEEE Trans Neural Netw Learn Syst
May 2019
A growing number of applications, e.g., video surveillance and medical image analysis, require training recognition systems from large amounts of weakly annotated data, while some targeted interactions with a domain expert are allowed to improve the training process.