Deep neural networks have enabled major progress in semantic segmentation. However, even the most advanced neural architectures suffer from significant limitations. First, they are vulnerable to catastrophic forgetting, i.e., they perform poorly when required to incrementally update their model as new classes become available. Second, they rely on large amounts of pixel-level annotations to produce accurate segmentation maps. To tackle these issues, we introduce a novel incremental class learning approach for semantic segmentation that takes into account a peculiar aspect of this task: since each training step provides annotation only for a subset of all possible classes, pixels of the background class exhibit a semantic shift. Therefore, we revisit the traditional distillation paradigm by designing novel loss terms which explicitly account for the background shift. Additionally, we introduce a novel strategy to initialize the classifier's parameters at each step in order to prevent biased predictions toward the background class. Finally, we demonstrate that our approach can be extended to point- and scribble-based weakly supervised segmentation, modeling the partial annotations to create priors for unlabeled pixels. We demonstrate the effectiveness of our approach with an extensive evaluation on the Pascal-VOC, ADE20K, and Cityscapes datasets, significantly outperforming state-of-the-art methods.
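The background-shift idea above can be made concrete with a small sketch: instead of distilling the old model's background probability against the new model's background channel alone, it is compared against the new model's combined probability of background plus all newly added classes (since old-background pixels may now belong to a new class). This is a minimal PyTorch sketch of one way such a loss can be written, not the authors' exact implementation; the function name and the convention of background at channel 0 are assumptions.

```python
import torch
import torch.nn.functional as F

def background_aware_distillation(new_logits, old_logits, num_old_classes):
    """Distillation loss accounting for background shift (sketch).

    new_logits: (B, C_old + C_new, H, W) from the current model.
    old_logits: (B, C_old, H, W) from the frozen previous-step model.
    Channel 0 is assumed to be the background class.
    """
    log_p_new = F.log_softmax(new_logits, dim=1)
    # Merge background with the new-class channels: log of summed
    # probabilities via log-sum-exp over the log-probabilities.
    log_p_bkg = torch.logsumexp(
        torch.cat([log_p_new[:, :1], log_p_new[:, num_old_classes:]], dim=1),
        dim=1, keepdim=True)
    # Re-assemble a distribution over the old label space:
    # merged background followed by the unchanged old foreground classes.
    log_p = torch.cat([log_p_bkg, log_p_new[:, 1:num_old_classes]], dim=1)
    p_old = F.softmax(old_logits, dim=1)
    # Cross-entropy of the old model's soft targets against the
    # background-merged predictions, averaged over pixels.
    return -(p_old * log_p).sum(dim=1).mean()
```

The design choice is that the merged background channel absorbs the new classes' probability mass, so the current model is not penalized for predicting a new class on pixels the old model labeled background.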
DOI: http://dx.doi.org/10.1109/TPAMI.2021.3133954
Sensors (Basel)
December 2024
Master's Program in Information and Computer Science, Doshisha University, Kyoto 610-0394, Japan.
The semantic segmentation of bone structures demands pixel-level classification accuracy to create reliable bone models for diagnosis. While Convolutional Neural Networks (CNNs) are commonly used for segmentation, they often struggle with complex shapes due to their focus on texture features and limited ability to incorporate positional information. As orthopedic surgery increasingly requires precise automatic diagnosis, we explored SegFormer, an enhanced Vision Transformer model that better handles spatial awareness in segmentation tasks.
Sensors (Basel)
December 2024
State Grid Tianjin Electric Power Research Institute, Tianjin 300180, China.
Large oil-immersed transformers have metal-enclosed shells, making it difficult to visually inspect the internal insulation condition. In this work, visual inspection of internal defects is carried out using a self-developed micro-robot. Carbon traces are the main visual characteristic of internal insulation defects.
Sci Data
January 2025
University of Cordoba, Department of Computing and Numerical Analysis, Córdoba, 14071, Spain.
Acquiring gait metrics and anthropometric data is crucial for evaluating an individual's physical status. Automating this assessment process alleviates the burden on healthcare professionals and accelerates patient monitoring. Current automation techniques depend on specific, expensive systems such as OptoGait or MuscleLAB, which necessitate training and physical space.
Biomed Eng Lett
January 2025
Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
A weight-bearing lateral radiograph (WBLR) of the foot is a gold standard for diagnosing adult-acquired flatfoot deformity. However, it is difficult to measure the major axis of bones in WBLR without using auxiliary lines. Herein, we develop semantic segmentation with a deep learning model (DLm) on the WBLR of the foot for enhanced diagnosis of pes planus and pes cavus.
Sensors (Basel)
December 2024
School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China.
With the advancement of service robot technology, the demand for higher boundary precision in indoor semantic segmentation has increased. Traditional methods of extracting Euclidean features using point cloud and voxel data often neglect geodesic information, reducing boundary accuracy for adjacent objects and consuming significant computational resources. This study proposes a novel network, the Euclidean-geodesic network (EGNet), which uses point cloud-voxel-mesh data to characterize detail, contour, and geodesic features, respectively.