We address the problem of semantic nighttime image segmentation and improve the state of the art by adapting daytime models to nighttime without using nighttime annotations. Moreover, we design a new evaluation framework to address the substantial uncertainty of semantics in nighttime images. Our central contributions are: 1) a curriculum framework that gradually adapts semantic segmentation models from day to night through progressively darker times of day, exploiting cross-time-of-day correspondences between daytime images from a reference map and dark images to guide label inference in the dark domains; 2) a novel uncertainty-aware annotation and evaluation framework and metric for semantic segmentation that includes image regions beyond human recognition capability in the evaluation in a principled fashion; 3) the Dark Zurich dataset, comprising 2416 unlabeled nighttime and 2920 unlabeled twilight images with correspondences to their daytime counterparts, plus a set of 201 nighttime images with fine pixel-level annotations created with our protocol, which serves as a first benchmark for our novel evaluation. Experiments show that our map-guided curriculum adaptation significantly outperforms state-of-the-art methods on nighttime sets, both for standard metrics and for our uncertainty-aware metric. Furthermore, our uncertainty-aware evaluation reveals that selective invalidation of predictions can improve results on data with ambiguous content such as our benchmark and benefit safety-oriented applications involving invalid inputs.
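The selective-invalidation idea behind contribution 2) can be illustrated with a toy metric: a prediction that abstains on a region annotators marked as beyond recognition is not penalized, while a confident guess there counts against the model. This is a simplified sketch, not the paper's exact uncertainty-aware metric; the `INVALID` value and the function name are assumptions made for illustration.

```python
import numpy as np

INVALID = 255  # illustrative label for regions beyond human recognition


def uncertainty_aware_iou(pred, gt, num_classes):
    """Mean IoU over classes, with a simple uncertainty-aware twist.

    Pixels labeled INVALID in the ground truth are ignored when the model
    also abstains (predicts INVALID), but count as false positives when the
    model outputs a confident class label there.
    """
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))  # includes guesses on INVALID regions
        fn = np.sum((pred != c) & (gt == c))
        denom = tp + fp + fn
        ious.append(tp / denom if denom else float("nan"))
    return float(np.nanmean(ious))
```

On a toy example, a model that invalidates its prediction on the ambiguous pixel scores higher than one that guesses, mirroring the abstract's observation that selective invalidation can improve results.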


Source: http://dx.doi.org/10.1109/TPAMI.2020.3045882


Similar Publications

In autonomous driving, the fusion of multiple sensors is considered essential to improve the accuracy and safety of 3D object detection. Currently, a fusion scheme combining low-cost cameras with highly robust radars can counteract the performance degradation caused by harsh environments. In this paper, we propose the IRBEVF-Q model, which mainly consists of a BEV (Bird's Eye View) fusion coding module and an object decoder module.


Fitting Low-Resolution Protein Structures into Cryo-EM Density Maps by Multiobjective Optimization of Global and Local Correlations.

J Phys Chem B

January 2021

Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai, China.

The rigid-body fitting of predicted structural models into cryo-electron microscopy (cryo-EM) density maps is a necessary procedure for density map-guided protein structure determination and prediction. We propose a novel multiobjective optimization protocol, MOFIT, which performs rigid-body density-map fitting based on particle swarm optimization (PSO). MOFIT was tested on a large set of 292 nonhomologous single-domain proteins.
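The core mechanic of PSO-based rigid-body fitting can be sketched in miniature. MOFIT itself optimizes global and local correlations over full rigid-body transforms; the following is a 2-D, translation-only toy version, so the function names, the density-sampling score, and all parameters are illustrative assumptions rather than MOFIT's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)


def fit_score(shift, model_pts, density):
    """Score a candidate shift: mean density sampled at shifted model points."""
    pts = np.round(model_pts + shift).astype(int)
    h, w = density.shape
    inside = (pts[:, 0] >= 0) & (pts[:, 0] < h) & (pts[:, 1] >= 0) & (pts[:, 1] < w)
    if not inside.any():
        return 0.0
    return float(density[pts[inside, 0], pts[inside, 1]].mean())


def pso_fit(model_pts, density, n_particles=30, iters=60):
    """Particle swarm search for the translation maximizing fit_score."""
    bounds = np.array(density.shape, dtype=float)
    pos = rng.uniform(0, bounds, size=(n_particles, 2))  # candidate shifts
    vel = np.zeros_like(pos)
    pbest = pos.copy()  # each particle's best-so-far position
    pbest_score = np.array([fit_score(p, model_pts, density) for p in pos])
    gbest = pbest[pbest_score.argmax()].copy()  # swarm's best-so-far position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        # standard PSO update: inertia + pull toward personal and global bests
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        scores = np.array([fit_score(p, model_pts, density) for p in pos])
        improved = scores > pbest_score
        pbest[improved], pbest_score[improved] = pos[improved], scores[improved]
        gbest = pbest[pbest_score.argmax()].copy()
    return gbest
```

Run on a synthetic Gaussian "density map" peaked at (20, 20) with a small point cluster centered at the origin, the swarm recovers a shift near (20, 20) without any gradient information, which is why PSO suits correlation landscapes evaluated only by sampling.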



A New Protocol for Atomic-Level Protein Structure Modeling and Refinement Using Low-to-Medium Resolution Cryo-EM Density Maps.

J Mol Biol

September 2020

Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI 48109, USA; Department of Biological Chemistry, University of Michigan, Ann Arbor, MI 48109, USA. Electronic address:

The rapid progress of cryo-electron microscopy (cryo-EM) in structural biology has raised an urgent need for robust methods to create and refine atomic-level structural models using low-resolution EM density maps. We propose a new protocol to create initial models using I-TASSER protein structure prediction, followed by EM density map-based rigid-body structure fitting, flexible fragment adjustment and atomic-level structure refinement simulations. The protocol was tested on a large set of 285 non-homologous proteins and generated structural models with correct folds for 260 proteins, where 28% had RMSDs below 2 Å.
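The RMSD figure quoted above (28% of models below 2 Å) is computed after optimally superposing the model onto the reference structure. A minimal sketch of that standard computation, using the Kabsch algorithm, is shown below; this is generic reference code, not the protocol's own evaluation script.

```python
import numpy as np


def kabsch_rmsd(P, Q):
    """RMSD between two Nx3 coordinate sets after optimal superposition.

    Centers both sets, finds the optimal rotation via SVD (Kabsch algorithm,
    with a reflection correction), then measures root-mean-square deviation.
    """
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))  # avoid improper rotations (reflections)
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))
```

A structure compared against a rotated copy of itself yields an RMSD of (numerically) zero, confirming the superposition removes the rigid-body difference before deviations are measured.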


Workflow Barriers and Strategies to Reduce Antibiotic Overuse in Nursing Homes.

J Am Geriatr Soc

October 2020

Division of Infectious Disease, Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin, USA.

Objectives: Antibiotic overuse is a significant problem in nursing homes (NHs). Strategies to improve antibiotic prescribing practices in NHs are a critical need. In this study, we analyzed antibiotic prescribing workflows to identify strategies for improving antibiotic prescribing in NHs.

