Publications by authors named "Chenqiu Zhao"

The goal of moving object segmentation is to separate moving objects from the stationary background in videos. A major challenge is developing a universal model that works across videos from diverse natural scenes, since previous methods are often effective only in specific scenes. In this paper, we propose Learning Temporal Distribution and Spatial Correlation (LTS), a method that has the potential to serve as a general solution for universal moving object segmentation.
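The abstract describes the idea only at a high level; as a rough illustration of modeling each pixel's temporal distribution and then applying a spatial-correlation step, the minimal NumPy sketch below builds per-pixel temporal histograms and smooths per-pixel scores over a local window. The function names, bin count, and window radius are assumptions for illustration, not the authors' LTS implementation.

```python
import numpy as np

def temporal_pixel_histograms(video, num_bins=32):
    """Per-pixel histograms of intensity over time; video has shape (T, H, W) with values in [0, 1]."""
    T, H, W = video.shape
    bins = np.clip((video * num_bins).astype(int), 0, num_bins - 1)  # (T, H, W) bin indices
    hist = np.zeros((H, W, num_bins), dtype=np.float32)
    for b in range(num_bins):
        hist[..., b] = (bins == b).sum(axis=0)  # count frames falling in bin b at each pixel
    return hist / T  # normalized temporal distribution per pixel

def spatial_smooth(score_map, radius=2):
    """Crude spatial-correlation step: average per-pixel foreground scores over a local window."""
    H, W = score_map.shape
    padded = np.pad(score_map, radius, mode="edge")
    out = np.zeros_like(score_map, dtype=np.float64)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + H, dx:dx + W]
    return out / (k * k)
```

In a full system the temporal distributions would be fed to a learned classifier rather than thresholded directly; only the feature-extraction and smoothing steps are sketched here.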

We propose a universal background subtraction framework based on an Arithmetic Distribution Neural Network (ADNN) that learns the distributions of temporal pixels, i.e., the values observed at each pixel over time. In the ADNN, arithmetic distribution operations are used to construct arithmetic distribution layers, namely a product distribution layer and a sum distribution layer. To further improve accuracy, an improved Bayesian refinement model based on neighboring information, with a GPU implementation, is incorporated.
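The product and sum distribution layers rest on arithmetic operations over distributions: the distribution of a sum of independent variables is the convolution of their histograms, and the distribution of a product redistributes the joint mass onto product values. The sketch below implements these two operations for discrete histograms on a shared uniform value grid; it illustrates the underlying arithmetic only, with the re-binning strategy as an assumption, and is not the learnable ADNN layers themselves.

```python
import numpy as np

def sum_distribution(p, q, values):
    """Distribution of X + Y for independent X ~ p, Y ~ q on the same uniform value grid.

    On a uniform grid this is a discrete convolution of the two histograms.
    """
    full = np.convolve(p, q)                         # support of the sum is twice as wide
    start = (len(full) - len(values)) // 2           # re-bin onto the original grid
    out = full[start:start + len(values)]            # (center crop, illustrative only)
    return out / out.sum()

def product_distribution(p, q, values):
    """Distribution of X * Y: accumulate p_i * q_j onto the bin nearest to values[i] * values[j]."""
    out = np.zeros_like(p)
    for i, vi in enumerate(values):
        for j, vj in enumerate(values):
            k = np.argmin(np.abs(values - vi * vj))
            out[k] += p[i] * q[j]
    return out / out.sum()
```

For example, with values = np.linspace(-1.0, 1.0, 201) and two normalized difference histograms p and q, both functions return a histogram on the same 201-bin grid.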

In this work, we ask whether there is a better way to classify two distributions than comparing histograms, and whether a deep learning network can be made to learn and classify distributions automatically. These improvements can have wide-ranging applications in computer vision and medical image processing. Specifically, we propose a new vessel segmentation method based on pixel distribution learning at multiple scales.
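As a toy illustration of letting a network classify distributions directly rather than comparing histograms by hand, the PyTorch sketch below takes per-pixel intensity histograms gathered at several spatial scales and predicts a vessel probability. The architecture, scale count, and bin count are assumptions made for the example, not the method proposed in the paper.

```python
import torch
import torch.nn as nn

class MultiScaleDistributionClassifier(nn.Module):
    """Toy classifier over per-pixel distributions gathered at several spatial scales.

    Input: (batch, num_scales, num_bins) histograms of intensities in patches of
    increasing size around a pixel; output: a vessel probability for that pixel.
    """
    def __init__(self, num_scales=3, num_bins=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                              # concatenate the per-scale histograms
            nn.Linear(num_scales * num_bins, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, hists):
        return torch.sigmoid(self.net(hists))

# Example: 8 pixels, 3 scales, 32 bins each -> 8 vessel probabilities.
probs = MultiScaleDistributionClassifier()(torch.rand(8, 3, 32))
```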

Single-feature background models often fail in complex scenes, since a pixel is better described by several features, each highlighting a different characteristic of it. Multi-feature background models have therefore drawn much attention recently. In this paper, we propose a novel multi-feature background model, named the stability of adaptive feature (SoAF) model, which uses the stability of each feature at a pixel to adaptively weight the features' contributions to foreground detection.
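The key quantity is a per-pixel, per-feature stability score used as an adaptive weight. The NumPy sketch below takes one simple reading of that idea, treating low temporal variance as high stability; the exact stability measure used in the SoAF model may differ, so this is an assumption-laden illustration rather than the paper's definition.

```python
import numpy as np

def adaptive_feature_weights(feature_history, eps=1e-6):
    """Stability-based weights: features with lower temporal variance at a pixel get larger weight.

    feature_history has shape (T, num_features, H, W).
    """
    variance = feature_history.var(axis=0)                 # (num_features, H, W)
    stability = 1.0 / (variance + eps)
    return stability / stability.sum(axis=0, keepdims=True)  # weights sum to 1 per pixel

def fuse_foreground_scores(per_feature_scores, weights):
    """Weighted combination of per-feature foreground scores, both shaped (num_features, H, W)."""
    return (weights * per_feature_scores).sum(axis=0)
```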
