Publications by authors named "Haksub Kim"

Visual surveillance produces a significant amount of raw video data that can be time-consuming to browse and analyze. In this work, we present a video synopsis methodology, "scene adaptive online video synopsis via dynamic tube rearrangement using octree (SSOcT)," that can effectively condense input surveillance videos. Our method summarizes the input video by analyzing scene characteristics and determining an effective spatio-temporal 3D structure for video synopsis.
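
The core data structure here is an octree over the (x, y, t) volume of the synopsis. As a generic illustration only (this is not the authors' SSOcT algorithm; the `Box`/`Octree` classes and all parameters below are assumptions), an octree can index activity "tubes" by their spatio-temporal bounding boxes and answer the collision queries needed when rearranging a tube to a new time slot:

```python
from itertools import product

class Box:
    """Axis-aligned box in (x, y, t): a tube's spatio-temporal bounding volume."""
    def __init__(self, lo, hi):
        self.lo, self.hi = tuple(lo), tuple(hi)

    def intersects(self, other):
        return all(self.lo[i] < other.hi[i] and other.lo[i] < self.hi[i]
                   for i in range(3))

    def contains(self, other):
        return all(self.lo[i] <= other.lo[i] and other.hi[i] <= self.hi[i]
                   for i in range(3))

class Octree:
    """Recursively subdivides (x, y, t) space to store tube bounding boxes."""
    MAX_ITEMS, MAX_DEPTH = 4, 6  # illustrative thresholds, not from the paper

    def __init__(self, bounds, depth=0):
        self.bounds, self.depth = bounds, depth
        self.items = []        # boxes stored at this node
        self.children = None   # eight sub-octants once split

    def insert(self, box):
        if self.children is not None:
            for child in self.children:
                if child.bounds.contains(box):
                    child.insert(box)
                    return
        self.items.append(box)  # straddles octants, or node not yet split
        if (self.children is None and len(self.items) > self.MAX_ITEMS
                and self.depth < self.MAX_DEPTH):
            self._split()

    def _split(self):
        lo, hi = self.bounds.lo, self.bounds.hi
        mid = tuple((lo[i] + hi[i]) / 2 for i in range(3))
        self.children = []
        for bits in product((0, 1), repeat=3):  # eight octants
            clo = tuple(lo[i] if bits[i] == 0 else mid[i] for i in range(3))
            chi = tuple(mid[i] if bits[i] == 0 else hi[i] for i in range(3))
            self.children.append(Octree(Box(clo, chi), self.depth + 1))
        stay = []
        for b in self.items:  # push fully contained boxes down a level
            for child in self.children:
                if child.bounds.contains(b):
                    child.insert(b)
                    break
            else:
                stay.append(b)
        self.items = stay

    def collides(self, box):
        """True if `box` overlaps any stored tube: the rearrangement test."""
        if not self.bounds.intersects(box):
            return False
        if any(b.intersects(box) for b in self.items):
            return True
        return self.children is not None and any(
            c.collides(box) for c in self.children)

# Try placing a tube at a candidate (x, y, t) slot in a 1920x1080, 300-frame volume.
tree = Octree(Box((0, 0, 0), (1920, 1080, 300)))
tree.insert(Box((0, 0, 0), (100, 100, 50)))               # an already-placed tube
print(tree.collides(Box((50, 50, 25), (150, 150, 75))))   # True: overlap
print(tree.collides(Box((500, 500, 0), (600, 600, 50))))  # False: free slot
```

A rearrangement loop would shift a tube's time interval until `collides` returns `False`; the octree keeps each query logarithmic in practice rather than linear in the number of placed tubes.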

Visual saliency on stereoscopic 3D (S3D) images has been shown to be heavily influenced by image quality. This dependency makes saliency an important factor in image quality prediction, image restoration, and visual discomfort reduction, yet the nonlinear relation between quality and saliency remains very difficult to predict. In addition, most algorithms designed to detect visual saliency on pristine images unsurprisingly fail when applied to distorted images.

We describe a new 3D saliency prediction model that accounts for diverse low-level luminance, chrominance, motion, and depth attributes of 3D videos, as well as high-level classification of scenes by type. The model also accounts for perceptual factors, such as the nonuniform resolution of the human eye, the stereoscopic limits imposed by Panum's fusional area, and the predicted degree of (dis)comfort felt when viewing the 3D video. The high-level analysis classifies each 3D video scene by type according to the estimated camera motion and the motions of objects in the video.
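
Multi-cue saliency models of this kind typically normalize each feature map and combine them into a single map. As a minimal generic sketch (the `fuse_saliency` function, the cue names, and the weights below are illustrative assumptions; the published model's actual weighting, scene-type classification, and perceptual terms are not reproduced here):

```python
import numpy as np

def fuse_saliency(feature_maps, weights):
    """Linearly fuse per-cue feature maps into one saliency map.

    feature_maps: dict of cue name -> 2D array (same shape for all cues).
    weights: dict of cue name -> nonnegative weight.
    Each map is min-max normalized to [0, 1] before weighting, so no single
    cue dominates purely because of its numeric range.
    """
    fused = np.zeros_like(next(iter(feature_maps.values())), dtype=float)
    for name, fmap in feature_maps.items():
        m = fmap.astype(float)
        rng = m.max() - m.min()
        if rng > 0:
            m = (m - m.min()) / rng
        fused += weights.get(name, 0.0) * m
    total = sum(weights.get(n, 0.0) for n in feature_maps)
    return fused / total if total else fused

# Illustrative usage with random stand-ins for the four low-level cues.
gen = np.random.default_rng(0)
maps = {name: gen.random((64, 64))
        for name in ("luminance", "chrominance", "motion", "depth")}
weights = {"luminance": 0.3, "chrominance": 0.2, "motion": 0.3, "depth": 0.2}
saliency = fuse_saliency(maps, weights)  # 64x64 map with values in [0, 1]
```

In a scene-adaptive model, the weight dictionary would be selected per scene type (e.g. different weights for static-camera versus moving-camera scenes), which is one simple way the high-level classification can modulate the low-level fusion.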
