Publications by authors named "Andre Kaup"

In this paper, we provide an in-depth assessment of the Bjøntegaard Delta. We construct a large data set of video compression performance comparisons using a diverse set of metrics including PSNR, VMAF, bitrate, and processing energies. These metrics are evaluated for visual data types such as classic perspective video, 360° video, point clouds, and screen content.
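
The snippet does not restate the metric itself; as a reminder of the underlying computation, here is a minimal BD-rate sketch following the standard Bjøntegaard construction (cubic fit of log-bitrate over PSNR, averaged over the overlapping quality interval). This is the textbook formulation, not code from the paper:

```python
import numpy as np

def bd_rate(rates_anchor, psnr_anchor, rates_test, psnr_test):
    """Bjøntegaard delta bitrate: average relative bitrate difference (%)
    between two rate-distortion curves at equal quality."""
    # Work in the log-rate domain, as in the original Bjøntegaard method.
    lr_a = np.log(rates_anchor)
    lr_t = np.log(rates_test)
    # Fit cubic polynomials log-rate = p(PSNR) to each curve.
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    # Integrate over the overlapping quality interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a, int_t = np.polyint(p_a), np.polyint(p_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    # Convert the average log-rate difference back to a percentage.
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0
```

A test codec that needs exactly half the bitrate at every quality level yields a BD-rate of -50%.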

In this paper, a synthetic hyperspectral video database is introduced. Since it is impossible to record ground-truth hyperspectral videos, this database makes it possible to evaluate algorithms in diverse applications. For all scenes, depth maps are provided as well, yielding the position of each pixel in all spatial dimensions in addition to its reflectance in the spectral dimension.

Light spectra are a very important source of information for diverse classification problems, e.g., for discrimination of materials.

Recently, many new applications have arisen for multispectral and hyperspectral imaging. Besides modern biometric systems for identity verification, agricultural and medical applications have emerged that measure the health condition of plants and humans. Despite the growing demand, the acquisition of multispectral data remains complicated to the present day.

The use of embedded systems is omnipresent in our everyday life, e.g., in smartphones, tablets, and automotive devices.

Lossless compression of dynamic 2-D+t and 3-D+t medical data is challenging due to the huge amount of data, the characteristics of the inherent noise, and the high bit depth. Beyond that, a scalable representation is often required in telemedicine applications. Motion Compensated Temporal Filtering works well for lossless compression of medical volume data and additionally provides temporal, spatial, and quality scalability features.
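
Motion Compensated Temporal Filtering is typically built on temporal lifting; a minimal sketch of the lossless integer Haar lifting step it rests on (shown here without the motion compensation that the actual method adds between the predict and update steps):

```python
import numpy as np

def haar_lifting_forward(a, b):
    """One lossless temporal Haar lifting step on two frames (integer arrays).
    The predict step yields a highpass (detail) frame, the update step a
    lowpass (temporal average) frame; integer arithmetic keeps it reversible."""
    h = b.astype(np.int64) - a.astype(np.int64)   # predict: temporal detail
    l = a.astype(np.int64) + (h >> 1)             # update: temporal average
    return l, h

def haar_lifting_inverse(l, h):
    """Undo the lifting steps in reverse order: reconstruction is exact."""
    a = l - (h >> 1)
    b = h + a
    return a, b
```

Because each lifting step is inverted exactly, the round trip is lossless for arbitrary integer frames, including high-bit-depth medical data.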

Capturing ground truth data to benchmark super-resolution (SR) is challenging. Therefore, current quantitative studies are mainly evaluated on simulated data artificially sampled from ground truth images. We argue that such evaluations overestimate the actual performance of SR methods compared to their behavior on real images.
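
The simulated data criticized here is commonly produced by filtering and decimating ground-truth images; a sketch of one such artificial sampling pipeline (box filter plus subsampling, purely illustrative, since the degradation models used in individual benchmarks differ):

```python
import numpy as np

def simulate_lr(hr, scale=2):
    """Typical simulated degradation for SR benchmarks: average each
    scale-by-scale block of the ground-truth image (box blur + decimation)."""
    h, w = hr.shape
    h, w = h - h % scale, w - w % scale   # crop to a multiple of the scale
    x = hr[:h, :w].astype(np.float64)
    # Reshape into blocks and take the block mean.
    return x.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
```

Real cameras introduce noise, unknown blur kernels, and misalignment that this idealized model ignores, which is one reason such evaluations can be optimistic.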

This paper considers online robust principal component analysis (RPCA) in time-varying decomposition problems such as video foreground-background separation. We propose a compressive online RPCA algorithm that recursively decomposes a sequence of data vectors (e.g.
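
The snippet does not include the algorithm itself; for orientation, here is a basic batch principal component pursuit via an inexact-ALM iteration, i.e. the standard RPCA baseline that online, compressive variants build on, not the proposed method:

```python
import numpy as np

def soft(x, tau):
    """Elementwise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_pcp(M, lam=None, n_iter=100, rho=1.05):
    """Principal component pursuit sketch (inexact ALM): split M into a
    low-rank part L (background) plus a sparse part S (foreground)."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))     # standard PCP sparsity weight
    mu = 1.25 / np.linalg.norm(M, 2)       # penalty parameter initialization
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank update: singular value thresholding at level 1/mu.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft(s, 1.0 / mu)) @ Vt
        # Sparse update: elementwise soft thresholding at level lam/mu.
        S = soft(M - L + Y / mu, lam / mu)
        # Dual ascent on the constraint M = L + S; tighten the penalty.
        Y = Y + mu * (M - L - S)
        mu *= rho
    return L, S
```

The online algorithm in the paper processes one data vector at a time and works from compressive measurements, whereas this sketch needs the full matrix.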

The implementation of automatic image registration is still difficult in various applications. In this paper, an automatic image registration approach through line-support region segmentation and geometrical outlier removal is proposed. This new approach is designed to address the problems associated with the registration of images with affine deformations and inconsistent content, such as remote sensing images with different spectral content or noise interference, or map images with inconsistent annotations.

Due to their high resolution, dynamic medical 2D+t and 3D+t volumes from computed tomography (CT) and magnetic resonance tomography (MR) reach a size that makes them very unwieldy for teleradiologic applications. A lossless scalable representation offers the advantage of a down-scaled version that can be used for orientation or previewing, while the remaining information for reconstructing the full resolution is transmitted on demand. The wavelet transform offers the desired scalability.
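
A common reversible transform for such lossless scalable coding is the integer 5/3 (LeGall) wavelet known from JPEG 2000; a one-dimensional lifting sketch for even-length signals follows, where the lowpass band doubles as the down-scaled preview. This is illustrative only; the transform and boundary handling in the paper may differ:

```python
import numpy as np

def cdf53_forward(x):
    """Reversible integer 5/3 lifting (1-D, even length, symmetric extension).
    Returns the lowpass band s (half-resolution preview) and detail band d."""
    x = x.astype(np.int64)
    even, odd = x[0::2], x[1::2]
    even_r = np.append(even[1:], even[-1])        # mirror right boundary
    d = odd - ((even + even_r) >> 1)              # predict step
    d_l = np.insert(d[:-1], 0, d[0])              # mirror left boundary
    s = even + ((d_l + d + 2) >> 2)               # update step
    return s, d

def cdf53_inverse(s, d):
    """Invert the lifting steps in reverse order; reconstruction is exact."""
    d_l = np.insert(d[:-1], 0, d[0])
    even = s - ((d_l + d + 2) >> 2)
    even_r = np.append(even[1:], even[-1])
    odd = d + ((even + even_r) >> 1)
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Transmitting only s gives the down-scaled version; sending d later restores the full resolution losslessly, which is exactly the on-demand behavior described above.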

Pixelwise linear prediction using backward-adaptive least-squares or weighted least-squares estimation of prediction coefficients is currently among the state-of-the-art methods for lossless image compression. While current research is focused on mean intensity prediction of the pixel to be transmitted, best compression requires occurrence probability estimates for all possible intensity values. Apart from common heuristic approaches, we show how prediction error variance estimates can be derived from the (weighted) least-squares training region and how a complete probability distribution can be built based on an autoregressive image model.
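
A minimal sketch of backward-adaptive least-squares prediction as described: coefficients are estimated from a causal training region, and the training residuals additionally yield the prediction error variance estimate mentioned above. The neighbor and training-window offsets below are illustrative choices, not the paper's configuration:

```python
import numpy as np

def ls_predict(img, y, x, order_offsets, train_offsets):
    """Backward-adaptive least-squares prediction for pixel (y, x).
    Regressors are causal neighbors; training pairs come from a causal
    window, so the decoder can repeat the estimation without side info.
    Returns the predicted intensity and a residual-based variance estimate."""
    rows, targets = [], []
    for dy, dx in train_offsets:
        ty, tx = y + dy, x + dx
        rows.append([img[ty + oy, tx + ox] for oy, ox in order_offsets])
        targets.append(img[ty, tx])
    X = np.asarray(rows, dtype=np.float64)
    t = np.asarray(targets, dtype=np.float64)
    w, *_ = np.linalg.lstsq(X, t, rcond=None)      # LS coefficient estimate
    resid = t - X @ w
    # Prediction error variance estimated from the training residuals.
    var = float(resid @ resid) / max(len(t) - len(w), 1)
    cur = np.array([img[y + oy, x + ox] for oy, ox in order_offsets],
                   dtype=np.float64)
    return float(cur @ w), var
```

On a locally linear signal the predictor is exact and the variance estimate collapses to zero; on noisy texture the variance widens, which is what a probability model for all intensity values needs.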

Even though image signals are typically defined on a regular 2D grid, there also exist many scenarios where this is not the case and the amplitude of the image signal is only available for a non-regular subset of pixel positions. In such a case, the image has to be resampled to a regular grid. This is necessary since almost all algorithms and technologies for processing, transmitting, or displaying image signals rely on the samples being available on a regular grid.
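
A simple stand-in for such resampling is scattered-data interpolation onto the target grid; the sketch below uses inverse-distance weighting purely for illustration (actual reconstruction methods in this line of work are considerably more sophisticated):

```python
import numpy as np

def resample_to_grid(xs, ys, vals, h, w, p=2.0, eps=1e-9):
    """Inverse-distance-weighted resampling of scattered samples (xs, ys,
    vals) onto a regular h-by-w grid; eps avoids division by zero when a
    grid point coincides with a sample position."""
    gy, gx = np.mgrid[0:h, 0:w]
    # Squared distance from every grid point to every known sample: (h, w, n).
    d2 = (gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2
    wgt = 1.0 / (d2 ** (p / 2) + eps)     # closer samples weigh more
    return (wgt * vals).sum(-1) / wgt.sum(-1)
```

Grid points that coincide with known samples reproduce them almost exactly, while the remaining positions receive a smooth distance-weighted blend.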

In this paper, two multiple description coding schemes are developed, based on prediction-induced randomly offset quantizers and unequal-deadzone-induced near-uniformly offset quantizers, respectively. In both schemes, each description encodes one source subset with a small quantization stepsize, and other subsets are predictively coded with a large quantization stepsize. In the first method, due to predictive coding, the quantization bins that a coefficient belongs to in different descriptions are randomly overlapped.
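
The schemes above derive their offsets from prediction and from unequal deadzones; the basic mechanism they exploit, two staggered scalar quantizers whose bin intersection gives a finer central reconstruction, can be sketched as follows (stepsizes and offsets are illustrative, not the paper's design):

```python
import math

def quantize(x, step, offset=0.0):
    """Uniform midtread quantizer shifted by an offset; returns the bin index."""
    return math.floor((x - offset) / step + 0.5)

def dequantize(q, step, offset=0.0):
    """Bin center of index q for the given stepsize and offset."""
    return q * step + offset

def central_reconstruct(q1, q2, step, offset):
    """Central decoder: intersect the two staggered bins (each of width
    `step`) and return the midpoint of the intersection."""
    c1 = dequantize(q1, step)
    c2 = dequantize(q2, step, offset)
    lo = max(c1 - step / 2, c2 - step / 2)
    hi = min(c1 + step / 2, c2 + step / 2)
    return (lo + hi) / 2
```

If only one description arrives, the side decoder falls back to that quantizer's bin center; when both arrive, the intersection of the offset bins roughly halves the reconstruction interval, e.g. for x = 0.7, step 1.0, offset 0.5, the central estimate 0.75 beats either side estimate.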

In this paper, we derive a spatiotemporal extrapolation method for 3-D discrete signals. Extending a discrete signal beyond a limited number of known samples is commonly referred to as discrete signal extrapolation. Extrapolation problems arise in many applications in video communications.
