Publications by authors named "Christos G Bampis"

Video dimensions are continuously increasing to provide more realistic and immersive experiences to global streaming and social media viewers. However, increases in video parameters such as spatial resolution and frame rate are inevitably associated with larger data volumes. Transmitting increasingly voluminous videos through limited-bandwidth networks in a perceptually optimal way is a current challenge affecting billions of viewers.

Because of the increasing ease of video capture, many millions of consumers create and upload large volumes of User-Generated Content (UGC) videos to social and streaming media sites over the Internet. UGC videos are commonly captured by naive users with limited skills and imperfect technique, and they tend to be afflicted by mixtures of highly diverse in-capture distortions. These UGC videos are then often uploaded for sharing to cloud servers, where they are further compressed for storage and transmission.

Measuring Quality of Experience (QoE) and integrating these measurements into video streaming algorithms is a multi-faceted problem that fundamentally requires the design of comprehensive subjective QoE databases and objective QoE prediction models. To achieve this goal, we have recently designed the LIVE-NFLX-II database, a highly realistic database that contains subjective QoE responses to various design dimensions, such as bitrate adaptation algorithms, network conditions, and video content. Our database builds on recent advancements in content-adaptive encoding and incorporates actual network traces to capture realistic network variations on the client device.

Measuring the quality of digital videos viewed by human observers has become common practice in numerous multimedia applications, such as adaptive video streaming, quality monitoring, and other digital TV applications. Here we explore a significant, yet relatively unexplored problem: measuring the perceptual quality of videos afflicted by both luma and chroma compression distortions. Toward investigating this problem, it is important to understand the kinds of chroma distortions that arise, how they relate to luma compression distortions, and how they can affect perceived quality.
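
As a rough illustration of measuring luma and chroma distortions separately, the sketch below converts a reference/distorted RGB pair to YCbCr and computes per-channel PSNR; the function names and the choice of PSNR are assumptions for illustration, not the quality model studied in the article.

```python
import numpy as np
from skimage.color import rgb2ycbcr  # RGB -> luma (Y) + chroma (Cb, Cr)

def channel_psnr(ref, dist, peak=255.0):
    """PSNR of a single channel; higher means less distortion."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def luma_chroma_psnr(ref_rgb, dist_rgb):
    """Separate PSNR values for the Y (luma) and Cb/Cr (chroma) channels."""
    ref_ycc, dist_ycc = rgb2ycbcr(ref_rgb), rgb2ycbcr(dist_rgb)
    return {name: channel_psnr(ref_ycc[..., i], dist_ycc[..., i])
            for i, name in enumerate(("Y", "Cb", "Cr"))}
```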

The use of ℓp norms (p = 1, 2) has largely dominated the measurement of loss in neural networks due to their simplicity and analytical properties. However, when used to assess the loss of visual information, these simple norms are not very consistent with human perception. Here, we describe a different "proximal" approach to optimize image analysis networks against quantitative perceptual models.
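
A minimal sketch of the contrast between a plain ℓ2 objective and a perceptually weighted one follows, with SSIM standing in for the quantitative perceptual model; the weight `alpha` and the SSIM stand-in are assumptions for illustration, not the proximal formulation described here.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim  # stand-in perceptual model

def l2_loss(ref, rec):
    """Plain pixel-wise squared error (the classical l2 objective)."""
    return np.mean((ref - rec) ** 2)

def perceptually_weighted_loss(ref, rec, alpha=0.5):
    """Illustrative mix of an l2 data term and a perceptual (1 - SSIM) term.
    Assumes grayscale images in [0, 1]; alpha is an arbitrary illustrative weight."""
    data_term = l2_loss(ref, rec)
    percep_term = 1.0 - ssim(ref, rec, data_range=1.0)  # 0 when perceptually identical
    return alpha * data_term + (1.0 - alpha) * percep_term
```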

In a typical communication pipeline, images undergo a series of processing steps that can introduce visual distortions before they are viewed. Given a high-quality reference image, a reference (R) image quality assessment (IQA) algorithm can be applied after compression or transmission. However, the assumption of a high-quality reference image often does not hold in practice, contributing to less accurate quality predictions when stand-alone R IQA models are used.
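
One plausible way to account for an imperfect reference, sketched below under stated assumptions, is to modulate a full-reference score by a no-reference estimate of the reference's own quality; the crude `nr_quality` proxy and the multiplicative fusion are purely illustrative and not necessarily the model proposed in the article.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def nr_quality(img):
    """Placeholder no-reference quality estimate in [0, 1].
    A real system would use a trained NR IQA model; this gradient-energy
    proxy is purely illustrative."""
    gy, gx = np.gradient(img)
    return float(np.clip(np.mean(np.hypot(gx, gy)) * 10.0, 0.0, 1.0))

def two_step_score(reference, distorted):
    """Full-reference comparison modulated by the estimated quality of the
    reference itself; the multiplicative fusion is an assumption for illustration."""
    fr = ssim(reference, distorted, data_range=1.0)  # full-reference term
    nr = nr_quality(reference)                       # how trustworthy is the reference?
    return nr * fr
```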

Streaming video services represent a very large fraction of global bandwidth consumption. Because of the exploding demand for mobile video streaming services, coupled with limited bandwidth availability, video streams are often transmitted through unreliable, low-bandwidth networks. This unavoidably leads to two major types of streaming-related impairments: compression artifacts and/or rebuffering events.
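
As a toy illustration of how such impairments can be folded into a single per-session quality number, the sketch below averages per-segment quality and subtracts penalties for rebuffering time and quality switches; the weights and functional form are assumptions, not a validated QoE model.

```python
def session_qoe(segment_quality, rebuffer_seconds,
                rebuffer_penalty=2.0, switch_penalty=0.5):
    """Toy QoE summary: mean segment quality minus rebuffering and
    quality-switching penalties. Weights are illustrative only."""
    mean_q = sum(segment_quality) / len(segment_quality)
    switches = sum(abs(a - b) for a, b in zip(segment_quality[1:], segment_quality[:-1]))
    return mean_q - rebuffer_penalty * rebuffer_seconds - switch_penalty * switches

# Example: a session with a mid-stream quality drop and 1.5 s of rebuffering.
print(session_qoe([80, 80, 60, 60, 80], rebuffer_seconds=1.5))
```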

Many existing Natural Scene Statistics (NSS)-based no-reference image quality assessment (NR IQA) algorithms employ parametric distributions to capture the statistical inconsistencies of the bandpass coefficients of distorted images. Here we propose a model of natural image coefficients, expressed in the bandpass spatial domain, that has the potential to capture higher-order correlations that may be induced by the presence of distortions. We analyze how the parameters of the multivariate model are affected by different distortion types, and we show their ability to capture distortion-sensitive image quality information.
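
For orientation, the sketch below illustrates the general NSS recipe with a univariate fit: compute mean-subtracted contrast-normalized (MSCN) bandpass coefficients and moment-match a generalized Gaussian shape parameter. This is a standard univariate feature used only for illustration, not the multivariate model proposed here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma as gamma_fn

def mscn_coefficients(img, sigma=7.0 / 6.0):
    """Mean-subtracted contrast-normalized (MSCN) bandpass coefficients
    of a grayscale float image."""
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    return (img - mu) / (np.sqrt(np.maximum(var, 0.0)) + 1.0)

def fit_ggd_shape(coeffs):
    """Moment-matched shape parameter of a zero-mean generalized Gaussian:
    match the sample ratio (E|x|)^2 / E[x^2] to its closed-form value
    Gamma(2/g)^2 / (Gamma(1/g) Gamma(3/g)) over a grid of candidate shapes g."""
    x = coeffs.ravel()
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    shapes = np.arange(0.2, 10.0, 0.001)
    rho_model = gamma_fn(2.0 / shapes) ** 2 / (gamma_fn(1.0 / shapes) * gamma_fn(3.0 / shapes))
    return shapes[np.argmin((rho_model - rho) ** 2)]
```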

HTTP adaptive streaming is increasingly being deployed by network content providers such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next, based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE).
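
A generic throughput-based bitrate selection rule is sketched below to illustrate the mechanism; the harmonic-mean estimator and the safety margin are illustrative choices, not a specific deployed ABR algorithm.

```python
def choose_bitrate(ladder_kbps, throughput_samples_kbps, safety=0.8):
    """Pick the highest encoded bitrate that fits under a conservative
    throughput estimate (harmonic mean of recent samples times a safety margin)."""
    n = len(throughput_samples_kbps)
    harmonic_mean = n / sum(1.0 / t for t in throughput_samples_kbps)
    budget = safety * harmonic_mean
    feasible = [b for b in sorted(ladder_kbps) if b <= budget]
    return feasible[-1] if feasible else min(ladder_kbps)

# Example: a four-rung bitrate ladder and three recent throughput measurements.
print(choose_bitrate([300, 750, 1850, 4300], [2200, 1900, 2500]))
```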

We propose graph-driven approaches to image segmentation by developing diffusion processes defined on arbitrary graphs. We formulate a solution to the image segmentation problem, modeled as the result of infectious wavefronts propagating on an image-driven graph whose nodes correspond to pixels. By relating the popular Susceptible-Infected-Recovered (SIR) epidemic propagation model to the Random Walker algorithm, we develop the Normalized Random Walker and a lazy random walker variant.
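
For context, the sketch below implements the classical Random Walker formulation that these variants build on: Gaussian-weighted edges on a 4-connected pixel graph and a sparse Dirichlet solve with seed probabilities held fixed. It is a plain single-label sketch, not the Normalized or epidemic-driven variants, and the `seeds` convention is an assumption for illustration.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def random_walker_probabilities(img, seeds, beta=90.0):
    """Classical single-label Random Walker on a 4-connected pixel graph.
    `seeds` maps flattened pixel indices to fixed probabilities (1.0 foreground,
    0.0 background); unseeded probabilities solve the combinatorial Dirichlet
    problem L_uu x_u = -L_us x_s."""
    h, w = img.shape
    n = h * w
    flat = img.ravel().astype(float)
    idx = np.arange(n).reshape(h, w)
    L = lil_matrix((n, n))
    # Gaussian-weighted edges between horizontal and vertical neighbours.
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        for i, j in zip(a.ravel(), b.ravel()):
            wij = np.exp(-beta * (flat[i] - flat[j]) ** 2)
            L[i, j] -= wij
            L[j, i] -= wij
            L[i, i] += wij
            L[j, j] += wij
    seeded = np.array(sorted(seeds))
    fixed = np.array([seeds[s] for s in seeded], dtype=float)
    free = np.setdiff1d(np.arange(n), seeded)
    L = csr_matrix(L)
    x_free = spsolve(L[free][:, free], -L[free][:, seeded] @ fixed)
    probs = np.empty(n)
    probs[seeded] = fixed
    probs[free] = x_free
    return probs.reshape(h, w)  # threshold at 0.5 to obtain the segmentation
```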

The aim of this work is to present a modification of the Random Walker algorithm for the segmentation of occlusal caries in photographic color images. The modification improves the detection performance and execution time of the classical Random Walker algorithm, and it also addresses the limitations and difficulties that this specific type of image imposes on the algorithm. The proposed modification consists of eight steps: 1) definition of the seed points, 2) conversion of the image to gray scale, 3) application of the watershed transformation, 4) computation of the centroid of each region, 5) construction of the graph, 6) application of the Random Walker algorithm, 7) smoothing and extraction of the perimeter of the regions of interest, and 8) overlay of the results.
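
A loose sketch of such a pipeline using off-the-shelf scikit-image building blocks is shown below; the `seed_mask` labeling convention (0 unknown, 1 background, 2 lesion) and the parameter values are assumptions, and the walker here is run directly on pixels rather than on the region graph used by the article's method.

```python
from skimage import color, filters, measure, morphology, segmentation

def caries_like_pipeline(rgb_image, seed_mask):
    """Simplified stand-in for the eight-step pipeline described above.
    seed_mask: 0 = unknown, 1 = background seed, 2 = lesion seed (step 1)."""
    gray = color.rgb2gray(rgb_image)                                     # step 2: grayscale
    regions = segmentation.watershed(filters.sobel(gray), markers=250)   # step 3: watershed
    centroids = [r.centroid for r in measure.regionprops(regions)]       # step 4: centroids
    # Steps 5-6: the article builds a region graph from the centroids; this
    # sketch instead runs the Random Walker directly on the pixel lattice.
    labels = segmentation.random_walker(gray, seed_mask, beta=130)
    lesion = morphology.binary_closing(labels == 2)                      # step 7: smooth mask
    perimeter = measure.find_contours(lesion.astype(float), 0.5)         # lesion perimeter
    overlay = segmentation.mark_boundaries(rgb_image, lesion.astype(int))  # step 8: overlay
    return overlay, perimeter, centroids
```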
