Publications by authors named "Christine Guillemot"

Light fields capture 3D scene information by recording the light rays emitted by a scene in various directions. They offer a more immersive perception than classic 2D images, but at the cost of huge data volumes. In this paper, we design a compact neural network representation for the light field compression task.


The volumetric representation of human interactions is a fundamental area in the development of immersive media production and telecommunication applications. Particularly in the context of rapidly advancing Extended Reality (XR) applications, volumetric data has proven to be an essential technology for future XR development. In this work, we present a new multimodal database to help advance the development of immersive technologies.


Conventional microscopy systems have limited depth of field, which often necessitates depth scanning techniques hindered by light scattering. Various techniques have been developed to address this challenge, but they have limited extended depth of field (EDOF) capabilities. To overcome this challenge, this study proposes an end-to-end optimization framework for building a computational EDOF microscope that combines a 4f microscopy optical setup incorporating learned optics at the Fourier plane and a post-processing deblurring neural network.
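The 4f model underlying such a setup can be sketched numerically: the first lens produces the object's Fourier transform at the Fourier plane, an element there modulates the spectrum, and the second lens transforms back. A minimal NumPy illustration, with an all-pass pupil standing in for the learned optics:

```python
import numpy as np

def fourier_plane_filter(obj, pupil_mask):
    """Simplified 4f-system model: the first lens yields the object's
    Fourier transform at the Fourier plane, where a mask (a placeholder
    for learned optics) modulates the spectrum; the second lens
    transforms back to the image plane."""
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    filtered = spectrum * pupil_mask
    return np.fft.ifft2(np.fft.ifftshift(filtered)).real

obj = np.zeros((64, 64))
obj[28:36, 28:36] = 1.0                 # a simple square "object"
open_pupil = np.ones((64, 64))          # all-pass mask
img = fourier_plane_filter(obj, open_pupil)
print(np.allclose(img, obj))            # an open pupil reproduces the object
```

Replacing `open_pupil` with a learned phase or amplitude mask is where the end-to-end optimization described above would act.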


Light field imaging, which captures both spatial and angular information, improves user immersion by enabling post-capture actions, such as refocusing and changing view perspective. However, light fields represent very large volumes of data with a lot of redundancy that coding methods try to remove. State-of-the-art coding methods indeed usually focus on improving compression efficiency and overlook other important features in light field compression such as scalability.


Deep generative models have proven to be effective priors for solving a variety of image processing problems. However, the learning of realistic image priors, based on a large number of parameters, requires a large amount of training data. It has been shown recently, with the so-called deep image prior (DIP), that randomly initialized neural networks can act as good image priors without learning.
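The DIP idea can be illustrated at toy scale: a small randomly initialized network, fed a fixed random code, is fitted by gradient descent to a single corrupted observation, with no training data involved. The two-layer net below is a deliberately tiny stand-in for the convolutional architectures actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single corrupted observation: a noisy 1-D signal (the only data used).
clean = np.sin(np.linspace(0, 2 * np.pi, 64))
noisy = clean + 0.3 * rng.standard_normal(64)

# Randomly initialized two-layer net mapping a fixed random code to a signal.
z = rng.standard_normal(16)
W1 = 0.1 * rng.standard_normal((32, 16))
W2 = 0.1 * rng.standard_normal((64, 32))

def forward(W1, W2):
    h = np.tanh(W1 @ z)
    return W2 @ h, h

out0, _ = forward(W1, W2)
loss0 = np.mean((out0 - noisy) ** 2)

lr = 0.05
for _ in range(300):                    # (early-stopped) gradient descent
    out, h = forward(W1, W2)
    g_out = 2.0 * (out - noisy) / out.size
    g_h = W2.T @ g_out                  # backprop through W2 before updating it
    W2 -= lr * np.outer(g_out, h)
    W1 -= lr * np.outer(g_h * (1.0 - h ** 2), z)

out, _ = forward(W1, W2)
loss = np.mean((out - noisy) ** 2)
print(loss < loss0)                     # the net has fitted the observation
```

In the actual DIP setting, early stopping matters because the network fits the natural image content before it fits the noise.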


Many computer vision applications rely on feature detection and description, hence the need for computationally efficient and robust 4D light field (LF) feature detectors and descriptors. In this paper, we propose a novel light field feature descriptor based on the Fourier disparity layer representation. After Harris feature detection in a scale-disparity space, the descriptor is extracted over a circular rather than a square neighborhood.


Graph-based transforms are powerful tools for signal representation and energy compaction. However, their use for high dimensional signals such as light fields poses obvious problems of complexity. To overcome this difficulty, one can consider local graph transforms defined on supports of limited dimension, which may however not allow us to fully exploit long-term signal correlation.
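A local graph transform on a small support can be sketched directly: the basis functions are the eigenvectors of the graph Laplacian, and a signal that is smooth on the graph compacts its energy into the low-frequency coefficients. A toy example on a path graph (a minimal illustration, not a light field support):

```python
import numpy as np

# Path graph on 8 nodes: a small local support for the transform.
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A           # combinatorial graph Laplacian

# The transform basis is the Laplacian eigenvector matrix (graph Fourier basis).
_, U = np.linalg.eigh(L)                 # eigenvalues sorted ascending

smooth = np.linspace(0.0, 1.0, n)        # a smooth signal on the graph
coeffs = U.T @ smooth                    # forward graph transform
energy_low = np.sum(coeffs[:2] ** 2)     # two lowest graph frequencies
print(energy_low / np.sum(coeffs ** 2))  # most of the energy is compacted there
```

The complexity issue described above comes from this eigendecomposition: its cost grows cubically with the support size, which is what motivates limiting the transforms to local supports.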


We address the problem of light field dimensionality reduction for compression. We describe a local low rank approximation method using a parametric disparity model. The local support of the approximation is defined by super-rays.
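The low-rank structure exploited here can be illustrated with a plain truncated-SVD approximation, which by the Eckart-Young theorem is the best rank-k fit; this sketch omits the parametric disparity model and the super-ray supports:

```python
import numpy as np

rng = np.random.default_rng(1)

# Columns: vectorized views of the same region, nearly identical up to
# small variations, so the stacked matrix is close to rank one.
base = rng.standard_normal(100)
views = np.stack([base + 0.01 * rng.standard_normal(100) for _ in range(9)],
                 axis=1)

def low_rank_approx(M, k):
    """Best rank-k approximation in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

approx = low_rank_approx(views, 1)
rel_err = np.linalg.norm(views - approx) / np.linalg.norm(views)
print(rel_err)               # rank one already captures almost all the energy
```

Only the rank-k factors need to be stored, which is the dimensionality reduction exploited for compression.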


Graph-based transforms have been shown to be powerful tools in terms of image energy compaction. However, when the size of the support increases to best capture signal dependencies, computing the basis functions rapidly becomes intractable. This problem is particularly acute for high-dimensional imaging data such as light fields.


In this article, we present a very lightweight neural network architecture, trained on stereo pairs, which performs view synthesis from a single image. With the growing success of multi-view formats, this problem is increasingly relevant. The network returns a prediction built from disparity estimation and fills in wrongly predicted regions using an occlusion handling technique.


Tone Mapping Operators (TMO) designed for videos can be classified into two categories. In a first approach, TMOs are temporally filtered to reduce temporal artifacts and provide Standard Dynamic Range (SDR) content with improved temporal consistency. However, this does not improve the SDR coding Rate Distortion (RD) performance.
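For illustration, a minimal global log operator (a generic example, not one of the TMOs studied here) shows the basic dynamic-range compression a TMO performs, before any temporal filtering or RD optimization enters the picture:

```python
import numpy as np

def log_tmo(hdr, eps=1e-6):
    """Global log tone mapping: compress HDR luminance into [0, 1].
    (A generic illustrative operator, not one from the paper.)"""
    log_l = np.log(hdr + eps)
    return (log_l - log_l.min()) / (log_l.max() - log_l.min())

hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0])   # five orders of magnitude
sdr = log_tmo(hdr)
print(sdr)   # monotone, spans [0, 1]
```

A video TMO applies such a curve per frame, which is why per-frame curve changes create the temporal artifacts the filtering approach tries to suppress.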


This paper describes a set of neural network architectures, called Prediction Neural Networks Set (PNNS), based on both fully-connected and convolutional neural networks, for intra image prediction. The choice of neural network for predicting a given image block depends on the block size, hence does not need to be signalled to the decoder. It is shown that, while fully-connected neural networks give good performance for small block sizes, convolutional neural networks provide better predictions in large blocks with complex textures.


This paper addresses the problem of energy compaction of dense 4D light fields by designing geometry-aware local graph-based transforms. Local graphs are constructed on super-rays, which can be seen as groupings of spatially and, depending on scene geometry, angularly correlated pixels. Both non-separable and separable transforms are considered.


In this paper, we propose a learning-based depth estimation framework suitable for both densely and sparsely sampled light fields. The proposed framework consists of three processing steps: initial depth estimation, fusion with occlusion handling, and refinement. The estimation can be performed from a flexible subset of input views.


In this paper, we present a new Light Field representation for efficient Light Field processing and rendering called Fourier Disparity Layers (FDL). The proposed FDL representation samples the Light Field in the depth (or equivalently the disparity) dimension by decomposing the scene as a discrete sum of layers. The layers can be constructed from various types of Light Field inputs, including a set of sub-aperture images, a focal stack, or even a combination of both.
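The layer mechanism rests on the Fourier shift theorem: shifting a layer by its disparity is a phase ramp in the frequency domain, and a rendered view is a sum of shifted layers. A simplified 1-D sketch of this mechanism (not the full FDL construction):

```python
import numpy as np

def shift_layer(layer, d):
    """Shift a 1-D layer by disparity d (in samples) via the Fourier
    shift theorem: a spatial shift is a phase ramp in frequency."""
    freqs = np.fft.fftfreq(layer.size)
    return np.fft.ifft(np.fft.fft(layer)
                       * np.exp(-2j * np.pi * freqs * d)).real

# Two layers at different depths; a rendered view is their shifted sum,
# each layer moved according to its own disparity.
near = np.zeros(32); near[4] = 1.0
far = np.zeros(32); far[20] = 1.0
view = shift_layer(near, 3) + shift_layer(far, 1)
print(np.argmax(shift_layer(near, 3)))   # the impulse moved from 4 to 7
```

Because the shift is a multiplication in the frequency domain, views at arbitrary (even sub-pixel) positions can be rendered from the same layer set.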


The term "plenoptic" comes from the Latin plenus ("full") + optic. The plenoptic function is the 7-dimensional function representing the intensity of the light observed from every position and direction in 3-dimensional space. The plenoptic function thus makes it possible to define the direction of every ray in the light-field vector function.
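In symbols, the plenoptic function records radiance as a function of three spatial coordinates, two viewing angles, wavelength, and time:

```latex
L = L(x, y, z, \theta, \phi, \lambda, t)
```

Dropping wavelength and time and parameterizing each ray by its intersections with two parallel planes yields the usual 4D light field $L(u, v, s, t)$.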


Light field imaging has recently seen a resurgence of interest due to the availability of practical light field capturing systems that offer a wide range of applications in the field of computer vision. However, capturing high-resolution light fields remains technologically challenging, since an increase in angular resolution is often accompanied by a significant reduction in spatial resolution. This paper describes a learning-based spatial light field super-resolution method that allows the restoration of the entire light field with consistency across all angular views.


Building on advances in low-rank matrix completion, this article presents a novel method for propagating the inpainting of the central view of a light field to all the other views. After generating a set of warped versions of the inpainted central view with random homographies, both the original light field views and the warped ones are vectorized and concatenated into a matrix. Because of the redundancy between the views, the matrix satisfies a low-rank assumption, enabling us to fill the region to inpaint with low-rank matrix completion.
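The core completion step can be sketched with a simple hard-impute scheme, alternating a rank-k SVD projection with re-imposing the observed entries; this is an illustrative stand-in for the completion algorithm, and the homography warping is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground truth: a rank-2 matrix whose columns play the role of vectorized
# views; ~20% of the entries are missing (the region to inpaint).
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 9))
mask = rng.random(M.shape) > 0.2          # True where entries are observed

def hard_impute(M, mask, k=2, iters=300):
    """Alternate a rank-k SVD projection with re-imposing observed entries."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :k] * s[:k]) @ Vt[:k]   # project onto rank-k matrices
        X = np.where(mask, M, X)          # keep the known entries
    return X

X = hard_impute(M, mask)
rel_err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
print(rel_err)                            # missing entries are recovered
```

The redundancy between views is exactly what makes the low-rank assumption hold, so the masked region is filled consistently across all columns at once.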


Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to efficiently compress images by exploiting inter-images correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation.


This paper addresses the problem of designing a global tone mapping operator for rate distortion optimized backward compatible compression of high dynamic range (HDR) images. We address the problem of tone mapping design for two different use cases leading to two different minimization problems. The first problem considered is the minimization of the distortion on the reconstructed HDR signal under a rate constraint on the standard dynamic range (SDR) layer.


Most face super-resolution methods assume that low- and high-resolution manifolds have similar local geometrical structure; hence, learn local models on the low-resolution manifold (e.g., sparse or locally linear embedding models), which are then applied on the high-resolution manifold.
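The locally linear embedding model mentioned above reconstructs each sample as an affine combination of its neighbors, with weights summing to one obtained from a small regularized linear system; in neighbor-embedding super-resolution, these weights, computed on the low-resolution manifold, are reused on the high-resolution one. A minimal sketch of the weight computation:

```python
import numpy as np

def lle_weights(x, neighbors):
    """Reconstruction weights of x from its neighbors, constrained to
    sum to one (the local model transferred from the low- to the
    high-resolution manifold in neighbor-embedding super-resolution)."""
    Z = neighbors - x                          # center neighbors on the query
    G = Z @ Z.T                                # local Gram matrix
    G += 1e-8 * np.trace(G) * np.eye(len(G))   # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                         # enforce sum-to-one

x = np.array([0.5, 0.5])
nbrs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = lle_weights(x, nbrs)
print(np.allclose(w @ nbrs, x))                # weights reconstruct x here
```

The paper's observation is that applying these low-resolution weights directly on the high-resolution manifold is only justified when the two manifolds share the same local geometry.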


In this paper, we propose a novel scheme for scalable image coding based on the concept of epitome. An epitome can be seen as a factorized representation of an image. Focusing on spatial scalability, the enhancement layer of the proposed scheme contains only the epitome of the input image.


Graph-based representation (GBR) has recently been proposed for describing color and geometry of multiview video content. The graph vertices represent the color information, while the edges represent the geometry information, i.e.


This paper presents a color inter-layer prediction (ILP) method for scalable coding of high dynamic range (HDR) video content with a low dynamic range (LDR) base layer. Relying on the assumption of hue preservation between the colors of an HDR image and its LDR tone mapped version, we derived equations for predicting the chromatic components of the HDR layer given the decoded LDR layer. Two color representations are studied.


Local learning of sparse image models has proved to be very effective to solve inverse problems in many computer vision applications. To learn such models, the data samples are often clustered using the K-means algorithm with the Euclidean distance as a dissimilarity metric. However, the Euclidean distance may not always be a good dissimilarity measure for comparing data samples lying on a manifold.
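The clustering step described here is plain K-means with the Euclidean distance; a compact NumPy version, with toy 2-D samples standing in for image patches:

```python
import numpy as np

rng = np.random.default_rng(3)

def kmeans(X, k, iters=20):
    """Plain K-means with the Euclidean distance as dissimilarity metric."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Euclidean distance from every sample to every center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers, keeping the old one if a cluster empties out.
        centers = np.stack([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# Two well-separated groups of toy 2-D samples.
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))])
labels, centers = kmeans(X, 2)
print(len(np.unique(labels[:50])), len(np.unique(labels[50:])))
```

The paper's point is that the Euclidean metric in `dists` is the questionable ingredient when the samples lie on a manifold, where geodesic rather than straight-line distances reflect similarity.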
