We present a compression scheme for multiview imagery that facilitates high scalability and accessibility of the compressed content. Our scheme relies upon constructing, at a single base view, a disparity model for a group of views, and then utilizing this base-anchored model to infer disparity at all views belonging to the group. We employ a hierarchical disparity-compensated inter-view transform in which the analysis and synthesis filters are applied along the geometric flows defined by the base-anchored disparity model. The output of this inter-view transform, along with the disparity information, is subjected to spatial wavelet transforms and embedded block-based coding. Rate-distortion results reveal superior performance to the x265 anchor chosen by the JPEG Pleno standards activity for the coding of multiview imagery captured by high-density camera arrays.
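The base-anchored inter-view transform can be illustrated with a minimal lifting sketch. This is not the paper's actual filter bank: it assumes a single predict step (no update step), integer-valued disparities, and horizontal-only warping, but it shows how a disparity-compensated analysis/synthesis pair remains perfectly invertible:

```python
import numpy as np

def warp(view, disparity):
    """Shift each pixel horizontally by its (integer) disparity, clamping
    at the image border -- a crude disparity-compensated prediction."""
    h, w = view.shape
    cols = np.clip(np.arange(w)[None, :] + disparity, 0, w - 1)
    return view[np.arange(h)[:, None], cols]

def analysis(base, neighbour, disparity):
    """Predict step of a Haar-like lifting transform: the neighbour view
    is predicted from the base view warped along the base-anchored
    disparity; the residual becomes the sparse high-pass band, while the
    base view itself serves as the low-pass band."""
    high = neighbour - warp(base, disparity)
    return base, high

def synthesis(low, high, disparity):
    """Invert the predict step to recover the neighbour view exactly."""
    return high + warp(low, disparity)
```

Because synthesis simply inverts the predict step, the neighbour view is recovered exactly whenever the same disparity field is used on both sides; a hierarchical version would apply such steps recursively across the group of views.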


Source: http://dx.doi.org/10.1109/TIP.2019.2894968


Similar Publications

Background: Decoding motor intentions from electroencephalogram (EEG) signals is a critical component of motor imagery-based brain-computer interface (MI-BCIs). In traditional EEG signal classification, effectively utilizing the valuable information contained within the electroencephalogram is crucial.

Objectives: To further optimize the use of information from various domains, we propose a novel framework based on multi-domain feature rotation transformation and stacking ensemble for classifying MI tasks.
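The stacking idea behind the proposed framework can be sketched in a few lines. This is a hypothetical simplification, not the authors' method: it assumes the base learners are already trained and that the meta-learner is a fixed linear map over their concatenated class probabilities:

```python
import numpy as np

def stack_predictions(base_preds, meta_weights):
    """Stacking ensemble, simplified: each base learner contributes its
    class-probability matrix; these are concatenated into meta-features
    and combined by a (pre-trained) linear meta-learner."""
    meta_features = np.concatenate(base_preds, axis=1)  # (n, k * classes)
    scores = meta_features @ meta_weights               # linear meta-learner
    return scores.argmax(axis=1)                        # final class labels
```

In a full pipeline the meta-learner's weights would themselves be fit on out-of-fold base-learner predictions to avoid leaking training data into the ensemble.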


Convolutional neural networks (CNNs) have been widely utilized for decoding motor imagery (MI) from electroencephalogram (EEG) signals. However, extracting discriminative spatial-temporal-spectral features from low signal-to-noise-ratio EEG signals remains challenging. This paper proposes MBMSNet, a multi-branch, multi-scale, and multi-view CNN with a lightweight temporal attention mechanism for EEG-based MI decoding.

Article Synopsis
  • The technique uses a hierarchical approach, employing a plane-sweep algorithm combined with semi-global optimization to accurately create 3D models of the large scenes typically seen in low-altitude drone images.
  • In tests, FaSS-MVS produces 3D data with accuracy comparable to advanced offline methods such as COLMAP, but does so much faster, processing images at 1-2 frames per second with significantly less computational time.
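The core plane-sweep idea can be sketched as a toy brute-force search over integer disparity hypotheses for a rectified image pair. FaSS-MVS itself sweeps depth planes over full camera geometry and refines the result with semi-global optimization, neither of which is shown in this assumed simplification:

```python
import numpy as np

def plane_sweep_disparity(left, right, max_disp):
    """Brute-force sweep: for each disparity hypothesis d, shift the right
    image, score it against the left image with an absolute-difference
    cost, and keep the winning hypothesis per pixel."""
    h, w = left.shape
    costs = np.empty((max_disp + 1, h, w))
    for d in range(max_disp + 1):
        shifted = np.roll(right, d, axis=1)  # hypothesise disparity d
        costs[d] = np.abs(left - shifted)    # per-pixel matching cost
    return costs.argmin(axis=0)              # winner-takes-all disparity map
```

Real pipelines replace the per-pixel cost with window-based or learned scores and regularize the cost volume before extracting depth.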
Article Synopsis
  • Research in motor imagery using EEG signals is crucial for brain-computer interfaces (BCI), but current deep-learning methods struggle to leverage the complex relationships between brain regions.
  • The study introduces a new model called MGCANet, which incorporates multi-view graph convolution and attention mechanisms to better aggregate and analyze EEG data from different brain areas for improved classification accuracy.
  • Experimental results show that MGCANet achieved impressive accuracies of 78.26% and 73.68% on two public datasets, outperforming existing classification methods and offering fresh insights into motor imagery decoding.
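A single graph-convolution step of the kind MGCANet builds on can be sketched as follows; the adjacency matrix, symmetric normalisation, and weight shapes here are illustrative assumptions, not MGCANet's actual architecture:

```python
import numpy as np

def graph_conv(x, adj, weight):
    """One graph-convolution step: mix features across connected EEG
    channels via a self-looped, symmetrically normalised adjacency,
    then project the mixed features with `weight`."""
    a = adj + np.eye(adj.shape[0])        # add self-loops
    d = a.sum(axis=1)                     # node degrees
    a_norm = a / np.sqrt(np.outer(d, d))  # D^{-1/2} A D^{-1/2}
    return a_norm @ x @ weight
```

Here `x` holds one feature vector per EEG channel and `adj` encodes which channels are treated as neighbours; a multi-view variant would run such layers over several adjacency definitions and fuse the results with attention.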

Multi-View 3D object detection (MV3D) has made tremendous progress by leveraging multiple perspective features from surrounding cameras. Despite promising prospects in various applications, accurately detecting objects in 3D space from camera views is extremely difficult due to the ill-posed nature of monocular depth estimation. Recently, Graph-DETR3D introduced a novel graph-based 3D-2D query paradigm for aggregating multi-view images for 3D object detection and achieved competitive performance.

