Scalable feature extraction for coarse-to-fine JPEG 2000 image classification.

IEEE Trans Image Process

Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, 1348 Louvain-la-Neuve, Belgium.

Published: September 2011

In this paper, we address the issues of analyzing and classifying JPEG 2000 code-streams. An original representation, called integral volume, is first proposed to compute local image features progressively from the compressed code-stream, on any spatial image area, regardless of code-block borders. Then, a JPEG 2000 classifier is presented that uses integral volumes to learn an ensemble of randomized trees. Several classification tasks are performed on various JPEG 2000 image databases, and the results are in the same range as those obtained in the literature with noncompressed versions of these databases. Finally, a cascade of such classifiers is considered in order to specifically address the image retrieval issue, i.e., bi-class problems characterized by a highly skewed class distribution. An efficient way to learn and optimize such a cascade is proposed. We show that staying in a JPEG 2000 framework, initially seen as a constraint to avoid heavy decoding operations, is actually an advantage, as it can benefit from the multiresolution and multilayer paradigms inherently present in this compression standard. In particular, unlike other existing cascaded retrieval systems, the features used along our cascade are increasingly discriminant and therefore lead to a better complexity-versus-performance tradeoff.
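
The integral volume named in the abstract generalizes the classical integral image (summed-area table) so that a feature sum over any rectangular spatial area can be read in constant time, independently of code-block borders. Below is a minimal NumPy sketch of that idea, assuming the local features extracted from the code-stream are stacked as a (feature, row, column) array; the function names and layout are illustrative, not the paper's implementation.

```python
import numpy as np

def integral_volume(feat):
    """3-D summed-area table, zero-padded so window queries need no border checks.

    feat is assumed to be a (subbands_or_features, rows, cols) stack of local
    feature maps derived from the JPEG 2000 code-stream (illustrative layout).
    """
    iv = np.zeros(tuple(s + 1 for s in feat.shape), dtype=np.float64)
    iv[1:, 1:, 1:] = np.asarray(feat, dtype=np.float64).cumsum(0).cumsum(1).cumsum(2)
    return iv

def window_sum(iv, z0, z1, y0, y1, x0, x1):
    """Sum of feat[z0:z1, y0:y1, x0:x1] in O(1) by 3-D inclusion-exclusion."""
    return (iv[z1, y1, x1] - iv[z0, y1, x1] - iv[z1, y0, x1] - iv[z1, y1, x0]
            + iv[z0, y0, x1] + iv[z0, y1, x0] + iv[z1, y0, x0] - iv[z0, y0, x0])

# Toy check against a direct sum.
feat = np.random.default_rng(0).random((4, 32, 32))
iv = integral_volume(feat)
assert np.isclose(window_sum(iv, 1, 3, 5, 20, 7, 30), feat[1:3, 5:20, 7:30].sum())
```

Because each window query costs a fixed handful of table lookups, aggregating features over many candidate windows, or at several resolutions along a coarse-to-fine cascade, stays cheap regardless of window size.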

DOI: 10.1109/TIP.2011.2126584 (http://dx.doi.org/10.1109/TIP.2011.2126584)

Publication Analysis

Top Keywords

jpeg 2000: 20
2000 image: 8
jpeg: 5
image: 5
scalable feature: 4
feature extraction: 4
extraction coarse-to-fine: 4
coarse-to-fine jpeg: 4
image classification: 4

Similar Publications

State-of-the-Art Trends in Data Compression: COMPROMISE Case Study.

Entropy (Basel)

November 2024

Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia.

After a boom that coincided with the advent of the internet, digital cameras, digital video and audio storage and playback devices, the research on data compression has rested on its laurels for a quarter of a century. Domain-dependent lossy algorithms of the time, such as JPEG, AVC, MP3 and others, achieved remarkable compression ratios and encoding and decoding speeds with acceptable data quality, which has kept them in common use to this day. However, recent computing paradigms such as cloud computing, edge computing, the Internet of Things (IoT), and digital preservation have gradually posed new challenges, and, as a consequence, development trends in data compression are focusing on concepts that were not previously in the spotlight.


A case study on entropy-aware block-based linear transforms for lossless image compression.

Sci Rep

November 2024

Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, 2000, Maribor, Slovenia.

Data compression algorithms tend to reduce information entropy, which is crucial, especially in the case of images, as they are data intensive. In this regard, lossless image data compression is especially challenging. Many popular lossless compression methods incorporate predictions and various types of pixel transformations, in order to reduce the information entropy of an image.
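
As a deliberately simplified illustration of how prediction lowers first-order entropy, the sketch below compares the empirical entropy of raw pixel values with that of left-neighbour prediction residuals on a synthetic gradient image; both the image and the differencing step are assumptions for the example, not the block-based transforms studied in the cited paper.

```python
import numpy as np

def first_order_entropy(values):
    """Empirical Shannon entropy in bits per symbol."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic 8-bit image: smooth horizontal gradient plus mild noise.
rng = np.random.default_rng(0)
img = np.clip(np.linspace(0, 255, 64)[None, :] + rng.integers(-3, 4, (64, 64)),
              0, 255).astype(np.int16)

# Left-neighbour prediction: residuals concentrate around zero.
residuals = np.diff(img, axis=1)

print("raw pixels :", first_order_entropy(img.ravel()), "bits/symbol")
print("residuals  :", first_order_entropy(residuals.ravel()), "bits/symbol")
```

On data like this the residual entropy is noticeably lower than the raw-pixel entropy, which is the gain that prediction-based lossless coders build on.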


This paper provides a comprehensive study on features and performance of different ways to incorporate neural networks into lifting-based wavelet-like transforms, within the context of fully scalable and accessible image compression. Specifically, we explore different arrangements of lifting steps, as well as various network architectures for learned lifting operators. Moreover, we examine the impact of the number of learned lifting steps, the number of channels, the number of layers and the support of kernels in each learned lifting operator.
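
For context on the lifting structure referred to above, the sketch below writes one 1-D level of the classical LeGall 5/3 wavelet as explicit predict and update lifting steps; fixed linear operators of this kind are what learned lifting schemes replace with small networks. Periodic boundary handling is used here for brevity, whereas JPEG 2000 uses symmetric extension.

```python
import numpy as np

def legall53_forward(x):
    """One 1-D level of the LeGall 5/3 wavelet written as lifting steps.

    Assumes an even-length signal and periodic boundaries (np.roll) for
    brevity; standards-compliant codecs use symmetric extension instead.
    """
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    # Predict: estimate each odd sample from its two even neighbours.
    detail = odd - 0.5 * (even + np.roll(even, -1))
    # Update: correct the even samples so the low-pass band preserves the mean.
    approx = even + 0.25 * (np.roll(detail, 1) + detail)
    return approx, detail

approx, detail = legall53_forward(np.arange(16))
print(approx, detail)
```

In a learned variant, the 0.5 and 0.25 filters above would be replaced by trained operators while the invertible predict/update structure is kept.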


New higher-count-rate, integrating, large-area X-ray detectors with framing rates as high as 17,400 images per second are beginning to be available. These will soon be used for specialized MX experiments but will require optimal lossy compression algorithms to enable systems to keep up with data throughput. Some information may be lost.

Article Synopsis
  • This study introduces a new method for compressing dense light field images captured by Plenoptic 2.0 cameras, using advanced statistical models like the 5-D Epanechnikov Kernel (a brief kernel sketch follows this synopsis).
  • To address limitations in traditional modeling techniques, the researchers propose a novel 5-D Epanechnikov Mixture-of-Experts approach that uses Gaussian Initialization, which performs better than existing models like 5-D Gaussian Mixture Regression.
  • Experimental results show that this new compression method produces higher-quality rendered images than High Efficiency Video Coding (HEVC) and JPEG 2000, especially at low bitrates below 0.06 bits per pixel (bpp).
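
The Epanechnikov kernel mentioned in the synopsis has the closed form K(u) = (3/4)(1 - u^2) for |u| <= 1 and 0 otherwise. The sketch below evaluates it together with a separable product-form extension to five dimensions; the separable form is an illustrative assumption, not necessarily the exact 5-D kernel used in the cited work.

```python
import numpy as np

def epanechnikov_1d(u):
    """Epanechnikov kernel: K(u) = 3/4 (1 - u^2) for |u| <= 1, else 0."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def epanechnikov_product(points):
    """Separable d-dimensional extension (product of 1-D kernels per coordinate).

    This product form is an illustrative assumption, not the cited paper's 5-D model.
    """
    return np.prod(epanechnikov_1d(points), axis=-1)

# Evaluate the product kernel at two 5-D points (e.g. light-field coordinates).
pts = np.array([[0.0, 0.0, 0.0, 0.0, 0.0],
                [0.5, -0.2, 0.1, 0.9, 0.0]])
print(epanechnikov_product(pts))
```

Unlike a Gaussian, this kernel has compact support, so each sample only influences a bounded neighbourhood of the modeled signal.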
