In this paper, we address the problem of analyzing and classifying JPEG 2000 code-streams. An original representation, called the integral volume, is first proposed to compute local image features progressively from the compressed code-stream, over any spatial image area, regardless of code-block borders. Then, a JPEG 2000 classifier is presented that uses integral volumes to learn an ensemble of randomized trees. Several classification tasks are performed on various JPEG 2000 image databases, and the results are in the same range as those reported in the literature for noncompressed versions of these databases. Finally, a cascade of such classifiers is considered in order to specifically address the image retrieval problem, i.e., bi-class problems characterized by a highly skewed distribution. An efficient way to learn and optimize such a cascade is proposed. We show that staying within a JPEG 2000 framework, initially seen as a constraint to avoid heavy decoding operations, is actually an advantage, as it can benefit from the multiresolution and multilayer paradigms inherent in this compression standard. In particular, unlike other existing cascaded retrieval systems, the features used along our cascade are increasingly discriminative and therefore lead to a better tradeoff between complexity and performance.
DOI: http://dx.doi.org/10.1109/TIP.2011.2126584
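The integral volume generalizes the classic integral image to a stack of feature planes. As a minimal sketch of the underlying mechanism only, not the paper's exact construction (which builds the representation progressively from code-stream data), a 3-D summed-area table lets the sum of features over any cuboid be evaluated in O(1):

```python
import numpy as np

def integral_volume(vol):
    """3-D summed-area table of a (planes, rows, cols) feature volume.

    `vol` is assumed to hold per-location feature maps, e.g. wavelet
    coefficient energies with one plane per subband.
    """
    s = vol.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)
    # Zero-pad on the low side so box sums need no boundary cases.
    return np.pad(s, ((1, 0), (1, 0), (1, 0)))

def box_sum(iv, p0, p1, r0, r1, c0, c1):
    """Sum of vol[p0:p1, r0:r1, c0:c1] in O(1) by inclusion-exclusion."""
    return (iv[p1, r1, c1]
            - iv[p0, r1, c1] - iv[p1, r0, c1] - iv[p1, r1, c0]
            + iv[p0, r0, c1] + iv[p0, r1, c0] + iv[p1, r0, c0]
            - iv[p0, r0, c0])
```

After `iv = integral_volume(vol)`, the call `box_sum(iv, 0, 4, 10, 20, 10, 20)` returns the same value as `vol[0:4, 10:20, 10:20].sum()` at constant cost per query, which is what makes feature extraction over arbitrary spatial areas cheap.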
Entropy (Basel), November 2024
Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia.
After a boom that coincided with the advent of the internet, digital cameras, and digital video and audio storage and playback devices, research on data compression rested on its laurels for a quarter of a century. The domain-dependent lossy algorithms of that era, such as JPEG, AVC, and MP3, achieved remarkable compression ratios and encoding and decoding speeds with acceptable data quality, which has kept them in common use to this day. However, recent computing paradigms such as cloud computing, edge computing, the Internet of Things (IoT), and digital preservation have gradually posed new challenges, and, as a consequence, development trends in data compression are focusing on concepts that were not previously in the spotlight.
Sci Rep, November 2024
Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia.
Data compression algorithms aim to reduce information entropy, which is crucial for data-intensive content such as images. In this regard, lossless image compression is especially challenging. Many popular lossless compression methods therefore incorporate prediction and various types of pixel transformations in order to reduce the information entropy of an image.
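As a toy illustration of this entropy-reduction principle (not any specific codec's predictor), a simple left-neighbor prediction turns a smooth gradient image into near-constant residuals, and the empirical entropy drops accordingly:

```python
import numpy as np

def shannon_entropy(values):
    """Empirical Shannon entropy in bits per symbol of an integer array."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Synthetic smooth gradient image with 8-bit values (a stand-in for
# the locally correlated content typical of natural images).
x = np.arange(256)
img = np.add.outer(x, x) // 2  # values 0..255, varying slowly

# Left-neighbor prediction: each residual is a pixel minus its left neighbor.
residuals = np.diff(img, axis=1)

print(f"raw entropy:      {shannon_entropy(img):.2f} bits/pixel")
print(f"residual entropy: {shannon_entropy(residuals):.2f} bits/pixel")
```

On this example the residuals carry roughly 1 bit/pixel versus about 7 bits/pixel for the raw values; practical predictors such as Paeth (PNG) or MED (JPEG-LS) exploit two-dimensional context to the same end.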
IEEE Trans Image Process, October 2024
This paper provides a comprehensive study of the features and performance of different ways to incorporate neural networks into lifting-based, wavelet-like transforms within the context of fully scalable and accessible image compression. Specifically, we explore different arrangements of lifting steps, as well as various network architectures for learned lifting operators. Moreover, we examine the impact of the number of learned lifting steps, the number of channels, the number of layers, and the support of the kernels in each learned lifting operator.
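For context on the structure being learned: a lifting scheme factors a wavelet transform into alternating predict and update steps. A minimal fixed-filter sketch of one forward level, modeled on JPEG 2000's reversible 5/3 filter with periodic boundary handling for brevity, is shown below; learned variants of the kind studied here replace the two hand-designed filters with small trained networks.

```python
import numpy as np

def lifting_53_forward(x):
    """One forward level of LeGall 5/3 lifting on a 1-D integer signal.

    Assumes an even-length signal. Periodic boundary handling (np.roll)
    is used for brevity; JPEG 2000 itself uses symmetric extension.
    """
    even = x[0::2].astype(np.int32)
    odd = x[1::2].astype(np.int32)
    # Predict step: detail coefficients as odd samples minus the mean
    # of their two even neighbors (floor division).
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # Update step: approximation coefficients as even samples plus a
    # rounded correction from the two neighboring details.
    s = even + ((np.roll(d, 1) + d + 2) >> 2)
    return s, d
```

In the learned setting, the predict and update expressions become the outputs of small convolutional networks applied to the even and detail sequences, while the split/merge structure, and hence exact invertibility, is preserved.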
New higher-count-rate, integrating, large-area X-ray detectors with framing rates as high as 17,400 images per second are beginning to become available. These will soon be used for specialized MX experiments but will require optimal lossy compression algorithms to enable systems to keep up with the data throughput; some information may be lost in the process.
IEEE Trans Image Process, July 2024