Lossy Compression of Individual Sequences Revisited: Fundamental Limits of Finite-State Encoders.

Entropy (Basel)

The Viterbi Faculty of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Technion City, Haifa 3200003, Israel.

Published: January 2024

We extend Ziv and Lempel's model of finite-state encoders to the realm of lossy compression of individual sequences. In particular, the model of the encoder includes a finite-state reconstruction codebook followed by an information lossless finite-state encoder that compresses the reconstruction codeword with no additional distortion. We first derive two different lower bounds to the compression ratio, which depend on the number of states of the lossless encoder. Both bounds are asymptotically achievable by conceptually simple coding schemes. We then show that when the number of states of the lossless encoder is large enough in terms of the reconstruction block length, the performance can be improved, sometimes significantly so. In particular, the improved performance is achievable using a random-coding ensemble that is universal, not only in terms of the source sequence but also in terms of the distortion measure.
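
The lossless stage of this model is rooted in the classical Ziv–Lempel theory of finite-state compressibility, which is asymptotically attained by LZ78 incremental parsing. As background only (this is the standard construction, not the lossy scheme analyzed in the paper), the following Python sketch computes the LZ78 phrase count c(n) of a sequence and the standard estimate c(n)·log2 c(n)/n of its finite-state compressibility in bits per symbol:

```python
# Background sketch: LZ78 incremental parsing as an estimate of the
# finite-state (lossless) compressibility of an individual sequence.
# This illustrates the classical Ziv-Lempel machinery the paper builds on,
# not the lossy finite-state encoder analyzed in the paper itself.
import math
import random

def lz78_phrase_count(seq):
    """Parse seq into distinct LZ78 phrases; return the phrase count c(n)."""
    dictionary = {""}              # the empty phrase is always present
    phrase = ""
    count = 0
    for symbol in seq:
        phrase += symbol
        if phrase not in dictionary:   # new phrase: record it and restart
            dictionary.add(phrase)
            count += 1
            phrase = ""
    if phrase:                     # trailing, possibly repeated phrase
        count += 1
    return count

def fs_compressibility_estimate(seq):
    """Return c(n) * log2(c(n)) / n, the asymptotic LZ78 code length per symbol."""
    n = len(seq)
    c = lz78_phrase_count(seq)
    return c * math.log2(c) / n if c > 1 else 0.0

# A highly regular sequence parses into few phrases and lands far below
# 1 bit/symbol; a random binary sequence approaches 1 bit/symbol as n
# grows (convergence is slow).
print(fs_compressibility_estimate("01" * 5000))
rnd = "".join(random.choice("01") for _ in range(10000))
print(fs_compressibility_estimate(rnd))
```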


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10887498
DOI: http://dx.doi.org/10.3390/e26020116

Similar Publications

This paper investigates the role of communication in improving coordination within robot swarms, focusing on a paradigm where learning and execution occur simultaneously in a decentralized manner. We highlight the role communication can play in addressing the credit assignment problem (attributing each individual's contribution to the overall performance), and how communication itself can be influenced by that problem. We propose a taxonomy of existing and future works on communication, focusing on information selection and physical abstraction as the principal axes of classification: from low-level lossless compression, with raw signal extraction and processing, to high-level lossy compression with structured communication models.

Today, huge amounts of time series data are sensed continuously by AIoT devices, transmitted to edge nodes, and onward to data centers. Transmitting, storing, and processing these data consumes considerable energy. Data compression technologies are commonly used to reduce the data size and thus save energy, as the sketch below illustrates.
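
As a minimal illustration of the general technique involved (not the cited paper's method), delta encoding followed by a general-purpose compressor often shrinks slowly varying sensor readings substantially:

```python
# Minimal sketch of a common time-series compression idea: delta-encode
# slowly varying sensor readings, then compress the small residuals.
import math
import struct
import zlib

# Synthetic stand-in for slowly varying sensor readings.
readings = [int(1000 + 50 * math.sin(i / 20)) for i in range(10_000)]

raw = struct.pack(f"{len(readings)}i", *readings)
deltas = [readings[0]] + [b - a for a, b in zip(readings, readings[1:])]
delta_packed = struct.pack(f"{len(deltas)}i", *deltas)

print(len(zlib.compress(raw)))           # compressed size of raw samples
print(len(zlib.compress(delta_packed)))  # deltas typically compress far better
```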

State-of-the-Art Trends in Data Compression: COMPROMISE Case Study.

Entropy (Basel)

November 2024

Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia.

After a boom that coincided with the advent of the internet, digital cameras, and digital video and audio storage and playback devices, research on data compression rested on its laurels for a quarter of a century. Domain-dependent lossy algorithms of that era, such as JPEG, AVC, MP3 and others, achieved remarkable compression ratios and encoding and decoding speeds with acceptable data quality, which has kept them in common use to this day. However, recent computing paradigms such as cloud computing, edge computing, the Internet of Things (IoT), and digital preservation have gradually posed new challenges, and, as a consequence, development trends in data compression are focusing on concepts that were not previously in the spotlight.

Adaptive Compression and Reconstruction for Multidimensional Medical Image Data: A Hybrid Algorithm for Enhanced Image Quality.

J Imaging Inform Med

December 2024

Department of Computer Science and Engineering, College of Engineering, Anna University, Guindy, Chennai, Tamilnadu, India.

Some spatial regions within an image take priority over adjacent areas, especially in medical images (MI), where minute details can have significant clinical implications. This research addresses the challenge of compressing medical images without compromising critical information by proposing an adaptive compression algorithm. The algorithm integrates a modified image enhancement module, clustering-based segmentation, and a variety of lossless and lossy compression techniques.
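
As a generic illustration of this hybrid idea (a minimal sketch under assumed inputs, not the authors' algorithm), a region of interest can be kept lossless while the background is coarsely quantized before entropy coding:

```python
# Generic sketch of region-adaptive hybrid compression (illustrative only):
# preserve a region of interest exactly, quantize the background lossily,
# then entropy-code both parts.
import zlib
import numpy as np

# Synthetic stand-in for a medical image: a smooth gradient.
image = np.add.outer(np.arange(256), np.arange(256)).astype(np.uint8)

# Hypothetical ROI mask; in the paper's pipeline this would come from the
# clustering-based segmentation step, here it is just a fixed central square.
mask = np.zeros(image.shape, dtype=bool)
mask[96:160, 96:160] = True

roi_bytes = image[mask].tobytes()        # ROI: kept lossless
bg = (image[~mask] >> 4) << 4            # background: coarse 16-level quantization

payload = zlib.compress(roi_bytes) + zlib.compress(bg.tobytes())
print(f"{len(payload)} bytes vs {image.nbytes} raw")
```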

Article Synopsis
  • Video-based point cloud compression (V-PCC) is a new MPEG standard that effectively compresses both static and dynamic point clouds at various levels of quality loss.
  • In scenarios where the original point cloud isn't available, reduced-reference quality metrics are needed to evaluate visual quality without direct comparison to the original.
  • The study proposes a new metric, PCQAML, which uses a set of 19 selected features covering various aspects of point clouds and demonstrates superior performance over existing metrics on multiple statistical measures.