Entropy (Basel)
Division of Information Science and Engineering, KTH Royal Institute of Technology, 100-44 Stockholm, Sweden.
Published: January 2025
A private compression design problem is studied, where an encoder observes useful data Y, wishes to compress them using a variable-length code, and communicates them through an unsecured channel. Since Y is correlated with the private attribute X, the encoder uses a private compression mechanism to design an encoded message C and sends it over the channel. An adversary is assumed to have access to the output of the encoder, i.e., C, and tries to estimate X. Furthermore, it is assumed that both the encoder and the decoder have access to a shared secret key. In this work, the design goal is to encode the message C with the minimum possible average length that satisfies certain privacy constraints. We consider two scenarios: 1. zero privacy leakage, i.e., perfect privacy (secrecy); 2. non-zero privacy leakage, i.e., a non-perfect privacy constraint. In the perfect privacy scenario, we first study two different privacy mechanism design problems and find upper bounds on the entropy of the optimizers by solving a linear program. We use the obtained optimizers to design C. In two cases, we strengthen the existing bounds: 1. |X| ≥ |Y|; 2. the realization of (X, Y) follows a specific joint distribution. In particular, in the second case, we use two-part construction coding to achieve the upper bounds. Furthermore, in a numerical example, we study the obtained bounds and show that they can improve on existing results. Finally, we strengthen the obtained bounds using the minimum entropy coupling concept and a greedy entropy-based algorithm. In the non-perfect privacy scenario, we find upper and lower bounds on the average length of the encoded message using different privacy metrics and study them in special cases. For achievability, we use two-part construction coding and extended versions of the functional representation lemma. Lastly, in an example, we show that the bounds can be asymptotically tight.
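The abstract's final tightening step relies on minimum entropy coupling. A common greedy heuristic for this problem repeatedly pairs the largest remaining probability masses of the two marginals; the sketch below illustrates that generic heuristic (it is not necessarily the exact algorithm used in the paper):

```python
import numpy as np

def greedy_min_entropy_coupling(p, q):
    """Greedy heuristic: repeatedly match the largest remaining
    probability masses of the two marginals, concentrating the
    joint distribution and thereby keeping its entropy low."""
    p, q = list(p), list(q)
    joint = np.zeros((len(p), len(q)))
    for _ in range(len(p) + len(q) - 1):  # at most |p|+|q|-1 assignments
        i, j = int(np.argmax(p)), int(np.argmax(q))
        m = min(p[i], q[j])
        if m <= 0:
            break
        joint[i, j] += m  # assign the shared mass to one joint cell
        p[i] -= m
        q[j] -= m
    return joint

# couple a uniform binary source with a (0.5, 0.25, 0.25) source
joint = greedy_min_entropy_coupling([0.5, 0.5], [0.5, 0.25, 0.25])
```

By construction, the returned matrix has exactly the requested row and column marginals, while the greedy pairing keeps most of the mass in few cells.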
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11854754
DOI: http://dx.doi.org/10.3390/e27020124
Med Image Anal
March 2025
Department of Mechanical Engineering, City University of Hong Kong, Hong Kong Special Administrative Region of China; Department of Data and Systems Engineering, The University of Hong Kong, Hong Kong Special Administrative Region of China. Electronic address:
Federated learning (FL) has shown great potential in medical image computing, since it provides a decentralized learning paradigm that allows multiple clients to train a model collaboratively without privacy leakage. However, current studies have shown that data heterogeneity induces local learning bias in the classifiers and feature extractors of client models during local training, leading to performance degradation of the federated system. To address these issues, we propose a novel framework called Federated Bias eliMinating (FedBM) to remove local learning bias in heterogeneous FL, which mainly consists of two modules.
IEEE Trans Vis Comput Graph
March 2025
Motivated by the issue of privacy leakage and the need for more sophisticated protection methods for air-typing with XR devices, in this paper we propose AirtypeLogger, a new approach toward practical video-based attacks on the air-typing activities of XR users in virtual space. Unlike existing approaches, AirtypeLogger considers a scenario in which users occasionally type a short text fragment with semantic meaning under the surveillance of video cameras. It detects and localizes air-typing events in video streams and proposes a spatial-temporal representation to encode the keystrokes' relative positions and temporal order.
IEEE Trans Image Process
March 2025
Collaborative learning has gained significant traction for training deep learning models without sharing the participants' original data, particularly when dealing with sensitive data such as facial images. However, gradient inversion attacks can progressively reconstruct private data from shared gradients, and they have proven successful in extracting private training data. Nonetheless, our observations reveal that these methods exhibit suboptimal performance in face reconstruction and lose numerous facial details.
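The attack surface described above can be made concrete on a toy model. For a single-sample squared-error loss on a linear model, the shared gradient is a scalar multiple of the private input, so the input can be recovered in closed form when the label is known. This is a minimal illustration of the gradient-inversion principle, not the paper's method (the model, seed, and known-label assumption are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)             # model weights, known to the attacker
x_true = rng.normal(size=4)        # private training sample to be recovered
t = 1.0                            # label, assumed known to the attacker
g = 2 * (w @ x_true - t) * x_true  # observed gradient of (w.x - t)^2 w.r.t. w

# g = c * x with c = 2(w.x - t); eliminating x gives c^2 + 2*t*c - 2(w.g) = 0,
# whose discriminant t^2 + 2(w.g) equals (t + c)^2 and is never negative.
disc = np.sqrt(t**2 + 2 * (w @ g))
candidates = [-t + disc, -t - disc]
# keep the root whose reconstruction reproduces the observed gradient
x_rec = min((g / c for c in candidates if abs(c) > 1e-12),
            key=lambda x: np.linalg.norm(2 * (w @ x - t) * x - g))
```

Deep networks require iterative optimization rather than a closed form, but the leakage mechanism is the same: gradients are deterministic functions of the private inputs.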
IEEE J Biomed Health Inform
February 2025
The exponential growth of sensitive patient information and diagnostic records in digital healthcare systems has increased the complexity of data protection, while frequent medical data breaches severely compromise system security and reliability. Existing privacy protection techniques often lack robustness and real-time capability in high-noise, high-packet-loss, and dynamic network environments, limiting their effectiveness in detecting healthcare data leaks. To address these challenges, we propose a Swarm Intelligence-Based Network Watermarking (SIBW) method for real-time detection of private data leakage in digital healthcare systems.
IEEE Trans Pattern Anal Mach Intell
February 2025
With growing concerns about information security, protecting the privacy of user-sensitive data has become crucial. The rapid development of multi-modal retrieval technologies poses new threats, making sensitive data more vulnerable to leakage and malicious mining. To address this, we introduce a Proactive Adversarial Multi-modal Learning (PAML) approach that transforms sensitive data into adversarial counterparts, evading malicious multi-modal retrieval and ensuring privacy.
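Transforming data into adversarial counterparts, as PAML does, builds on standard adversarial example generation. A minimal single-step sketch in the spirit of FGSM on a toy logistic model (not the paper's multi-modal method; the weights, input, and step size are illustrative):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM on a logistic model sigma(w.x + b): move x in the
    sign direction of the loss gradient for true label y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted P(y=1)
    grad_x = (p - y) * w                    # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([1.0, -0.5])                   # w.x = 2.0 -> classified as 1
x_adv = fgsm_perturb(x, w, b, y=1, eps=1.5)  # pushed across the boundary
```

A retrieval-evading perturbation works analogously: the gradient is taken with respect to a similarity score rather than a classification loss, so matching against the protected sample degrades.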
© LitMetric 2025. All rights reserved.