The utilization of deep learning and invertible networks for image hiding has been proven effective and secure. These methods can conceal large amounts of information while maintaining high image quality and security. However, existing methods often lack precision in selecting the hidden regions and primarily rely on residual structures. They also fail to fully exploit low-level features, such as edges and textures. These issues lead to reduced quality in model generation results, a heightened risk of network overfitting, and diminished generalization capability. In this article, we propose a novel image hiding method based on invertible networks, called MFI-Net. The method introduces a new upsampling convolution block (UCB) and combines it with a residual dense block that employs the parametric rectified linear unit (PReLU) activation function, effectively utilizing multi-level information (low-level and high-level features) of the image. Additionally, a novel frequency domain loss (FDL) is introduced, which constrains the secret information to be hidden in regions of the cover image that are more suitable for concealing the data. Extensive experiments on the DIV2K, COCO, and ImageNet datasets demonstrate that MFI-Net consistently outperforms state-of-the-art methods, achieving superior image quality metrics. Furthermore, we apply the proposed method to digital collection images, achieving significant success.
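The abstract names three components: an upsampling convolution block (UCB), a residual dense block with PReLU activations, and a frequency domain loss (FDL). The authors' code is not reproduced here, so the following PyTorch sketch is only a plausible reading of those components; the layer counts, channel widths, and the specific low-frequency FFT penalty are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: layer counts, channel widths, and the exact
# frequency-domain penalty are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class ResidualDenseBlockPReLU(nn.Module):
    """Residual dense block with PReLU activations (hypothetical layout)."""

    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.PReLU(),
            )
            for i in range(num_layers)
        )
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # dense connections
        return x + self.fuse(torch.cat(feats, dim=1))      # local residual


class UpsamplingConvBlock(nn.Module):
    """Hypothetical UCB: sub-pixel upsampling followed by a refinement conv."""

    def __init__(self, channels=64, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)


def frequency_domain_loss(stego, cover):
    """Toy frequency-domain loss: penalise low-frequency differences between
    stego and cover, nudging the payload toward texture-rich regions."""
    fs, fc = torch.fft.rfft2(stego), torch.fft.rfft2(cover)
    h, w = fs.shape[-2] // 4, fs.shape[-1] // 4          # keep only low bands
    return torch.mean(torch.abs(fs[..., :h, :w] - fc[..., :h, :w]))
```

In the actual method these blocks would presumably sit inside the invertible coupling layers, with the FDL weighted against the usual concealing and revealing reconstruction losses.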
| Download full-text PDF | Source |
| --- | --- |
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11888862 | PMC |
| http://dx.doi.org/10.7717/peerj-cs.2668 | DOI Listing |
PeerJ Comput Sci
February 2025
School of Computer Science and Technology, Shandong Technology and Business University, Yantai, Shandong, China.
Proc Natl Acad Sci U S A
March 2025
Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511.
Our intuition suggests that when a movie is played in reverse, our perception of motion at each location in the reversed movie will be perfectly inverted compared to the original. This intuition is also reflected in classical theoretical and practical models of motion estimation, in which velocity flow fields invert when inputs are reversed in time. However, here we report that this symmetry of motion perception upon time reversal is broken in real visual systems.
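As a concrete illustration of the classical intuition this snippet describes (not the stimuli or models used in the study), the minimal NumPy sketch below estimates the per-frame shift of a drifting pattern; the estimate negates exactly when the frame order is reversed.

```python
# Minimal sketch of the classical intuition: a simple shift-based motion
# estimate inverts exactly when the frame order is reversed.
import numpy as np


def estimate_shift(frame_a, frame_b, max_shift=5):
    """Return the integer shift (in pixels) that best aligns frame_b to frame_a."""
    shifts = list(range(-max_shift, max_shift + 1))
    errors = [np.sum((np.roll(frame_b, -s) - frame_a) ** 2) for s in shifts]
    return shifts[int(np.argmin(errors))]


# A 1-D random pattern drifting rightward at 2 pixels per frame.
rng = np.random.default_rng(0)
pattern = rng.standard_normal(256)
movie = [np.roll(pattern, 2 * t) for t in range(10)]

forward = [estimate_shift(movie[t], movie[t + 1]) for t in range(9)]
reversed_movie = movie[::-1]
backward = [estimate_shift(reversed_movie[t], reversed_movie[t + 1]) for t in range(9)]

print(forward)   # [2, 2, ...]  velocity estimated from the original movie
print(backward)  # [-2, -2, ...] perfectly inverted under time reversal
```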
It is common in nature to see aggregation of objects in space. Exploring the mechanism behind the locations of such clustered observations can be essential to understanding the phenomenon, for example identifying the source of spatial heterogeneity or comparing the pattern with other event-generating processes in the same domain. Log-Gaussian Cox processes (LGCPs) represent an important class of models for quantifying aggregation in a spatial point pattern.
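For readers unfamiliar with the model class, the sketch below simulates a log-Gaussian Cox process on the unit square by exponentiating a Gaussian random field and drawing Poisson counts per grid cell; the exponential covariance and all parameter values are illustrative assumptions, not the models fitted in the article.

```python
# Minimal LGCP simulation on the unit square (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(1)

# Discretise the unit square into an n x n grid of cells.
n = 32
xs = (np.arange(n) + 0.5) / n
gx, gy = np.meshgrid(xs, xs)
coords = np.column_stack([gx.ravel(), gy.ravel()])

# Sample a Gaussian random field with an exponential covariance.
mu, sigma2, length_scale = 3.0, 1.0, 0.2
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = sigma2 * np.exp(-dists / length_scale)
field = rng.multivariate_normal(np.full(n * n, mu), cov)

# The Cox intensity is the exponential of the field; draw Poisson counts per cell.
intensity = np.exp(field)            # events per unit area in each cell
cell_area = 1.0 / (n * n)
counts = rng.poisson(intensity * cell_area)
print("total points:", counts.sum())
```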
Sci Rep
March 2025
School of Computer Science and Artificial Intelligence, Zhengzhou University, Zhengzhou, 450001, China.
As an image enhancement technology, multi-modal image fusion primarily aims to retain the salient information from multi-source image pairs in a single image, generating a fused result that contains complementary features and can facilitate downstream visual tasks. However, dual-stream methods that use convolutional neural networks (CNNs) as backbones are limited by small receptive fields, Transformer-based methods are time-consuming, and both leave cross-domain information underexplored. This study proposes an innovative image fusion model designed for multi-modal images, encompassing pairs of infrared and visible images and multi-source medical images.
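To make the "dual-stream CNN" family concrete, here is a generic fusion baseline of the kind this snippet critiques: two small CNN encoders whose features are concatenated and decoded into a fused image. The layer widths and concatenation-based fusion are assumptions for illustration and do not reflect the proposed model.

```python
# Generic dual-stream CNN fusion baseline (illustrative, not the article's model).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class DualStreamFusion(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.enc_ir = nn.Sequential(conv_block(1, feat), conv_block(feat, feat))
        self.enc_vis = nn.Sequential(conv_block(1, feat), conv_block(feat, feat))
        self.decoder = nn.Sequential(
            conv_block(2 * feat, feat),          # fuse concatenated features
            nn.Conv2d(feat, 1, 3, padding=1),
            nn.Sigmoid(),                        # fused image in [0, 1]
        )

    def forward(self, infrared, visible):
        f = torch.cat([self.enc_ir(infrared), self.enc_vis(visible)], dim=1)
        return self.decoder(f)


fused = DualStreamFusion()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(fused.shape)  # torch.Size([1, 1, 64, 64])
```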
IEEE Trans Pattern Anal Mach Intell
February 2025
While deep neural networks (NNs) have significantly advanced image compressed sensing (CS) by improving reconstruction quality, the need to train current CS NNs from scratch constrains their effectiveness and hampers rapid deployment. Although recent methods use pre-trained diffusion models for image reconstruction, they suffer from slow inference and limited adaptability to CS. To tackle these challenges, this paper proposes Invertible Diffusion Models (IDM), a novel, efficient, end-to-end diffusion-based CS method.
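For context on the compressed-sensing problem itself (not on the proposed IDM), the sketch below sets up the standard measurement model y = Ax with a random Gaussian sensing matrix and recovers a sparse signal with plain ISTA-style iterations; the signal size, sampling rate, and threshold are illustrative choices.

```python
# Background sketch of the image CS setting: far fewer measurements than
# unknowns, recovered by simple gradient + soft-threshold (ISTA) iterations.
import numpy as np

rng = np.random.default_rng(42)

n, m = 256, 64                                   # 256 unknowns, 64 measurements (25% rate)
x_true = np.zeros(n)
support = rng.choice(n, size=8, replace=False)
x_true[support] = rng.standard_normal(8)         # sparse ground-truth signal

A = rng.standard_normal((m, n)) / np.sqrt(m)     # random Gaussian sensing matrix
y = A @ x_true                                   # compressed measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2           # safe step size (1 / Lipschitz constant)
x = np.zeros(n)
for _ in range(300):
    x = x - step * A.T @ (A @ x - y)             # gradient step on 0.5 * ||Ax - y||^2
    x = np.sign(x) * np.maximum(np.abs(x) - 1e-3, 0.0)  # soft-threshold (sparsity prior)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```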