In this paper, we introduce Dense D2C-Net, a novel unobtrusive display-to-camera (D2C) communication scheme that embeds additional data in visual content and extracts it through a deep convolutional neural network (DCNN). The encoder of Dense D2C-Net establishes connections among all layers processing the cover image, fostering feature reuse that preserves the visual quality of the image. Binary data are embedded in the Y channel owing to its resilience against distortion from image compression and its lower sensitivity to color transformations. The encoder integrates hybrid layers that combine feature maps from the cover image and the input binary data to hide the embedded data efficiently, while multiple noise layers mitigate the distortions that the optical wireless channel imposes on the transmitted data. At the decoder, a series of 2D convolutional layers extracts the output binary data from the captured image. Experiments in a real-world setting with a smartphone camera and a digital display show that the proposed scheme outperforms conventional DCNN-based D2C schemes across varying transmission distance, capture angle, display brightness, and camera resolution.
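The Y-channel choice rests on a standard colour-space fact: luma survives JPEG-style compression and colour transformations better than chroma. A minimal sketch of that embedding step using the ITU-R BT.601 (JPEG/JFIF) conversion constants follows; the `embed_residual_in_y` function and its residual input are hypothetical stand-ins for the paper's encoder output, not its actual implementation.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """HxWx3 float RGB in [0, 255] -> (Y, Cb, Cr) channels, ITU-R BT.601."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse BT.601 conversion back to an HxWx3 float RGB array."""
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return np.stack([r, g, b], axis=-1)

def embed_residual_in_y(cover_rgb, residual):
    """Add an encoder-produced residual (hypothetical stand-in for the
    Dense D2C-Net encoder output) to the Y channel only, leaving the
    chroma channels, and hence the perceived colour, untouched."""
    y, cb, cr = rgb_to_ycbcr(cover_rgb)
    return np.clip(ycbcr_to_rgb(y + residual, cb, cr), 0.0, 255.0)
```

Because only Y is perturbed, chroma subsampling and colour shifts in the display-capture channel leave the embedded signal largely intact, which is the property the abstract appeals to.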


Source: http://dx.doi.org/10.1364/OE.498067

Publication Analysis

Top Keywords: dense d2c-net (12), binary data (12), cover image (8), data (6), image (5), dense (4), d2c-net dense (4), dense connection (4), connection network (4), network display-to-camera (4)
