Privacy-Preserving Semantic Segmentation Using Vision Transformer.

J Imaging

Department of Computer Science, Tokyo Metropolitan University, 6-6 Asahigaoka, Hino-shi, Tokyo 191-0065, Japan.

Published: August 2022

In this paper, we propose a privacy-preserving semantic segmentation method that uses encrypted images and models based on the vision transformer (ViT), called the segmentation transformer (SETR). The combined use of encrypted images and SETR allows us not only to submit images without sensitive visual information to SETR as query images but also to maintain the same accuracy as that of using plain images. Previous privacy-preserving methods with encrypted images for deep neural networks have focused on image classification tasks, and these conventional methods achieve lower accuracy than models trained with plain images because of the influence of image encryption. To overcome these issues, we propose a novel method for privacy-preserving semantic segmentation that, for the first time, exploits the embedding structure of the ViT. In experiments, the proposed privacy-preserving semantic segmentation was demonstrated to achieve the same accuracy as that of using plain images even when encrypted images are used.
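The abstract does not spell out the encryption scheme, so the following Python sketch illustrates one plausible instantiation of the stated idea: block-wise pixel shuffling whose block size matches the ViT patch size, combined with the corresponding permutation of the (linear) patch-embedding weights so that the embedded tokens of an encrypted image equal those of the plain image. The function names, patch size, and key handling are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumed scheme, not the authors' code): block-wise pixel
# shuffling with block size equal to the ViT patch size, plus a matching
# permutation of the linear patch-embedding weights.
import numpy as np

PATCH = 16      # assumed ViT/SETR patch size
CHANNELS = 3

def key_to_permutation(key: int, n: int) -> np.ndarray:
    """Derive a fixed permutation of n elements from a secret key."""
    rng = np.random.default_rng(key)
    return rng.permutation(n)

def encrypt_image(img: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """Shuffle pixels inside every PATCH x PATCH block with the same permutation.

    img: (H, W, C) array with H and W divisible by PATCH.
    perm: permutation of length PATCH*PATCH*C acting on each flattened block.
    """
    h, w, c = img.shape
    out = img.copy()
    for y in range(0, h, PATCH):
        for x in range(0, w, PATCH):
            block = out[y:y + PATCH, x:x + PATCH, :].reshape(-1)
            out[y:y + PATCH, x:x + PATCH, :] = block[perm].reshape(PATCH, PATCH, c)
    return out

def adapt_patch_embedding(weight: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """Permute the columns of the linear patch-embedding weight to match.

    weight: (embed_dim, PATCH*PATCH*C). Because the embedding is linear,
    permuting a patch's pixels and permuting the weight columns cancel out.
    """
    return weight[:, perm]

if __name__ == "__main__":
    perm = key_to_permutation(1234, PATCH * PATCH * CHANNELS)

    img = np.random.rand(64, 64, CHANNELS)              # toy "query image"
    W = np.random.rand(768, PATCH * PATCH * CHANNELS)   # toy embedding weight

    enc = encrypt_image(img, perm)
    W_enc = adapt_patch_embedding(W, perm)

    # Tokens from (encrypted image, adapted model) match (plain image, plain model).
    block_plain = img[:PATCH, :PATCH, :].reshape(-1)
    block_enc = enc[:PATCH, :PATCH, :].reshape(-1)
    assert np.allclose(W_enc @ block_enc, W @ block_plain)
```

Because the patch embedding is a linear map on each flattened patch, the pixel permutation applied during encryption and the column permutation applied to the embedding weights cancel exactly, which is consistent with the paper's claim that accuracy on encrypted images matches that on plain images.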

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9503913
http://dx.doi.org/10.3390/jimaging8090233

Publication Analysis

Top Keywords
privacy-preserving semantic (16); semantic segmentation (16); encrypted images (16); plain images (12); images (9); vision transformer (8); accuracy plain (8); privacy-preserving (5); segmentation (5); segmentation vision (4)
