Existing secure multiparty computation (MPC) protocols built from secret sharing usually assume a fast network, which limits their practicality on low-bandwidth, high-latency networks. A proven remedy is to reduce the number of communication rounds as much as possible, or to construct a constant-round protocol. In this work, we provide a series of constant-round secure protocols for quantized neural network (QNN) inference, built from masked secret sharing (MSS) in the three-party honest-majority setting. Our experiments show that the protocols are practical and well suited to low-bandwidth, high-latency networks. To the best of our knowledge, this is the first implementation of QNN inference based on masked secret sharing.
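The masking idea behind MSS can be illustrated with a minimal sketch. In this toy Python example (the ring Z_{2^64}, the function names, and the sharing layout are illustrative assumptions, not the paper's actual construction), a value x is hidden behind a public masked value delta = x + lambda, while the random mask lambda is additively shared among the three parties; x can be recovered only when the mask shares are pooled.

```python
# Illustrative sketch of masked secret sharing (MSS) over Z_{2^64}.
# Assumption: additive masking with a 3-way additively shared mask;
# the paper's protocol details (and its QNN-specific steps) differ.
import secrets

MOD = 2 ** 64  # arithmetic ring Z_{2^64}, a common choice in MPC

def share(x):
    """Mask x and additively share the mask lambda among 3 parties.

    Returns the public masked value delta = x + lambda (mod 2^64)
    and the three additive shares of lambda.
    """
    lam1 = secrets.randbelow(MOD)
    lam2 = secrets.randbelow(MOD)
    lam3 = secrets.randbelow(MOD)
    lam = (lam1 + lam2 + lam3) % MOD
    delta = (x + lam) % MOD          # known to all parties
    return delta, (lam1, lam2, lam3)

def reconstruct(delta, lam_shares):
    """Recover x = delta - lambda once all mask shares are pooled."""
    lam = sum(lam_shares) % MOD
    return (delta - lam) % MOD

delta, lam_shares = share(42)
assert reconstruct(delta, lam_shares) == 42
```

Because delta is public, many linear operations on masked values need no communication at all, which is one reason masking-based sharings lend themselves to round-efficient protocols.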

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9955064
DOI: http://dx.doi.org/10.3390/e25020389

