Binary neural networks (BNNs) are an effective means of reducing the computational and memory costs of a model and have achieved considerable progress in the super-resolution (SR) field. However, there is still a noticeable performance gap between a binary SR network and its full-precision counterpart. Considering that the information density of quantized features is far lower than that of full-precision features, we aim to improve the precision of quantized features so as to produce sufficiently rich output activations for the SR task. First, we observe that a multibit value can be approximated by multiple 1-bit values and that the computational capability of binary convolution can be improved by approximating the multibit convolution process. Then, we propose a mixed binary representation set to approximate multibit activations, which is effective in compensating for the precision loss caused by quantization. Finally, we present a new precision-driven binary convolution (PDBC) module, which increases the convolution precision and preserves image detail information without extra computation. Compared with standard binary convolution, our method can substantially reduce the information loss caused by binarization. In experiments, our method consistently shows superior performance over the baseline models and can surpass state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and visual quality.
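To make the first observation concrete, the sketch below is a minimal NumPy illustration, not the paper's PDBC module or its mixed binary representation set: it only shows that a k-bit activation map can be written as a weighted sum of 1-bit planes, so by linearity a multibit convolution decomposes into k binary convolutions. The bit width `k`, the array sizes, and the helper `bit_planes` are assumptions made purely for illustration.

```python
import numpy as np
from scipy.signal import convolve2d

def bit_planes(x, k):
    """Split a k-bit unsigned integer activation map into k binary (0/1) planes."""
    return [(x >> i) & 1 for i in range(k)]

k = 4
rng = np.random.default_rng(0)
x = rng.integers(0, 2 ** k, size=(8, 8))   # k-bit quantized activations
w = rng.standard_normal((3, 3))            # convolution kernel

# Direct multibit convolution.
direct = convolve2d(x, w, mode="valid")

# Same result recovered from k binary convolutions, reweighted by powers of two.
approx = sum((2 ** i) * convolve2d(b, w, mode="valid")
             for i, b in enumerate(bit_planes(x, k)))

assert np.allclose(direct, approx)
```

Under these assumptions the decomposition is exact; in a binarized network the interesting trade-off is how many 1-bit terms (and which weighting) are spent to recover enough activation precision, which is the question the proposed mixed binary representation addresses.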
DOI: http://dx.doi.org/10.1109/TNNLS.2022.3201528