Measuring the similarity between patches in images is a fundamental building block in various tasks. Naturally, the patch size has a major impact on the matching quality and on the performance of the consequent application. Under the assumption that the patch database is sufficiently sampled, large patches (e.g., 21 × 21) should be preferred over small ones (e.g., 7 × 7). However, this dense-sampling assumption is rarely true; in most cases, large patches cannot find relevant nearby examples. This phenomenon is a consequence of the curse of dimensionality, which states that the database size should grow exponentially with the patch size to ensure proper matches. This explains why most applications favor a small patch size. Is there a way to keep the simplicity of working with small patches while getting some of the benefits that large patches provide? In this paper, we offer such an approach. We propose to concatenate the regular content of a conventional (small) patch with a compact representation of its (large) surroundings, i.e., its context. Thus, with a minor increase in dimension (e.g., ten additional values appended to the patch representation), we implicitly/softly describe the information of a large patch. The additional descriptors are computed from the self-similarity behavior of the patch's surroundings. We show that this approach achieves better matches than conventional-size patches, without any need to increase the database size. The effectiveness of the proposed method is demonstrated on three distinct problems: 1) external natural image denoising; 2) depth image super-resolution; and 3) motion-compensated frame-rate up-conversion.
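To make the construction concrete, the sketch below builds such a context-augmented patch for a single pixel: the raw 7 × 7 patch is concatenated with ten self-similarity values measured against patches sampled inside its 21 × 21 surroundings. The function name `con_patch`, the ring-shaped sampling pattern, and the exponential kernel with bandwidth `h` are illustrative assumptions, not the exact descriptor defined in the paper.

```python
import numpy as np

def con_patch(image, y, x, patch=7, context=21, n_ctx=10, h=10.0):
    """Concatenate a small patch with a compact self-similarity
    descriptor of its larger surroundings.

    Illustrative sketch only: the offset pattern, kernel, and
    normalization here are assumptions, not the paper's recipe.
    Assumes (y, x) lies far enough from the image border.
    """
    r, R = patch // 2, context // 2
    center = image[y - r:y + r + 1, x - r:x + r + 1].astype(float)

    # Self-similarity of the surroundings: compare the central patch
    # against patches at fixed offsets on a ring of radius R inside
    # the (context x context) window.
    angles = np.linspace(0.0, 2.0 * np.pi, n_ctx, endpoint=False)
    ctx = np.empty(n_ctx)
    for i, a in enumerate(angles):
        dy = int(round(R * np.sin(a)))
        dx = int(round(R * np.cos(a)))
        nbr = image[y + dy - r:y + dy + r + 1,
                    x + dx - r:x + dx + r + 1].astype(float)
        dist2 = np.mean((center - nbr) ** 2)
        ctx[i] = np.exp(-dist2 / (h ** 2))  # similarity weight in (0, 1]

    # The augmented descriptor: 49 pixel values plus 10 context values.
    return np.concatenate([center.ravel(), ctx])
```

Nearest-neighbor search then proceeds on these slightly longer vectors exactly as it would on plain patches; in practice one would also weight the context entries relative to the pixel entries, a design choice left open in this sketch.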
DOI: http://dx.doi.org/10.1109/TIP.2016.2576402