In the latest video coding standard, Versatile Video Coding (VVC), more directional intra modes and reference lines are utilized to improve prediction efficiency. However, complex content still cannot be predicted well with only the adjacent reference samples. Although nonlocal prediction has been proposed in existing algorithms to further improve prediction efficiency, explicit signalling or matching error potentially limits the coding efficiency. To address these issues, we propose a joint local and nonlocal progressive prediction scheme that improves nonlocal prediction accuracy without additional signalling. Specifically, template matching based prediction (TMP) is first conducted to derive an initial nonlocal predictor. Based on this first prediction and previously decoded reconstruction information, a local template, including inner textures and the neighboring reconstruction, is carefully designed. With the local template involved in the nonlocal matching process, a more accurate nonlocal predictor can be found progressively in the second prediction. Finally, the coefficients from the two predictions are fused and transmitted in the bitstream. In this way, a more accurate nonlocal predictor is derived implicitly from local information instead of being explicitly signalled. Experimental results on the VVC reference software VTM-9.0 show that the proposed method achieves 1.02% BD-rate reduction for natural sequences and 2.31% BD-rate reduction for screen content videos under the all intra (AI) configuration.
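The two-stage idea described above can be illustrated with a small sketch. The Python/NumPy code below is a simplified, hypothetical illustration and not the authors' VTM-9.0 implementation: the block size, template width, search range, SAD cost, and the pixel-domain weighted average used for fusion are all assumptions made for this example (the paper fuses the coefficients of the two predictions), and every function name here is invented for illustration.

```python
import numpy as np

def l_shaped_template(recon, x, y, bs, tw):
    """L-shaped template of width tw above and left of the bs x bs block at (y, x)."""
    top = recon[y - tw:y, x - tw:x + bs]   # top strip, including the corner
    left = recon[y:y + bs, x - tw:x]       # left strip
    return np.concatenate([top.ravel(), left.ravel()])

def tmp_search(recon, x, y, bs, tw, search, inner=None):
    """Brute-force template matching over the already-decoded area above the block.
    If `inner` is given, those samples are appended to the template, so the match
    is constrained jointly by the local neighbourhood and the stage-1 prediction."""
    width = recon.shape[1]
    cur = l_shaped_template(recon, x, y, bs, tw)
    if inner is not None:
        cur = np.concatenate([cur, inner.ravel()])
    best_cost, best_pred = None, None
    for dy in range(-search, -bs + 1):          # candidates strictly above the block
        for dx in range(-search, search + 1):
            cx, cy = x + dx, y + dy
            if cx - tw < 0 or cy - tw < 0 or cx + bs > width:
                continue                        # candidate template out of bounds
            cand = l_shaped_template(recon, cx, cy, bs, tw)
            cand_block = recon[cy:cy + bs, cx:cx + bs]
            if inner is not None:
                cand = np.concatenate([cand, cand_block.ravel()])
            cost = int(np.abs(cur.astype(np.int32) - cand.astype(np.int32)).sum())  # SAD
            if best_cost is None or cost < best_cost:
                best_cost, best_pred = cost, cand_block.copy()
    return best_pred

def progressive_prediction(recon, x, y, bs=8, tw=2, search=32, weight=0.5):
    """Two-stage (progressive) nonlocal prediction with a simple pixel-domain fusion."""
    # Stage 1: conventional TMP using only the neighbouring reconstruction.
    pred1 = tmp_search(recon, x, y, bs, tw, search)
    if pred1 is None:                           # no decoded candidates (e.g. top rows)
        return None
    # Stage 2: repeat the search with a template that also contains the
    # stage-1 inner textures, progressively refining the nonlocal match.
    pred2 = tmp_search(recon, x, y, bs, tw, search, inner=pred1)
    # Fusion (the paper fuses the coefficients of the two predictions; a weighted
    # average of the two predictors is used here purely for illustration).
    fused = weight * pred1 + (1.0 - weight) * pred2
    return np.clip(fused, 0, 255).astype(recon.dtype)
```

For example, `progressive_prediction(recon, x=64, y=64)` would return a fused 8x8 predictor for the block at (64, 64) of a decoded luma picture. The essential property mirrored from the abstract is that the second search reuses the stage-1 predictor as part of the template, so the decoder can repeat both searches on its own reconstruction and the refined nonlocal predictor needs no extra signalling.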
DOI: http://dx.doi.org/10.1109/TIP.2022.3161831