Recent deep-learning-based methods have shown promise in converting grayscale images to color images. However, most of them accept only limited user inputs (no inputs, only global inputs, or only local inputs) to control the colorized output. A key difficulty lies in differentiating the influences of different kinds of inputs. To address this problem, we propose a two-stage deep colorization method that allows users to control the results by flexibly setting both global and local inputs. The key steps include enabling color themes as global inputs by extracting K mean colors and generating K-color maps to define a global theme loss, and designing a loss function that differentiates the influences of the different inputs without introducing artifacts. We also propose a color theme recommendation method to help users choose color themes. Based on the colorization model, we further propose an image compression scheme that supports variable compression ratios in a single network. Colorization experiments show that our method can flexibly control the colorized results with only a few inputs and produces state-of-the-art results. Compression experiments show that our method achieves much higher image quality than state-of-the-art methods at the same compression ratio.
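The abstract does not specify how the K mean colors or the K-color maps are computed, so the following is only a minimal illustrative sketch of one plausible interpretation: a K-color theme is extracted from a reference image with k-means, and a per-pixel K-color map is formed by snapping each pixel to its nearest theme color. The function names and the nearest-color assignment are assumptions; in the paper's setting, such a map could then serve as the target of a global theme loss.

```python
# Illustrative sketch (not the authors' code): extract a K-color theme with
# k-means and build a per-pixel K-color map from it. Names and the
# nearest-color assignment rule are assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans


def extract_color_theme(image_rgb: np.ndarray, k: int = 5) -> np.ndarray:
    """Return K mean colors (K x 3) found by k-means over all pixels."""
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    return kmeans.cluster_centers_  # shape (k, 3)


def build_k_color_map(image_rgb: np.ndarray, theme: np.ndarray) -> np.ndarray:
    """Replace every pixel by its nearest theme color, yielding a K-color map."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    # Squared distance from each pixel to each theme color: shape (N, K)
    dists = ((pixels[:, None, :] - theme[None, :, :]) ** 2).sum(axis=2)
    nearest = dists.argmin(axis=1)
    return theme[nearest].reshape(h, w, 3)


if __name__ == "__main__":
    # Toy example on a random "image"; in practice this would be a reference photo.
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    theme = extract_color_theme(img, k=5)
    k_color_map = build_k_color_map(img, theme)
    print(theme.shape, k_color_map.shape)  # (5, 3) (64, 64, 3)
```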
DOI: 10.1109/TVCG.2020.3021510