This paper presents a data-driven approach for automatically generating cartoon faces in different styles from a given portrait image. Our stylization pipeline consists of two steps: an offline analysis step that learns how to select and compose facial components from the databases, and a runtime synthesis step that generates the cartoon face by assembling parts from a database of stylized facial components. We propose an optimization framework that, for a given artistic style, simultaneously considers the desired image-cartoon relationships of the facial components and a proper adjustment of the image composition. We measure the similarity between facial components of the input image and our cartoon database via image feature matching, and introduce a probabilistic framework for modeling the relationships between cartoon facial components. We incorporate prior knowledge about image-cartoon relationships and the optimal composition of facial components, extracted from a set of cartoon faces, to maintain a natural, consistent, and attractive look in the results. We demonstrate the generality and robustness of our approach by applying it to a variety of portrait images, and compare our output with stylized results created by artists through a comprehensive user study.
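The two ingredients of this optimization, a unary image-cartoon matching cost derived from feature similarity and a pairwise consistency prior learned offline from cartoon faces, can be made concrete with a small sketch. The Python snippet below is illustrative only: all names (COMPONENTS, unary, pairwise, energy) and the random stand-in data are assumptions, not the paper's implementation, which would use real image descriptors and a more scalable solver.

```python
# Hedged sketch: pick one cartoon part per facial component so that a
# combined energy -- a unary image-cartoon matching cost plus a pairwise
# style-consistency prior -- is minimized. Everything here is a stand-in
# for the paper's learned models, not the authors' code.

import itertools
import numpy as np

COMPONENTS = ["eyes", "nose", "mouth"]   # facial components considered
N_CANDIDATES = 4                         # cartoon parts per component

rng = np.random.default_rng(0)

# Unary term: distance between the input photo's component descriptor and
# each cartoon candidate's descriptor (stand-in for the paper's image
# feature matching).
unary = {c: rng.random(N_CANDIDATES) for c in COMPONENTS}

# Pairwise term: -log P(candidate_i, candidate_j), a prior that would be
# learned offline from a set of cartoon faces (stand-in for the paper's
# probabilistic model of component relationships).
pairwise = {
    (a, b): -np.log(rng.dirichlet(np.ones(N_CANDIDATES * N_CANDIDATES))
                    .reshape(N_CANDIDATES, N_CANDIDATES))
    for a, b in itertools.combinations(COMPONENTS, 2)
}

def energy(assignment, lam=0.5):
    """Total energy of one assignment (dict: component -> candidate index)."""
    e = sum(unary[c][assignment[c]] for c in COMPONENTS)
    e += lam * sum(pairwise[(a, b)][assignment[a], assignment[b]]
                   for a, b in itertools.combinations(COMPONENTS, 2))
    return e

# Exhaustive search is feasible for a handful of components; a real system
# would need a more efficient optimizer over larger candidate sets.
best = min(
    (dict(zip(COMPONENTS, idx)) for idx in
     itertools.product(range(N_CANDIDATES), repeat=len(COMPONENTS))),
    key=energy,
)
print("selected cartoon parts:", best, "energy:", round(energy(best), 3))
```

The weight lam trades off fidelity to the input photo against stylistic consistency among the chosen parts; its value here is arbitrary.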

Source
DOI: http://dx.doi.org/10.1109/TIP.2016.2628581

Publication Analysis

Top Keywords (frequency)

facial components: 24
cartoon faces: 12
faces styles: 8
image-cartoon relationships: 8
cartoon: 6
facial: 6
components: 6
data-driven synthesis: 4
synthesis cartoon: 4
styles paper: 4
