In addition to 3D geometry, accurate texture representation is important when digitizing real objects into virtual worlds. With a single consumer RGBD sensor, accurate textures for static objects can be obtained by fusing multi-frame information; however, extending this process to dynamic objects, whose textures typically vary over time, is difficult. To address this problem, we propose a compact keyframe-based representation that decouples a dynamic texture into a basic static texture and a set of multiplicative changing maps. With this representation, the proposed method first aligns textures recorded at multiple keyframes with the reconstructed dynamic geometry of the object. Errors in the alignment and geometry are then compensated for in an iterative linear optimization framework. With the reconstructed texture, we then employ a scheme to synthesize the dynamic object from arbitrary viewpoints: by jointly considering temporal and local pose similarities, the dynamic textures of all keyframes are fused to guarantee high-quality image generation. Experimental results demonstrate that the proposed method handles various dynamic objects, including faces, bodies, cloth, and toys, and qualitative and quantitative comparisons show that it outperforms state-of-the-art solutions.
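The abstract gives only the high-level form of the representation. As a rough illustration, the Python sketch below assumes a per-keyframe texture of the form T_t(u, v) = B(u, v) · M_t(u, v), where B is the basic static texture and M_t is a multiplicative changing map, and it fuses keyframes with Gaussian similarity weights over time and pose. The function names, the 6-DoF pose vectors, and the Gaussian weighting are all illustrative assumptions, not details confirmed by the paper.

```python
import numpy as np

# Hypothetical sketch of the keyframe-based dynamic texture representation
# described in the abstract: each keyframe texture T_t is decoupled into a
# shared static texture B and a multiplicative changing map M_t, i.e.
#   T_t(u, v) = B(u, v) * M_t(u, v)   (element-wise).
# The fusion weighting below is an assumption for illustration only.

def reconstruct_keyframe_texture(static_tex: np.ndarray,
                                 changing_map: np.ndarray) -> np.ndarray:
    """Recover one keyframe's texture from the compact representation."""
    return static_tex * changing_map  # element-wise multiplication

def fuse_keyframes(changing_maps: list,
                   keyframe_times: np.ndarray,
                   keyframe_poses: np.ndarray,
                   query_time: float,
                   query_pose: np.ndarray,
                   sigma_t: float = 1.0,
                   sigma_p: float = 1.0) -> np.ndarray:
    """Blend per-keyframe changing maps for a novel time/viewpoint,
    weighting each keyframe by temporal and pose similarity jointly
    (Gaussian kernels here are an assumed, illustrative choice)."""
    w_time = np.exp(-((keyframe_times - query_time) ** 2) / (2 * sigma_t ** 2))
    pose_dist = np.linalg.norm(keyframe_poses - query_pose, axis=1)
    w_pose = np.exp(-(pose_dist ** 2) / (2 * sigma_p ** 2))
    weights = w_time * w_pose
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, changing_maps))

# Toy usage: three keyframes of a 4x4 single-channel texture.
rng = np.random.default_rng(0)
B = rng.uniform(0.2, 1.0, size=(4, 4))           # basic static texture
maps = [rng.uniform(0.5, 1.5, size=(4, 4)) for _ in range(3)]
times = np.array([0.0, 1.0, 2.0])
poses = rng.normal(size=(3, 6))                  # e.g. 6-DoF pose vectors
M = fuse_keyframes(maps, times, poses, query_time=1.2, query_pose=poses[1])
T = reconstruct_keyframe_texture(B, M)           # synthesized texture
```

Because the changing maps are multiplicative, blending them before multiplying by the shared static texture keeps the representation compact: only one full-resolution texture B is stored, plus one low-cost map per keyframe.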

Source: http://dx.doi.org/10.1109/TVCG.2021.3064846
