Generative Rendering: 2D Mesh

A controllable video generation model

Traditional 3D content creation tools empower users to bring their imagination to life by giving them direct control over a scene's geometry, appearance, motion, and camera path. Creating computer-generated videos, however, remains a tedious manual process, one that emerging text-to-video diffusion models can automate. Despite their promise, video diffusion models are difficult to control, hindering users from applying their own creativity rather than amplifying it. To address this challenge, we propose a novel approach that combines the controllability of dynamic 3D meshes with the expressiveness and editability of emerging diffusion models. Our method takes an animated, low-fidelity rendered mesh as input and injects ground-truth correspondences derived from the dynamic mesh into various stages of a pre-trained text-to-image generation model, producing high-quality, temporally consistent frames. We demonstrate our method on various examples where motion is obtained by animating rigged assets or changing the camera path.
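The abstract describes injecting mesh-derived correspondences into a pre-trained diffusion model without spelling out the mechanism. The sketch below is one plausible reading, not the authors' implementation: it assumes the correspondences arrive as per-pixel UV maps rendered from the animated mesh, and it blends intermediate features across frames through a shared texture atlas so that the same mesh point receives similar features in every frame. All function names (splat_to_texture, inject_correspondence) and the blending weight alpha are hypothetical.

```python
import torch
import torch.nn.functional as F


def splat_to_texture(feat, uv, tex_res):
    """Scatter per-pixel frame features into a shared UV texture atlas.

    feat: (C, H, W) features of one frame.
    uv:   (H, W, 2) per-pixel UV coordinates in [0, 1] rendered from the mesh.
    """
    C = feat.shape[0]
    idx = (uv.clamp(0, 1) * (tex_res - 1)).round().long()        # (H, W, 2)
    lin = (idx[..., 1] * tex_res + idx[..., 0]).reshape(-1)      # (H*W,) texel ids
    flat = feat.reshape(C, -1)                                   # (C, H*W)
    tex = torch.zeros(C, tex_res * tex_res)
    cnt = torch.zeros(tex_res * tex_res)
    tex.index_add_(1, lin, flat)                                 # sum features per texel
    cnt.index_add_(0, lin, torch.ones_like(lin, dtype=flat.dtype))
    return (tex / cnt.clamp(min=1)).reshape(C, tex_res, tex_res)  # average


def sample_texture(tex, uv):
    """Bilinearly sample the atlas at another frame's UV coordinates."""
    grid = (uv * 2 - 1).unsqueeze(0)                             # grid_sample wants [-1, 1]
    out = F.grid_sample(tex.unsqueeze(0), grid, align_corners=True)
    return out.squeeze(0)                                        # (C, H, W)


def inject_correspondence(feat_tgt, feat_src, uv_src, uv_tgt, alpha=0.8):
    """Blend source-frame features into the target frame wherever both
    frames observe the same mesh point; leave unobserved regions alone."""
    tex = splat_to_texture(feat_src, uv_src, tex_res=feat_src.shape[-1])
    warped = sample_texture(tex, uv_tgt)
    seen = (warped.abs().sum(0, keepdim=True) > 0).float()       # atlas holes -> keep target
    blended = alpha * warped + (1 - alpha) * feat_tgt
    return seen * blended + (1 - seen) * feat_tgt


if __name__ == "__main__":
    C, H, W = 8, 32, 32
    feat_src, feat_tgt = torch.randn(C, H, W), torch.randn(C, H, W)
    uv_src = torch.rand(H, W, 2)       # toy UVs for frame t
    uv_tgt = (uv_src + 0.05) % 1.0     # stand-in for mesh motion at frame t+1
    out = inject_correspondence(feat_tgt, feat_src, uv_src, uv_tgt)
    print(out.shape)                   # torch.Size([8, 32, 32])
```

Routing features through UV space sidesteps an explicit per-pixel nearest-neighbor search: two pixels in different frames correspond exactly when they splat to and sample from the same texel.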
