Synthesizing Moving People with 3D Control

Single-image generation of realistic human motion

Categories: Image Processing, 3D Animation
This product is built on a diffusion-model framework that generates 3D motion sequences of a person from a single image. Its core components are (1) learning priors over the unseen parts of the body and clothing, and (2) rendering new body poses with plausible clothing and texture. The model is trained in texture-map space, which makes it invariant to pose and viewpoint and therefore more efficient. On top of this, a diffusion-based rendering pipeline controlled by 3D human poses produces realistic renderings of the person. The method generates image sequences that follow the 3D pose targets while remaining visually faithful to the input image, and the 3D control also allows rendering the figure along arbitrary synthetic camera trajectories. Experiments show that the approach generates image sequences with continuous motion and complex poses, outperforming previous methods.
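The description above outlines a two-stage pipeline: first infer a complete, pose-invariant texture map from the single input image, then render each target 3D pose from it. A minimal sketch of that data flow is below; all function names are hypothetical, and the learned diffusion models (the texture-inpainting prior and the pose-conditioned renderer) are stubbed with trivial placeholders.

```python
# Hedged sketch of the two-stage pipeline: texture-map completion,
# then pose-controlled rendering. Names and stubs are illustrative,
# not the product's actual API.
import numpy as np

TEX_RES = 64  # illustrative texture-map resolution

def extract_partial_texture(image, visibility):
    """Back-project the visible pixels of the input image into the
    pose/view-invariant texture-map space (stubbed as a masked copy)."""
    tex = np.zeros((TEX_RES, TEX_RES, 3))
    tex[visibility] = image[visibility]
    return tex

def inpaint_texture(partial_tex, visibility):
    """Stage 1: a learned diffusion prior would hallucinate the unseen
    body and clothing regions; here we fill them with the mean visible
    color as a placeholder."""
    mean_color = partial_tex[visibility].mean(axis=0)
    full_tex = partial_tex.copy()
    full_tex[~visibility] = mean_color
    return full_tex

def render_pose(full_tex, pose):
    """Stage 2: a pose-conditioned diffusion renderer would produce a
    realistic frame for the target 3D pose; here we just shift the
    texture as a stand-in."""
    return np.roll(full_tex, int(pose) % TEX_RES, axis=1)

# One rendered frame per target 3D pose in the motion sequence.
image = np.random.rand(TEX_RES, TEX_RES, 3)
visibility = np.random.rand(TEX_RES, TEX_RES) > 0.5
tex = inpaint_texture(extract_partial_texture(image, visibility), visibility)
frames = [render_pose(tex, p) for p in (0, 10, 20)]
```

The key design point this illustrates is that stage 1 operates entirely in texture-map space, so the same completed texture can be reused for every pose and camera trajectory in stage 2.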

Synthesizing Moving People with 3D Control Visit Over Time

Monthly Visits: 17,788,201
Bounce Rate: 44.87%
Pages per Visit: 5.4
Visit Duration: 00:05:32

Synthesizing Moving People with 3D Control Visit Trend

Synthesizing Moving People with 3D Control Visit Geography

Synthesizing Moving People with 3D Control Traffic Sources

Synthesizing Moving People with 3D Control Alternatives