At the 2024 World Artificial Intelligence Conference (WAIC), SenseTime launched Vimi, a controllable character video generation model. Built on SenseTime's large model technology, Vimi generates character videos that follow a target motion from a single photo, with precise control over the face and limbs. The model supports multiple driving inputs, including video, animation, sound, and text. Drawing on years of accumulated facial tracking technology and fine-grained control of detail, it can produce video content with highly consistent, harmonious lighting and shadow.

Vimi's stability is particularly notable: it can generate single-shot character videos longer than one minute without the image quality degrading over time. It can also adjust the surrounding scene to match the character's movements, simulating changes in camera angle and the natural sway of hair for realistic visual effects. In addition, Vimi can simulate changes in lighting and shadow, giving video creators broad creative freedom.


Vimi Camera, the first consumer application built on the Vimi model, primarily targets female users and their entertainment and creative needs. After a user uploads high-definition photos of a person taken from different angles, Vimi Camera automatically generates a digital avatar and portrait videos in a variety of styles, offering diverse generation options. It can also create playful character emoticons from a single photo, with multiple modes to choose from, enabling personalized creation.

Vimi Camera is currently in open beta testing. Interested users can apply for access by following the official account and completing the reservation form.