AIsphere (AiShi Technology), the company behind PixVerse, has announced that PixVerse V2.5 will officially launch for global users on August 22.

In July this year, AiShi Technology released PixVerse V2, a significantly upgraded AI video generation product. Built on a Diffusion + Transformer (DiT) architecture, PixVerse V2 introduces several innovations in video generation. It produces longer, more consistent, and more engaging videos and can create multiple clips at once: each clip can run up to 8 seconds, for a combined duration of up to 40 seconds.
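
The announcement does not disclose the architecture's internals, but published DiT designs typically interleave self-attention and MLP layers, each modulated by an embedding of the diffusion timestep (adaLN-style conditioning). The PyTorch sketch below illustrates that general pattern only; DiTBlock and its details are illustrative assumptions, not PixVerse's actual code:

```python
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    """Minimal DiT-style block: attention + MLP, each scaled/shifted/gated
    by parameters predicted from the diffusion timestep embedding.
    Illustrative sketch only; PixVerse V2's internal design is not public."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Timestep embedding predicts per-block shift/scale/gate parameters.
        self.ada = nn.Linear(dim, 6 * dim)

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) video latent tokens
        # t_emb: (batch, dim) diffusion timestep embedding
        s1, sc1, g1, s2, sc2, g2 = self.ada(t_emb).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + sc1.unsqueeze(1)) + s1.unsqueeze(1)
        x = x + g1.unsqueeze(1) * self.attn(h, h, h)[0]
        h = self.norm2(x) * (1 + sc2.unsqueeze(1)) + s2.unsqueeze(1)
        return x + g2.unsqueeze(1) * self.mlp(h)
```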


On the technical side, PixVerse V2 incorporates a self-developed spatio-temporal attention mechanism that significantly improves the model's spatial and temporal awareness and its handling of complex scenes. The product also leverages multi-modal models to strengthen text comprehension, aligning text prompts precisely with video content and enhancing the model's understanding and expressive capabilities.
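
The details of the self-developed mechanism are proprietary, but a common way to give video transformers both spatial and temporal awareness is factorized spatio-temporal attention: tokens first attend within each frame, then across frames at each spatial position. The sketch below shows that general pattern under those assumptions; FactorizedSTAttention is a hypothetical name, not PixVerse's API:

```python
import torch
import torch.nn as nn

class FactorizedSTAttention(nn.Module):
    """Factorized spatio-temporal attention sketch: spatial attention within
    each frame, then temporal attention across frames for each patch position.
    A common video-transformer pattern, shown here only as an illustration."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, patches, dim) tokenized video latents
        b, f, p, d = x.shape
        # Spatial pass: each frame's patches attend to one another.
        s = x.reshape(b * f, p, d)
        s = s + self.spatial(s, s, s)[0]
        # Temporal pass: each patch position attends across all frames.
        t = s.reshape(b, f, p, d).permute(0, 2, 1, 3).reshape(b * p, f, d)
        t = t + self.temporal(t, t, t)[0]
        return t.reshape(b, p, f, d).permute(0, 2, 1, 3)
```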

Furthermore, PixVerse V2 improves on conventional training by using a weighted loss to speed model convergence and raise training efficiency. Based on user feedback and community discussions, the AiShi Technology team has emphasized the importance of consistency in video creation: PixVerse V2 supports one-click generation of 1-5 consecutive video segments while maintaining consistent character appearances, visual styles, and scene elements.
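
The article does not say which weighting scheme PixVerse V2 uses. As one hedged illustration, min-SNR weighting is a published technique that down-weights easy, high-SNR diffusion timesteps to accelerate convergence; the helper below (weighted_diffusion_loss is a hypothetical name) sketches that idea:

```python
import torch

def weighted_diffusion_loss(pred: torch.Tensor, target: torch.Tensor,
                            snr: torch.Tensor, gamma: float = 5.0) -> torch.Tensor:
    """Min-SNR-style weighted MSE sketch. NOT PixVerse's disclosed method;
    shown only as one known way a weighted loss can speed convergence.

    pred, target: (batch, ...) model output and regression target
    snr: (batch,) signal-to-noise ratio of each sampled timestep
    """
    per_sample = ((pred - target) ** 2).flatten(1).mean(dim=1)  # MSE per sample
    weights = torch.clamp(snr, max=gamma) / snr                 # min(SNR, gamma) / SNR
    return (weights * per_sample).mean()
```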

PixVerse V2 also supports secondary editing of generated content: the system intelligently recognizes and automatically associates elements, letting users flexibly replace and adjust a video's subject, actions, style, and camera movements to enrich creative diversity. AiShi Technology says multiple iterative upgrades will follow over the next three months to deliver an even better AI video generation experience.