Finally, everyone can use ByteDance's video generation model, PixelDance! ByteDance's video generation models PixelDance and Seaweed are now fully available on Jimeng AI, and users can experience them through the Jimeng AI web version and mobile app by selecting "Video P2.0Pro" or "Video S2.0Pro".
Both models require 20 points to generate a 5-second video, while P2.0Pro requires 40 points to generate a 10-second video.
After AIbase tested the models for a while, the overall takeaway is that P2.0Pro is the better choice when complex movements are needed, but it relies more heavily on prompts; achieving good results requires mastering certain prompting techniques.
For small movements, S2.0Pro sometimes produces better video results than P2.0Pro, which makes it friendlier for beginners. Often you don't even need a prompt: you can convert an image to video directly, and the AI will automatically interpret the image and turn it into a suitable video effect.
According to feedback from several beta testers, P2.0Pro performs excellently when generating 10-second videos, maintaining good coherence between scenes and characters even across 3-5 camera transitions. With careful prompt adjustments, the model can also achieve stunning special effects; users can apply advanced techniques such as temporal prompts and long shots to enhance the expressiveness and storytelling of their videos.
Here is AIbase's testing experience:
First, I provided an image of a surfing cat. The results from S2.0Pro and P2.0Pro are as follows:
S2.0Pro Result
P2.0Pro Result
As we can see, S2.0Pro accurately reproduces the style, color, and other detailed features of the input image, while P2.0Pro sometimes exhibits color deviation. In terms of movement, both models perform steadily with minimal issues.
Next, I tested both models with Elon Musk:
I simply input "Musk leans closer to the camera and gives a thumbs up" to see the results~
S2.0Pro Result
P2.0Pro Result
For this relatively simple video, the differences between the two models don't seem very significant, but P2.0Pro added some expressions to Musk, making it look more vivid.
Now let's increase the difficulty a bit:
I provided a longer and more complex prompt: "The camera zooms in, focusing on a young man dressed in a plain white robe. He holds an ancient long sword, looking determined. The wind blows through his hair as the sky darkens. A massive dark green dragon swoops down from the clouds, its scales glinting with a cold light."
S2.0Pro Result
P2.0Pro Result
As we can see, P2.0Pro adheres more closely to the prompt, strictly following its first half ("The camera zooms in, focusing on a young man dressed in a plain white robe"), but the dragon's movement is somewhat limited. S2.0Pro's results are more random, yet its dragon moves better than P2.0Pro's. In practice, users can mix and match the two models based on their needs.
It is worth noting that Jimeng also offers a lightweight video model, S2.0 (a stripped-down version of S2.0Pro). It generates videos faster, and although the quality is sometimes lower, each video costs only 5 points, making it a more cost-effective option.
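The point costs quoted above can be compared directly on a per-second basis. A minimal sketch (the `points_per_second` helper is hypothetical, not an official Jimeng API; costs are taken from the pricing mentioned in this article):

```python
# Point costs on Jimeng AI as described in this article.
# Keys are (model name, video length in seconds) -> points charged.
COSTS = {
    ("S2.0", 5): 5,       # lightweight model, 5-second video
    ("S2.0Pro", 5): 20,   # 5-second video
    ("P2.0Pro", 5): 20,   # 5-second video
    ("P2.0Pro", 10): 40,  # 10-second video
}

def points_per_second(model: str, seconds: int) -> float:
    """Return the point cost per second of generated video."""
    return COSTS[(model, seconds)] / seconds

# S2.0 works out to 1 point per second, versus 4 points
# per second for S2.0Pro and P2.0Pro at either length.
```

By this measure, S2.0 is four times cheaper per second of video than either Pro model, which is why it suits quick drafts before spending points on a Pro generation.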
Here are the results I got without providing any prompts:
If you're interested, you can try it out yourself: https://top.aibase.com/tool/jimeng