Luma AI has recently launched two magic-like new features: the ability to generate a video from a first and last frame, and the option to extend a video by an additional 5 seconds. It's as if the video has been cast with an extension spell, filling every frame with new possibilities. Occasional hard cuts may occur, but isn't that exactly the charm of editing? It fills every second of the video with surprise and creativity.

The future of video-generation control will likely be built on such extension operations. During extension, you can guide the output not only with prompts but also with images, even achieving visual branching control. This could be a revolution in visual and creative expression, making video generation more intelligent and personalized.

Video from X blogger Guizang

Previously, Luma AI released its innovative "Extend" video feature, which intelligently analyzes video content and lengthens a clip based on user prompts while staying consistent with the original video's style and characters. Users can easily extend a video beyond 10 seconds without worrying about shifts in quality or style.

Dream Machine is the text-to-video model Luma AI released in mid-June. The model supports not only text input but also images as guidance for video generation. The quality, motion consistency, color, lighting, saturation, and camera movement of its output are comparable to OpenAI's Sora.

Dream Machine can simulate physical properties of the real world, such as gravity, collisions, and lighting changes, making its generated videos more realistic, with strong motion continuity and visual quality. More importantly, Dream Machine is free for all users.

Free trial address: https://top.aibase.com/tool/dream-machine