Alibaba has introduced Wan2.7-Video, a new AI model designed to help creators produce complete videos with higher quality and efficiency.
The system moves users beyond simple clip generation to full video creation, offering director-level control: creators can manage storytelling, editing, and post-production from a single platform.
The release comes shortly after Alibaba launched Wan2.7-Image, showing a fast expansion of its AI tools for both video and image creation.
Wan2.7-Video includes four tools: text-to-video, image-to-video, reference-to-video, and video editing. These features combine text, images, audio, and video into a single workflow.
The platform addresses common problems such as inconsistent scenes and weak narrative flow. It can generate videos of 2 to 15 seconds at 720p or 1080p resolution, and also supports enterprise use through APIs.
Users can edit videos using natural language instructions. This includes changing characters, dialogue, scenes, and camera movements while keeping lighting and style consistent.
The system automatically syncs lip movements with updated dialogue and preserves each character's distinct voice. It also supports multiple characters, keeping their appearances and voices consistent across scenes.
Wan2.7-Video can expand a simple prompt into a full storyboard with cinematic effects such as drone shots and 360-degree camera movement. It also enables smooth scene transitions by controlling how each clip begins and ends.
Alibaba also launched Wan2.7-Image, which focuses on finer personalization and precise color control. Users can adjust detailed features such as facial structure and match exact color codes for brand consistency.
The tool can generate multiple images at once and supports detailed text rendering, including complex formulas and multiple languages. A higher-end version, Wan2.7-Image-Pro, offers improved quality and 4K output.
Both tools are now available on Alibaba Cloud’s Model Studio and related platforms, aiming to provide creators with faster and more powerful multimedia production tools.