Pollo AI Unveils Wan 2.5 with Native Audio for Next-Level Video Creation
Pollo AI is taking AI video creation a step further with the launch of Wan 2.5, the latest model from Wan AI. Following in the footsteps of Veo 3, Wan 2.5 introduces native audio generation, letting creators automatically add scene-matching sound to their videos—or, for more control, integrate their own audio seamlessly.
The update also packs significant performance improvements: enhanced motion dynamics, sharper visuals, improved prompt understanding, and better frame-to-frame consistency, promising smoother and more realistic AI-generated video content.
Global Creative Challenge: “Heartbeat”
To mark the launch, Pollo AI has teamed up with the WanMuse Team for the Worldwide Wan 2.5 Submission Call, themed “Heartbeat.” Open until November 12, the contest invites storytellers to submit AI-generated videos in two categories:
- Professional Creators: videos of 30 seconds or longer
- Hobbyist Creators: videos of 10–30 seconds
Early participants get perks: the first 100 entrants receive 500 free credits and a one-month Pollo AI Pro membership, while 30 selected submissions earn 2,000 free credits and a one-month Pro membership.
“Our collaboration with WanMuse on the ‘Heartbeat’ challenge reflects our belief that powerful technology should ignite powerful, human stories,” said Emma Chen, CPO of Pollo AI.
Accessible, Creative, and Affordable
Getting started with Wan 2.5 is straightforward: users select the model in Pollo AI’s image-to-video generator and start creating. To celebrate the launch, Pollo AI is offering credits at 50% off for seven days, through September 30.
The combination of native audio, enhanced visuals, and motion refinement positions Wan 2.5 as a compelling choice for creators seeking a next-level AI video workflow. It also signals a growing trend in the AI content space: models that not only generate visuals but also automatically handle sound, reducing post-production work.
Why It Matters
AI video tools are evolving beyond basic animation and deepfake-style edits. With models like Wan 2.5, creators—whether professionals or hobbyists—can produce richer, more immersive content with fewer manual steps. For brands and storytellers, this could accelerate content creation cycles and unlock entirely new ways to engage audiences.