The generative AI video race just got a high-budget contender. Video Rebirth, a Singapore-based startup founded by former Tencent scientist Dr. Wei Liu, has secured $50 million in fresh funding from a mix of global financial and strategic investors. The company aims to upend today’s “AI video for everyone” trend by focusing instead on the professionals—those who care about motion, lighting, and realism as much as speed and creativity.
From Text-to-Video to “World Models”
While tools like Pika Labs, Runway, and Synthesia have made AI video accessible, they often struggle to deliver studio-quality results. Flickering frames, inconsistent motion, and unrealistic physics still betray the limits of current generative models. Video Rebirth thinks it has the fix.
Dr. Liu calls his team’s ambition a “world model” for AI video—technology capable of generating entire environments that obey physical rules. The startup’s proprietary Physics Native Attention architecture is designed to model interactions between objects, light, and movement more accurately than existing diffusion-based systems. It powers the company’s internal model family, nicknamed “Bach”, which aims to orchestrate visual scenes with cinematic fidelity rather than just generating clips that “look good enough.”
Aiming Beyond Consumer Novelty
The $50M injection will bankroll the final stretch toward Video Rebirth’s 1.0 product, slated for release in December 2025. Rather than chasing viral short-form videos, the company’s sights are set on film studios, ad agencies, e-commerce platforms, and animation houses—markets that demand control, quality, and repeatability.
As Dr. Liu puts it: “We’re not here to make another text-to-video toy. We’re here to build a platform for creators who demand cinematic quality and physical consistency.”
The Next Phase of Generative Video
The timing could prove strategic. With AI models like OpenAI’s Sora and Google’s Veo demonstrating jaw-dropping realism but limited accessibility, professional-grade video generation remains an open field. If Video Rebirth can combine Sora-level visuals with usable creative control, it might carve out a lucrative niche just as the market transitions from experimentation to production.
The startup’s focus on “AI Generated Entertainment (AIGE)” also hints at broader ambitions: world-model AI that could simulate interactive scenes for gaming, film previsualization, and virtual commerce. In other words, the next Pixar—or Unreal Engine—could be algorithmic.
Video Rebirth’s approach aligns with a larger trend: AI moving from assistive to generative infrastructure. The question now isn’t whether AI can make video—it’s whether it can make one worth watching twice.