Amazon Web Services (AWS) and TwelveLabs, the video understanding company, have announced the integration of TwelveLabs' multimodal foundation models, Marengo and Pegasus, into Amazon Bedrock, a fully managed service that gives developers access to high-performing foundation models through a single API. With this move, AWS becomes the first cloud provider to offer TwelveLabs' video AI capabilities, enabling enterprises to turn their video data into actionable intelligence at scale. With TwelveLabs in Amazon Bedrock, developers can now harness state-of-the-art video search, classification, summarization, and insight extraction, all backed by the security, privacy, and scalability of AWS.
Revolutionizing Access to the World’s Largest Unstructured Data Source
Video data represents nearly 80% of the world’s data, yet it remains largely unsearchable and underutilized. TwelveLabs changes that equation by enabling deep, multimodal video understanding, transforming decades of footage into indexable, searchable, and analyzable digital assets.
“By making our models available through Amazon Bedrock, we’re empowering enterprises to bring video understanding to their existing infrastructure,” said Jae Lee, Co-founder and CEO of TwelveLabs.
“Users can now find precise video moments in seconds and extract meaning from them, whether the footage is 10 years old or 10 minutes old.”
Features of TwelveLabs Models in Amazon Bedrock
TwelveLabs’ models bring advanced AI capabilities for video understanding directly into the hands of developers and enterprises:
- Natural language video search: Finds exact moments in video from plain-language queries (see the sketch below).
- Multimodal analysis: Processes visuals, audio, and text simultaneously.
- No pre-labeled data required: Understands content contextually, without custom training.
- Temporal intelligence: Connects related events over time within video sequences.
- Enterprise scalability: Handles massive video libraries efficiently, with built-in governance.
These capabilities enable sports networks, broadcasters, streaming platforms, and advertisers to convert static archives into dynamic business tools—fueling operational efficiency and new monetization streams.
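To make the search capability concrete, here is a minimal sketch of how natural language video search could be assembled on top of Amazon Bedrock: embed the query with Marengo, then rank pre-computed clip embeddings by cosine similarity. The model ID, request fields, and response field below are placeholders chosen for illustration, not the documented schema; check the Bedrock model catalog for the actual values.

```python
import json

import boto3
import numpy as np

# Bedrock Runtime client; the region must be one where the TwelveLabs
# models are offered.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder model ID: consult the Amazon Bedrock model catalog for the
# actual Marengo identifier available in your account.
MARENGO_MODEL_ID = "twelvelabs.marengo-embed-2-7-v1:0"


def embed_text(query: str) -> np.ndarray:
    """Embed a plain-language query into Marengo's multimodal space.

    The request and response fields here are assumptions for
    illustration, not the documented schema.
    """
    response = bedrock.invoke_model(
        modelId=MARENGO_MODEL_ID,
        body=json.dumps({"inputType": "text", "inputText": query}),
    )
    payload = json.loads(response["body"].read())
    return np.asarray(payload["embedding"])


def search(query: str, clip_embeddings: dict[str, np.ndarray], top_k: int = 5):
    """Rank pre-computed clip embeddings by cosine similarity to the query."""
    q = embed_text(query)
    q /= np.linalg.norm(q)
    scores = {
        clip_id: float(vec @ q / np.linalg.norm(vec))
        for clip_id, vec in clip_embeddings.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```

Here, clip_embeddings would be built offline by embedding each archive segment with the same model and storing the vectors in an index. Note that video-scale inputs may require Bedrock's asynchronous invocation API (start_async_invoke) rather than a synchronous invoke_model call.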
Enterprise Impact Across Industries
TwelveLabs’ integration into Amazon Bedrock opens transformative opportunities for multiple sectors:
- Media & Entertainment: Automate highlight generation, theme discovery, and content packaging.
- Sports teams & leagues: Analyze gameplay and generate fan-first video experiences on demand.
- Broadcast & News: Quickly locate and repurpose relevant footage from massive archives.
- Streaming & Ads: Dynamically insert context-relevant ads and tailor distribution strategies.
“With powerful tools like Amazon Bedrock and TwelveLabs’ models, we’re creating smarter, more immersive experiences for fans,” said Humza Teherany, Chief Strategy & Innovation Officer at MLSE.
Seamless AI Integration for Developers
Available through Amazon Bedrock, TwelveLabs’ models are fully managed and serverless, requiring no infrastructure oversight. This allows developers to:
- Integrate video understanding into apps without deep AI expertise.
- Build search, categorization, summarization, and content extraction tools (see the sketch after this list).
- Scale seamlessly across content volumes while maintaining performance and security.
- Maintain full data control and utilize cost-control features to manage AI responsibly.
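As an illustration of how lightweight such an integration can be, the sketch below shows a single summarization call against Pegasus through the Bedrock Runtime. The model ID, payload fields, and response field are hypothetical stand-ins; the real Pegasus request schema is defined in the Amazon Bedrock documentation.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder model ID: consult the Amazon Bedrock model catalog for the
# actual Pegasus identifier available in your account.
PEGASUS_MODEL_ID = "twelvelabs.pegasus-1-2-v1:0"


def summarize_video(s3_uri: str, prompt: str = "Summarize this video.") -> str:
    """Ask Pegasus for a text summary of a video stored in Amazon S3.

    The payload and response fields are assumed for illustration only.
    """
    response = bedrock.invoke_model(
        modelId=PEGASUS_MODEL_ID,
        body=json.dumps({
            # Assumed payload: an instruction plus a pointer to the video.
            "inputPrompt": prompt,
            "mediaSource": {"s3Location": {"uri": s3_uri}},
        }),
    )
    payload = json.loads(response["body"].read())
    return payload.get("message", "")


# Example: summary = summarize_video("s3://my-bucket/archive/match-0412.mp4")
```

Because the models are fully managed, there is no endpoint to provision or scale: the same call pattern works whether an application summarizes one clip or an entire archive.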
“Natural language semantic search is a strategic unlock for our entertainment customers,” said Samira Panah Bakhtiar, GM of Media & Entertainment at AWS.
“With TwelveLabs in Amazon Bedrock, customers can now make sense of any video moment—quickly and securely.”
Customer Use Cases: Cineverse and Monks
Companies like Cineverse and Monks are already leveraging these AI tools to revolutionize video value chains across IP, advertising, and streaming.
“TwelveLabs in Amazon Bedrock makes it easier to monetize content across sports, news, and entertainment,” said Lewis Smithingham, EVP at Monks.
Expanding AWS–TwelveLabs Collaboration
This integration follows a broader Strategic Collaboration Agreement (SCA) between AWS and TwelveLabs. Using Amazon SageMaker HyperPod, TwelveLabs is accelerating the development and training of its foundation models, scaling rapidly while reducing costs.
“This marks the next phase in our collaboration with AWS, making our video AI more accessible to enterprises worldwide,” added Jae Lee.
Unlocking Video with Foundation Models
The availability of TwelveLabs’ Marengo and Pegasus models on Amazon Bedrock signals a major shift in how organizations access and utilize their video data. By providing developers with seamless access to powerful video understanding models, AWS and TwelveLabs are democratizing access to one of the world’s richest but least-explored data sources.
The result: new use cases, better user experiences, and smarter decision-making powered by the next generation of multimodal AI.