The PyTorch Foundation, the Linux Foundation-backed hub for open source AI, has announced that Ray is its newest foundation-hosted project. Originally developed by Anyscale, Ray is a distributed computing framework designed to streamline AI workloads—including data processing, model training, and inference at scale.
With AI teams often slowed by fragmented systems and compute inefficiencies, Ray aims to eliminate distributed computing bottlenecks, complementing PyTorch and vLLM by allowing seamless execution of workloads from a single machine to thousands of nodes. Since its inception at UC Berkeley, Ray has earned over 39,000 GitHub stars and more than 237 million downloads.
“The PyTorch Foundation is committed to fostering an open, interoperable, and production-ready AI ecosystem,” said Matt White, GM of AI at the Linux Foundation and Executive Director of the PyTorch Foundation. “By bringing Ray under the foundation, we are uniting critical components to build next-generation AI systems and supporting developers in training, serving, and deploying models at scale.”
Ray’s Capabilities for Modern AI Workloads
Ray provides a scalable compute framework to address the heavy computational demands of AI:
- Multimodal Data Processing: Handles massive, diverse datasets—including text, images, audio, and video—efficiently in parallel.
- Pre-training and Fine-tuning: Scales PyTorch and other ML frameworks across thousands of GPUs for both model pre-training and post-training fine-tuning.
- Distributed Inference: Serves production models with high throughput and low latency, orchestrating dynamic workloads across clusters.
By donating Ray to the PyTorch Foundation, Anyscale reinforces its commitment to open governance and long-term sustainability for the project.
“With Ray, our goal is to make distributed computing as straightforward as writing Python code,” said Robert Nishihara, co-founder of Anyscale. “Joining the PyTorch Foundation ensures Ray remains an open, community-driven backbone for AI developers.”
Together, PyTorch for model development, vLLM for inference, and Ray for distributed execution now form a unified open source foundation for AI. This integrated ecosystem allows teams to build and scale AI applications more efficiently, without struggling with fragmented infrastructure or proprietary lock-in.