Sycomp, A Technology Company, Inc. today announced a breakthrough in data-to-GPU performance, reaching 1.2 terabytes per second (TB/s) on Google Cloud Platform (GCP). This milestone accelerates data-intensive and high-performance computing (HPC) workloads such as Generative AI, Agentic AI, Computer Vision, Machine Learning (ML), Deep Learning (DL), and Natural Language Processing (NLP), empowering AI-driven scientific discovery and research.
Platform Features and Benefits
- Cutting-Edge Storage and Performance
Sycomp’s Intelligent Data Storage Platform leverages IBM Storage Scale and the Real Insight Storage Engine (RISE) for software-defined storage. It enables seamless evaluation of on-premises and cloud performance, ensuring optimal resource allocation.
- Hybrid Cloud Flexibility
The platform dynamically synchronizes data across Google Cloud Storage buckets, on-premises environments, and globally distributed cloud regions, providing transparent data mobility and low-latency access.
- Optimized Resource Utilization
Enhanced Google Kubernetes Engine (GKE) performance and improved price-performance on GCP’s Z3 storage-optimized systems help maximize throughput. The platform achieves 97% GPU utilization on Google’s A3 Ultra machines with single-node throughput exceeding 29 GB/s.
- Scalable IO Throughput
Delivering up to 1.2 TB/s of IO throughput, the platform supports the most demanding AI and HPC workloads with consistent, low-latency data access.
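The figures above are aggregate and per-node measurements of the platform itself. As a rough illustration only, the sketch below shows one generic way to sanity-check single-node sequential read throughput from a mounted file system in Python; the file path and block size are assumptions for illustration and are not part of Sycomp’s announcement, and production benchmarks typically use dedicated multi-threaded I/O tools rather than a single-stream read.

```python
import time

# Hypothetical mount point and test file; any large file on the file system
# under test works. These names are illustrative, not from the announcement.
TEST_FILE = "/mnt/data/throughput_test.bin"
BLOCK_SIZE = 16 * 1024 * 1024  # 16 MiB reads to keep syscall overhead low


def measure_read_throughput(path: str, block_size: int = BLOCK_SIZE) -> float:
    """Return sustained sequential read throughput in GB/s for one pass over `path`."""
    total_bytes = 0
    start = time.perf_counter()
    # Unbuffered reads; page-cache effects can still inflate results, so use a
    # file larger than RAM or drop caches between runs for a fair measurement.
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e9  # decimal GB/s, matching the figures above


if __name__ == "__main__":
    print(f"Sequential read: {measure_read_throughput(TEST_FILE):.2f} GB/s")
```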
Industry and Customer Impact
Dean Hildebrand, Technical Director, Office of the CTO at Google Cloud, noted, “GCP’s HPC and AI/ML customers demand amazing performance at competitive prices. Sycomp’s solution in the GCP Marketplace meets this need by enabling scalable, low latency reads with seamless integration across cloud storage.”
Saurabh Saxena, Vice President of Global Services and Engineering at Sycomp, stated, “With over 30 years of leadership in data center, HPC, and storage solutions, Sycomp’s Intelligent Data Storage Platform offers scalable, high-performance capabilities tailored for AI infrastructure and advanced technology design, supporting critical AI workloads globally.”