Hewlett Packard Enterprise announced significant enhancements to its portfolio of NVIDIA AI Computing by HPE solutions, designed to support the entire AI lifecycle. These updates target enterprises, service providers, sovereign entities, and research organizations, providing deeper integration with NVIDIA AI Enterprise and launching new compute, storage, and software offerings.
Strengthening Collaboration Between HPE and NVIDIA
“Our strong collaboration with NVIDIA continues to drive transformative outcomes for our shared customers,” said Antonio Neri, president and CEO of HPE. “By co-engineering cutting-edge AI technologies elevated by HPE’s robust solutions, we are empowering businesses to harness the full potential of these advancements throughout their organization, no matter where they are on their AI journey.”
Jensen Huang, founder and CEO of NVIDIA, added, “Together, NVIDIA and HPE are laying the foundation for businesses to harness intelligence as a new industrial resource that scales from the data center to the cloud and the edge.”
HPE Private Cloud AI Gains Enhanced NVIDIA AI Enterprise Support
HPE Private Cloud AI, a turnkey, cloud-based AI factory co-developed with NVIDIA, helps customers unify AI strategies across business units. The solution will now support feature branch model updates from NVIDIA AI Enterprise, including AI frameworks, microservices, and SDKs. This enables AI developers to test and validate software features early, supporting the scalable, safe deployment of generative and agentic AI applications.
Introducing HPE Alletra Storage MP X10000 SDK for NVIDIA AI Data Platform
HPE’s latest storage solution, Alletra Storage MP X10000, introduces an SDK that integrates with the NVIDIA AI Data Platform. This integration accelerates data pipelines and intelligent orchestration, enabling enterprises to streamline unstructured data ingestion, inference, and continuous learning.
Benefits include:
- Flexible inline data processing, vector indexing, and metadata enrichment.
- Accelerated data path with RDMA transfers between GPU memory, system memory, and storage.
- Modular scalability to align capacity and performance with workload demands.
This integration enables seamless data access for agentic AI applications from edge to cloud.
HPE ProLiant Compute DL380a Gen12 Servers Now Support NVIDIA RTX PRO 6000 Blackwell GPUs
The HPE ProLiant Compute DL380a Gen12 server, a leader in MLPerf Inference benchmarks, will soon feature up to 10 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, enhancing performance for enterprise AI workloads, including multimodal AI inference, model fine-tuning, and graphics applications.
Notable features include:
- Advanced air-cooled and liquid-cooled options.
- Enhanced security with Silicon Root of Trust and post-quantum cryptography readiness.
- Automated lifecycle management with AI-driven insights for energy efficiency.
Additional HPE servers topping MLPerf Inference v5.0 benchmarks include the DL384 Gen12 and Cray XD670, validating HPE’s AI innovation leadership.
Expanding AI Infrastructure Optimization with HPE OpsRamp Software
HPE OpsRamp Software will support NVIDIA RTX PRO 6000 Blackwell GPUs, providing SaaS-based AI infrastructure optimization. It offers full-stack workload observability, workflow automation, and AI-powered analytics to streamline operations of distributed AI environments. Deep integration with NVIDIA technologies ensures granular performance and resilience monitoring.