Super X AI Technology (NASDAQ: SUPX) is pushing the boundaries of enterprise AI compute with the XN9160-B300 AI Server, a high-performance system engineered for AI training, machine learning, and HPC workloads. Powered by NVIDIA’s Blackwell B300 GPUs, the 8U server is aimed at compressing AI projects that would otherwise take months into large-scale operations delivered on far shorter timelines.
“Organizations today need more than raw GPU power—they need scalable, reliable platforms capable of handling the next generation of AI workloads,” said a company spokesperson. With eight Blackwell B300 GPUs, 2,304GB of unified HBM3E memory, and ultra-high-speed networking, the XN9160-B300 is clearly built with ambitious AI workloads in mind.
Supercharged AI and HPC Performance
The server is designed to accelerate foundation model training, multimodal AI, reinforcement learning, and large-scale inference, while also handling HPC tasks such as climate modeling, drug discovery, seismic analysis, and insurance risk simulations. Its massive GPU memory pool eliminates offloading delays and supports enormous model contexts—critical for large language models and high-concurrency AI applications.
“NVIDIA’s Blackwell Ultra GPUs, coupled with second-generation Transformer Engines, provide a performance leap over previous architectures,” the company noted, highlighting 50% more NVFP4 compute and HBM memory per chip compared with the prior generation. This allows for faster training and inference without sacrificing efficiency, even at hyperscale deployments.
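As a rough sense of what that memory pool implies, the back-of-the-envelope sketch below divides the quoted 2,304GB total across the eight GPUs and estimates how large a model could sit entirely in HBM3E when weights are stored in 4-bit NVFP4. The 20% overhead allowance for KV cache and activations is an illustrative assumption, not a vendor figure.

```python
# Rough sizing sketch for the XN9160-B300's unified HBM3E pool.
# Per-GPU capacity is simply the quoted total divided by eight GPUs;
# the overhead factor is an assumption for illustration only.

TOTAL_HBM_GB = 2304          # quoted unified HBM3E capacity across 8 GPUs
NUM_GPUS = 8
BYTES_PER_PARAM_NVFP4 = 0.5  # 4-bit weights = 0.5 bytes per parameter
OVERHEAD = 0.20              # assumed headroom for KV cache / activations

per_gpu_gb = TOTAL_HBM_GB / NUM_GPUS
usable_gb = TOTAL_HBM_GB * (1 - OVERHEAD)
max_params_trillion = (usable_gb * 1e9 / BYTES_PER_PARAM_NVFP4) / 1e12

print(f"HBM3E per GPU:        {per_gpu_gb:.0f} GB")
print(f"Usable for weights:   {usable_gb:.0f} GB (assumed)")
print(f"Max 4-bit model size: ~{max_params_trillion:.1f}T parameters")
```

Under those assumptions, a model in the multi-trillion-parameter range could be held in GPU memory on a single node, which is what allows the system to avoid offloading weights to slower memory tiers.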
Enterprise-Ready Design
The XN9160-B300 is more than a GPU monster. It combines dual Intel Xeon 6 processors, 32 DDR5 DIMMs, and high-speed InfiniBand and Ethernet networking to ensure data throughput keeps pace with GPU performance. The fifth-generation NVLink interconnects allow the eight GPUs to act as a single, cohesive accelerator.
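To make the “single, cohesive accelerator” idea concrete, here is a minimal sketch of the kind of multi-GPU code such a node typically runs. It uses PyTorch’s NCCL backend, which carries intra-node GPU-to-GPU traffic over NVLink; this is a generic illustrative pattern, not software shipped by SuperX or tied to this specific server.

```python
# Minimal illustration of treating a node's 8 GPUs as one pool:
# each rank holds a partial result, and an NCCL all-reduce (riding on
# NVLink for intra-node transfers) combines results across GPUs.
# Launch with: torchrun --nproc_per_node=8 allreduce_demo.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")        # NCCL handles GPU-to-GPU comms
    rank = dist.get_rank()
    local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
    torch.cuda.set_device(local_rank)

    # Each GPU computes a partial result (here, just its own rank).
    partial = torch.tensor([float(rank)], device="cuda")

    # Sum the partial results across all participating GPUs in place.
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)

    if rank == 0:
        print(f"Sum across {dist.get_world_size()} GPUs: {partial.item()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The same collective-communication pattern underpins tensor- and data-parallel training frameworks, which is why fast GPU-to-GPU interconnects matter as much as raw per-GPU throughput.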
Energy efficiency and reliability are built in, with 12×3,000W 80 PLUS Titanium redundant power supplies and robust PCIe Gen5 and NVMe storage options. This makes the system suitable for continuous, mission-critical AI workloads in enterprise data centers.
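For facility planning, the nameplate PSU figures translate straightforwardly; the 6+6 redundant split in the sketch below is an assumption for illustration, as the exact redundancy scheme is not stated.

```python
# Nameplate power arithmetic for the 12 x 3,000W 80 PLUS Titanium supplies.
# The assumed 6+6 (N+N) split is illustrative, not a vendor specification.
NUM_PSUS = 12
WATTS_PER_PSU = 3000

total_nameplate_kw = NUM_PSUS * WATTS_PER_PSU / 1000   # 36 kW installed
assumed_redundant_kw = total_nameplate_kw / 2           # 18 kW under 6+6

print(f"Installed PSU capacity:            {total_nameplate_kw:.0f} kW")
print(f"Capacity under assumed 6+6 setup:  {assumed_redundant_kw:.0f} kW")
```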
Target Markets
SuperX positions the XN9160-B300 for organizations tackling massive AI and HPC challenges, including:
- Hyperscale AI Factories: Running trillion-parameter foundation models and high-concurrency AI engines.
- Scientific Research & Simulation: Exascale computing, molecular modeling, and digital twin creation.
- Financial Services: Real-time risk modeling, high-frequency trading simulations, and large language model applications.
- Bioinformatics & Genomics: Accelerating genome sequencing, drug discovery, and protein structure prediction.
- Global Systems Modeling: Climate prediction, disaster modeling, and high-resolution meteorological simulations.
With the launch of the XN9160-B300, SuperX is staking a claim in the upper echelon of enterprise AI infrastructure, offering a system that is both high-performance and production-ready for organizations that demand the highest compute densities and memory capacities.