At a recent press conference, DataDirect Networks (DDN) announced Infinia 2.0, the latest version of its object storage system, designed specifically for artificial intelligence (AI) training and inference. DDN claims the system delivers up to 100 times AI data acceleration and a 10 times improvement in cloud data center cost efficiency, claims that have drawn attention across industries.
DDN's CEO and co-founder, Alex Bouzari, stated, "85 of the Global 500 companies are using DDN's data intelligence platform to run their AI and high-performance computing (HPC) applications. Infinia will help customers achieve faster model training and real-time insights in data analytics and AI frameworks while ensuring future adaptability in GPU efficiency and energy consumption."
Paul Bloch, DDN's co-founder and president, added, "Our platform is already deployed in some of the largest AI factories and cloud environments globally, demonstrating its capability to support critical AI operations." Notably, Elon Musk's xAI is among DDN's customers.
AI data storage sits at the core of Infinia 2.0's design. CTO Sven Oehme emphasized, "AI workloads require real-time data intelligence to eliminate bottlenecks, accelerate workflows, and scale seamlessly across complex model enumeration, pre-training and post-training, retrieval-augmented generation (RAG), agentic AI, and multimodal environments." Infinia 2.0 aims to maximize the value of AI while providing real-time data services, efficient multi-tenant management, intelligent automation, and a robust AI-native architecture.
The system features event-driven data movement, multi-tenancy, and a hardware-agnostic design, promising 99.999% uptime, up to 10 times always-on data reduction, fault-tolerant network erasure coding, and automated quality of service (QoS). Infinia 2.0 integrates with Nvidia's NeMo framework, NIM microservices, GPUs, BlueField-3 DPUs, and Spectrum-X networking to improve the efficiency of AI data pipelines.
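The announcement does not describe Infinia's event APIs or client SDKs, but since DDN positions Infinia against S3 (see the AWS S3 Express comparison below), the event-driven object workflow can be illustrated with generic S3-compatible tooling. The following Python sketch rests on that assumption; the endpoint URL, credentials, and bucket name are hypothetical placeholders, not DDN's actual interface.

import boto3

# Minimal sketch of object writes and prefix listing against a generic
# S3-compatible endpoint. All names below are hypothetical placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://infinia.example.internal:9000",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",                        # placeholder credential
    aws_secret_access_key="SECRET_KEY",                    # placeholder credential
)

BUCKET = "training-shards"  # hypothetical bucket

# Write a training shard; in an event-driven pipeline, this write would
# trigger downstream data movement or indexing rather than a polling loop.
s3.put_object(Bucket=BUCKET, Key="epoch-0/shard-0001.bin", Body=b"\x00" * 4096)

# List objects under a prefix -- object listing is one of the operations
# the article says DDN benchmarks.
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="epoch-0/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])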
DDN claims that Infinia delivers bandwidth measured in terabytes per second with sub-millisecond latency, significantly outperforming AWS S3 Express. Other metrics DDN cites, based on independent benchmarking, include 100-fold improvements in AI data acceleration, AI workload processing speed, metadata processing, and object listing, as well as a 25-fold increase in the speed of AI model training and inference queries.
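Sub-millisecond latency claims of this kind can be sanity-checked against any S3-compatible endpoint with a small-object GET micro-benchmark. The sketch below is a generic illustration, not DDN's benchmarking methodology; the endpoint, bucket, and key names are hypothetical.

import statistics
import time
import boto3

# Generic micro-benchmark for small-object GET latency on an S3-compatible
# endpoint. Not DDN's methodology; endpoint and bucket are placeholders.
s3 = boto3.client("s3", endpoint_url="https://infinia.example.internal:9000")
BUCKET, KEY = "bench", "probe/small-object.bin"

s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"x" * 4096)  # 4 KiB probe object

latencies_ms = []
for _ in range(100):
    start = time.perf_counter()
    s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
    latencies_ms.append((time.perf_counter() - start) * 1000.0)

latencies_ms.sort()
print(f"p50 GET latency: {statistics.median(latencies_ms):.3f} ms")
print(f"p99 GET latency: {latencies_ms[98]:.3f} ms")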
The Infinia system scales from terabytes to exabytes and can support more than 100,000 GPUs and one million simultaneous clients, providing a foundation for large-scale AI innovation. DDN says the system performs exceptionally well in real data center and cloud deployments, achieving unparalleled efficiency and cost savings at scales from 10 to more than 100,000 GPUs.
Charles Liang, CEO of Supermicro, stated, "By combining DDN's data intelligence platform Infinia 2.0 with Supermicro's high-end server solutions, the two companies have collaborated to build one of the world's largest AI data centers." This partnership may be related to the expansion of xAI's Colossus data center.