
Oracle announces multi-ZettaFLOPS scale AI computing

Oracle announced plans to deploy massive clusters of NVIDIA GPUs to achieve multi-ZettaFLOPS scale AI computing by 2025. The OCI Superclusters will use NVIDIA’s H100, H200, and upcoming Blackwell GPUs. The H100 GPU superclusters can scale up to 16,384 GPUs, delivering 65 ExaFLOPS of performance with 13 Pb/s network throughput.

The H200 GPU clusters, available later this year, will scale up to 65,536 GPUs, offering 260 ExaFLOPS and 52 Pb/s throughput. Oracle is now taking orders for its AI supercomputer in the cloud, featuring up to 131,072 NVIDIA Blackwell GPUs. This setup promises 2.4 ZettaFLOPS of peak performance.
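For a rough sense of what these totals imply per GPU, the quoted cluster figures can be divided by their GPU counts. The sketch below is back-of-the-envelope arithmetic and assumes the published numbers refer to peak low-precision (sparse) throughput rather than sustained performance.

```python
# Back-of-the-envelope check: implied peak throughput per GPU for each
# quoted OCI Supercluster configuration. Assumes the published cluster
# totals are peak low-precision (sparse) figures, not sustained performance.

clusters = {
    # name: (GPU count, quoted cluster peak in FLOPS)
    "H100 Supercluster": (16_384, 65e18),          # 65 ExaFLOPS
    "H200 Supercluster": (65_536, 260e18),         # 260 ExaFLOPS
    "Blackwell Supercluster": (131_072, 2.4e21),   # 2.4 ZettaFLOPS
}

for name, (gpu_count, cluster_peak_flops) in clusters.items():
    per_gpu_pflops = cluster_peak_flops / gpu_count / 1e15
    print(f"{name}: ~{per_gpu_pflops:.1f} PFLOPS per GPU")

# Output:
# H100 Supercluster: ~4.0 PFLOPS per GPU
# H200 Supercluster: ~4.0 PFLOPS per GPU
# Blackwell Supercluster: ~18.3 PFLOPS per GPU
```

The roughly 4 PFLOPS per Hopper GPU and roughly 18 PFLOPS per Blackwell GPU implied here are broadly in line with NVIDIA's published low-precision peak figures, which suggests the cluster totals are quoted at the lowest supported precision.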

The OCI Superclusters will use NVIDIA’s GB200 NVL72 liquid-cooled bare-metal instances.

Oracle’s AI infrastructure scaling plans

Within a single NVLink domain, 72 Blackwell GPUs will communicate at an aggregate bandwidth of 129.6 TB/s.
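Dividing that aggregate by the 72 GPUs in the domain gives the per-GPU NVLink bandwidth; the short check below assumes the bandwidth is shared evenly across the domain.

```python
# Quick check: per-GPU NVLink bandwidth implied by the quoted aggregate,
# assuming it is split evenly across the 72 Blackwell GPUs in one domain.

aggregate_bandwidth_tb_s = 129.6  # TB/s across the NVL72 NVLink domain
gpus_per_domain = 72

per_gpu_tb_s = aggregate_bandwidth_tb_s / gpus_per_domain
print(f"~{per_gpu_tb_s:.1f} TB/s NVLink bandwidth per GPU")  # ~1.8 TB/s
```

The resulting 1.8 TB/s per GPU matches NVIDIA's stated per-GPU bandwidth for fifth-generation NVLink.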

These computational demands will be supported by advanced networking technologies, including NVIDIA Quantum-2 InfiniBand, RoCEv2-based Ethernet, and ConnectX-8 SuperNICs. Oracle also discussed plans to power its data centers with new nuclear reactors, including small modular reactors. Executing on these plans will be key to scaling Oracle's AI infrastructure effectively.

The industry will be watching to see how Oracle overcomes the logistical and regulatory challenges ahead. Oracle's roadmap, combined with NVIDIA's GPU technology, sets the stage for a major leap in AI computing power. If delivered, this scale of performance would reset expectations for AI capabilities and infrastructure development.

Emily Parker is the dynamic force behind a groundbreaking startup poised to disrupt the industry. As the founder and CEO, Emily's innovative vision and entrepreneurial spirit drive her company's success.