Oracle introduces Code Assist, an AI-powered programming assistant for Java and OCI

Oracle has announced the beta release of Oracle Code Assist, an AI-powered programming assistant optimized for Java and for application development on Oracle Cloud Infrastructure (OCI). The tool is available through the Oracle Beta Program, and a key feature of the beta version is its optimization for Java.

This includes capabilities that assist in building new Java applications and updating legacy applications to improve both performance and security. Oracle has also revealed plans to deliver optimization for NetSuite SuiteScript within the next year, aiming to aid in the creation of NetSuite extensions and customizations. Oracle Code Assist can be deployed as a plugin for JetBrains’ IntelliJ IDEA IDE, where developers can receive AI-generated suggestions to facilitate application development across various modern programming languages.
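
To make the Java focus concrete, the snippet below shows the kind of modernization suggestion such an assistant typically aims for, replacing a manual accumulation loop with an equivalent streams pipeline. This is an illustrative sketch only, not output from Oracle Code Assist, and the class, record, and method names are invented for the example.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class OrderReport {

    record Order(String customer, double total) {}

    // Legacy style an assistant might flag: manual iteration with mutable accumulation.
    // static Map<String, Double> totalsByCustomer(List<Order> orders) {
    //     Map<String, Double> totals = new HashMap<>();
    //     for (Order o : orders) {
    //         Double current = totals.get(o.customer());
    //         totals.put(o.customer(), (current == null ? 0.0 : current) + o.total());
    //     }
    //     return totals;
    // }

    // Modernized equivalent an AI assistant might suggest instead.
    static Map<String, Double> totalsByCustomer(List<Order> orders) {
        return orders.stream()
                .collect(Collectors.groupingBy(Order::customer,
                        Collectors.summingDouble(Order::total)));
    }

    public static void main(String[] args) {
        var orders = List.of(
                new Order("acme", 120.0),
                new Order("acme", 30.0),
                new Order("globex", 75.5));
        // Prints e.g. {globex=75.5, acme=150.0} (map ordering is not guaranteed).
        System.out.println(totalsByCustomer(orders));
    }
}
```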

In addition to Oracle Code Assist, Oracle announced new features for its OCI Kubernetes Engine (OKE). These include support for Ubuntu Linux images, enhanced container security, logging analytics for OKE workloads, and comprehensive health checks for cluster nodes. The container security enhancements are meant to help developers quickly identify and address security issues down to the container level, while the new health checks help keep worker nodes healthy and up to date.

This strategic development underscores Oracle’s commitment to advancing its cloud infrastructure and artificial intelligence capabilities. Enterprises are seeking increasingly powerful computing to support their AI workloads and accelerate data processing; more efficient compute translates into better returns on AI investments in training and fine-tuning, as well as improved user experiences for AI inference.

At the Oracle CloudWorld conference today, Oracle Cloud Infrastructure (OCI) announced the first zettascale OCI Supercluster, accelerated by NVIDIA, to help enterprises train and deploy next-generation AI models using more than 100,000 of NVIDIA’s latest-generation GPUs. OCI Superclusters allow customers to choose from a wide range of NVIDIA GPUs and deploy them in various environments: on premises, public cloud, and sovereign cloud. Set for availability in the first half of next year, the Blackwell-based systems can scale up to 131,072 Blackwell GPUs with RoCEv2 networking, delivering an astounding 2.4 zettaflops of peak AI compute to the cloud.
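
As a rough sanity check on the headline number (assuming, as NVIDIA’s published Blackwell figures suggest, that the 2.4-zettaFLOPS peak refers to sparse, low-precision throughput):

\[
\frac{2.4 \times 10^{21}\ \text{FLOPS}}{131{,}072\ \text{GPUs}} \approx 1.8 \times 10^{16}\ \text{FLOPS} \approx 18\ \text{petaFLOPS per GPU,}
\]

which is roughly the sparse FP4 peak NVIDIA quotes for a single Blackwell GPU.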

At the show, Oracle also previewed liquid-cooled bare-metal instances to power large-scale training with Quantum-2 InfiniBand and real-time inference of trillion-parameter models within an extended 72-GPU domain, which can act as a single, massive GPU. OCI will offer new bare-metal instances connecting eight GPUs in a single instance via NVLink and NVLink Switch, and scaling to 65,536 H200 GPUs with NVIDIA ConnectX-7 NICs over RoCEv2 cluster networking. This capability allows customers to deliver real-time inference at scale and accelerate their training workloads.
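
To put that scale in perspective, at eight GPUs per bare-metal instance the quoted maximum works out to

\[
\frac{65{,}536\ \text{H200 GPUs}}{8\ \text{GPUs per instance}} = 8{,}192\ \text{bare-metal instances}
\]

joined over the RoCEv2 cluster network.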

OCI announced the general availability of GPU-accelerated instances for midrange AI workloads and visualization. Oracle’s edge offerings provide scalable AI at the edge, even in disconnected and remote locations. For example, smaller-scale deployments with Oracle’s Roving Edge Device v2 will now support enterprise-level AI processing.

Companies are using NVIDIA-powered OCI Superclusters to drive AI innovation. Foundation model startup Reka, for instance, is using the clusters to develop advanced multimodal AI models for enterprise agents. “Reka’s multimodal AI models, built with OCI and NVIDIA technology, empower next-generation enterprise agents that can read, see, hear, and speak to make sense of our complex world,” said Dani Yogatama, cofounder and CEO of Reka.

“With NVIDIA GPU-accelerated infrastructure, we can handle very large models and extensive contexts with ease, all while enabling dense and sparse training to scale efficiently.”

NVIDIA received the 2024 Oracle Technology Solution Partner Award in Innovation for its comprehensive approach to AI innovation. Oracle Autonomous Database is gaining NVIDIA GPU support for Oracle Machine Learning notebooks, allowing customers to accelerate their data processing workloads.

Oracle introduces AI-powered coding assistant

NVIDIA and Oracle are partnering to demonstrate several capabilities, including how NVIDIA GPUs can accelerate bulk vector embedding and vector graph index generation, and boost generative AI performance for text generation and translation use cases. The two companies are also collaborating to deliver infrastructure worldwide to meet the data residency needs of governments and enterprises. Brazil-based startup Wide Labs trained and deployed Amazonia IA, one of the first large language models for Brazilian Portuguese, using OCI’s Brazilian data centers to ensure data sovereignty.
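
For a sense of what those database vector workloads look like from the application side, the sketch below runs a similarity search against a VECTOR column over JDBC, a query pattern available in Oracle Database 23ai’s AI Vector Search. It is a minimal illustration under stated assumptions: the connection string, credentials, documents table, and embedding column are hypothetical, the query vector would normally come from an embedding model, and the Oracle JDBC driver (ojdbc) is assumed to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class VectorSearchSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; assumes an Oracle Database 23ai instance
        // with a DOCUMENTS table containing ID, TITLE, and an EMBEDDING column of type VECTOR.
        String url = "jdbc:oracle:thin:@//dbhost:1521/FREEPDB1";
        try (Connection conn = DriverManager.getConnection(url, "app_user", "app_password")) {
            String sql = """
                SELECT id, title
                FROM documents
                ORDER BY VECTOR_DISTANCE(embedding, TO_VECTOR(?), COSINE)
                FETCH FIRST 5 ROWS ONLY""";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                // A real query vector would come from an embedding model; this short
                // literal is used purely for illustration.
                ps.setString(1, "[0.12, 0.05, 0.88, 0.31]");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + "  " + rs.getString("title"));
                    }
                }
            }
        }
    }
}
```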

“Developing a sovereign LLM allows us to offer clients a service that processes their data within Brazilian borders, giving Amazônia a unique market position,” said Nelson Leoni, CEO of Wide Labs. In Japan, Nomura Research Institute is using OCI’s Alloy infrastructure to enhance its financial AI platform in line with local regulatory requirements. Communication and collaboration company Zoom will use NVIDIA GPUs in OCI’s Saudi Arabian data centers to support compliance with local data requirements.

Additionally, geospatial modeling company RSS-Hydro is demonstrating its flood-mapping platform, built on NVIDIA Omniverse, in Japan’s Kumamoto region. These customers are among numerous others around the world building and deploying domestic AI applications powered by NVIDIA and OCI, driving economic resilience through sovereign AI infrastructure. Enterprises can also accelerate task automation on OCI by deploying NVIDIA software together with OCI’s scalable cloud services, enabling quick adoption of generative AI and the building of agentic workflows for complex tasks.
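
As a small illustration of the kind of building block this involves, the sketch below sends a chat completion request from Java to a NIM-style, OpenAI-compatible endpoint of the sort an enterprise might deploy on an OCI GPU instance. The host, port, and model name are placeholders rather than values taken from the article, and the response is printed as raw JSON rather than parsed.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NimChatSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: NIM containers typically expose an OpenAI-compatible
        // API; the host, port, and model name below are assumptions for illustration.
        String endpoint = "http://my-oci-gpu-host:8000/v1/chat/completions";
        String body = """
            {
              "model": "meta/llama-3.1-8b-instruct",
              "messages": [{"role": "user", "content": "Summarize our open support tickets."}],
              "max_tokens": 200
            }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Raw JSON response; parse with a JSON library in real code.
        System.out.println(response.body());
    }
}
```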

Solutions such as NVIDIA cuOpt, NIM, and RAPIDS are available on the Oracle Cloud Marketplace.

Beyond software, the Oracle-NVIDIA collaboration extends to one of the most powerful AI computing infrastructures announced to date. The zettascale cluster, soon to be available through Oracle Cloud Infrastructure (OCI), promises unparalleled performance with up to 131,072 NVIDIA Blackwell GPUs.

The cluster is designed for AI training and high-performance computing (HPC), providing up to 2.4 zettaFLOPS of AI performance, a figure unprecedented in the current technological landscape. Customers can configure the new superclusters with either NVIDIA Hopper or Blackwell GPUs, alongside various networking options, including ultra-low-latency RoCEv2 with ConnectX-7 NICs and ConnectX-8 SuperNICs, or NVIDIA’s Quantum-2 InfiniBand-based networks. Storage options can likewise be tailored to performance needs.

The initial versions of the OCI Superclusters can support up to 16,384 GPUs, delivering peak performance of 65 exaFLOPS for FP8/INT8 workloads and 13 petabits per second (Pb/s) of network throughput. Clusters launching later this year will scale to 65,536 GPUs, offering 260 exaFLOPS of peak performance and 52 Pb/s of network throughput. The most advanced, Blackwell-powered clusters will feature up to 131,072 GPUs, peaking at 2.4 zettaFLOPS.
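
Those peak figures line up with NVIDIA’s quoted per-GPU numbers. As a rough check, assuming the 65- and 260-exaFLOPS figures refer to sparse FP8/INT8 throughput on H200 GPUs at roughly 4 petaFLOPS each:

\[
16{,}384 \times {\sim}4\ \text{PFLOPS} \approx 65\ \text{EFLOPS}, \qquad 65{,}536 \times {\sim}4\ \text{PFLOPS} \approx 260\ \text{EFLOPS}.
\]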

OCI’s upcoming supercomputing clusters far outstrip the capabilities of existing leading systems. For example, the B200-based OCI Superclusters have more than three times as many GPUs as the Frontier supercomputer, which currently uses 37,888 AMD Instinct MI250X GPUs. “We have one of the broadest AI infrastructure offerings and are supporting customers who are running some of the most demanding AI workloads in the cloud,” said Mahesh Thiagarajan, executive vice president at Oracle Cloud Infrastructure.

“With Oracle’s distributed cloud, customers have the flexibility to deploy cloud and AI services wherever they choose while preserving the highest levels of data and AI sovereignty.”

Several companies are already benefiting from OCI’s advanced infrastructure. WideLabs and Zoom are leveraging OCI’s high-performance AI capabilities to accelerate their AI development while maintaining sovereignty controls. “As businesses, researchers, and nations race to innovate using AI, access to powerful computing clusters and AI software is critical,” stated Ian Buck, vice president of Hyperscale and High Performance Computing at Nvidia.

“Nvidia’s full-stack AI computing platform on Oracle’s broadly distributed cloud will deliver AI compute capabilities at an unprecedented scale to advance AI efforts globally and help organizations accelerate research, development, and deployment.”

The upcoming OCI Superclusters will use Nvidia’s GB200 NVL72 liquid-cooled cabinets, equipped with 72 GPUs that communicate at an aggregate bandwidth of 129.6 TB/s in a single NVLink domain. Oracle has indicated that Nvidia’s Blackwell GPUs will be available in the first half of 2025, although the exact timeframe for fully loaded Blackwell-powered clusters remains to be clarified. This collaboration between Nvidia and Oracle marks a significant leap forward in AI computing, promising new possibilities for businesses and researchers worldwide.
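
The 129.6 TB/s figure follows directly from the per-GPU NVLink bandwidth, assuming the fifth-generation NVLink rate of 1.8 TB/s per GPU:

\[
72\ \text{GPUs} \times 1.8\ \text{TB/s} = 129.6\ \text{TB/s aggregate bandwidth in the NVLink domain.}
\]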
