Elon Musk’s artificial intelligence startup, xAI, is reportedly planning to build a supercomputer to power the next version of its chatbot, Grok.
According to a report in The Information, Musk told investors that the planned supercomputer would be four times the size of the biggest GPU clusters in operation today.
The proposed supercomputer is expected to be operational by the fall of 2025. To achieve this, xAI is considering a partnership with Oracle, a leading enterprise technology company.
Such a partnership could draw on Oracle’s cloud infrastructure and data center expertise to support the massive computing system. xAI has not commented on the plans, however, and Oracle did not respond to Reuters’ requests for comment on its potential involvement.
The system is expected to be built around Nvidia’s flagship H100 graphics processing units (GPUs), currently the industry standard for AI data center workloads. The H100 excels at intensive AI training but is in such high demand that the chips are difficult to procure in large quantities.
In a May presentation to investors, Musk said the new supercomputer would represent a significant leap beyond current capabilities: training the Grok 2 model took about 20,000 Nvidia H100 GPUs, while future models such as Grok 3 would require 100,000 H100 chips, five times as many.
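For a rough sense of the scale those figures imply, here is a minimal back-of-envelope sketch in Python. The GPU counts come from the report; the per-GPU power draw and throughput numbers are approximate public H100 (SXM) specifications, not figures from the article, and the calculation is purely illustrative.

```python
# Rough scale of the reported clusters. GPU counts are from the article;
# per-GPU power and throughput are approximate public H100 SXM specs,
# used here only for illustration.

GROK2_GPUS = 20_000    # H100s reportedly used to train Grok 2
GROK3_GPUS = 100_000   # H100s Musk says Grok 3 would require

H100_WATTS = 700       # approximate per-GPU power draw at full load
H100_TFLOPS = 1_000    # approximate dense BF16 tensor throughput

scale = GROK3_GPUS / GROK2_GPUS
gpu_power_mw = GROK3_GPUS * H100_WATTS / 1e6     # watts -> megawatts
peak_exaflops = GROK3_GPUS * H100_TFLOPS / 1e6   # TFLOPS -> EFLOPS

print(f"Grok 3 vs. Grok 2 cluster: {scale:.0f}x the GPUs")
print(f"GPU power draw alone: ~{gpu_power_mw:.0f} MW")
print(f"Aggregate peak BF16 compute: ~{peak_exaflops:.0f} EFLOPS")
```

Under these assumptions, the 100,000-GPU cluster would draw on the order of 70 megawatts for the GPUs alone, before accounting for networking, storage, and cooling, which helps explain why a partner with existing data center capacity would be attractive.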
Musk founded xAI in 2023 to challenge the dominance of AI powerhouses such as Microsoft-backed OpenAI and Google.
If built, the supercomputer would not only bolster xAI’s technological infrastructure but also position it as a formidable player in the AI market. The added computational power would allow the company to train more sophisticated models, potentially setting new benchmarks for AI performance.