Alphabet, the parent company of Google, has unveiled Trillium, its latest AI data center chip. CEO Sundar Pichai hailed the new product as a significant advance, running nearly five times faster than its predecessor.
During a briefing call with reporters, Pichai emphasized the exponential growth in demand for machine learning compute, which has surged by a factor of one million over the past six years.
He highlighted Google’s pioneering efforts in AI chip development, spanning over a decade, positioning the company to meet the evolving needs of the industry.
Alphabet’s venture into custom AI chips for data centers presents a notable alternative to Nvidia’s dominant processors in the market. While Nvidia currently commands approximately 80% of the AI data center chip market, Google’s tensor processing units (TPUs) have secured a significant share of the remaining 20%.
Google’s comprehensive approach, combining hardware and software, has enabled it to carve out a substantial presence in this competitive landscape.
The sixth-generation Trillium chip promises a remarkable 4.7 times improvement in computing performance compared to its predecessor, the TPU v5e.
Designed to power the technology behind text and media generation from large models, the Trillium processor also boasts a 67% increase in energy efficiency over the v5e. Google’s engineers achieved these performance enhancements by optimizing high-bandwidth memory capacity and overall bandwidth, addressing the bottleneck often encountered in AI model processing.
Google plans to make the new Trillium chip available to its cloud customers in “late 2024.” The chips are designed to be deployed in pods of 256, and these pods can in turn be networked together into clusters of hundreds of pods.
This scalable architecture ensures that Google’s cloud infrastructure can efficiently handle the growing demand for AI processing tasks.
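To put the pod figures above in perspective, a quick back-of-envelope sketch: the 256-chips-per-pod number comes from the announcement, while the pod counts used here are illustrative assumptions, not Google-published configurations.

```python
# Back-of-envelope sketch of Trillium pod scaling.
# CHIPS_PER_POD is from the announcement; the pod counts below
# are hypothetical examples, not official deployment sizes.

CHIPS_PER_POD = 256

def total_chips(num_pods: int) -> int:
    """Total chips in a deployment of `num_pods` pods."""
    return num_pods * CHIPS_PER_POD

print(total_chips(1))    # a single pod: 256 chips
print(total_chips(400))  # "hundreds of pods", e.g. 400: 102400 chips
```

At the scale of hundreds of pods, a deployment reaches into the hundreds of thousands of chips, which is what makes the per-chip energy-efficiency gain over the v5e significant in aggregate.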