NEW DELHI, Nov 8: US-based tech giant Google has launched ‘Ironwood’, its seventh-generation and most powerful Tensor Processing Unit (TPU), a Google blog post said.
‘Ironwood’ is purpose-built for the most demanding workloads like large-scale model training, complex reinforcement learning, and low-latency AI inference.
It offers a 10-fold peak-performance improvement over its predecessor, TPU v5p, and four times better performance per chip for training and inference workloads compared with TPU v6e (Trillium). These capabilities make Ironwood Google’s most powerful and energy-efficient custom silicon to date.
“Today’s frontier models, including Google’s Gemini, Veo, Imagen and Anthropic’s Claude train and serve on Tensor Processing Units (TPUs). For many organizations, the focus is shifting from training these models to powering useful, responsive interactions with them. Constantly shifting model architectures, the rise of agentic workflows, plus near-exponential growth in demand for compute, define this new age of inference,” the blog said.
“We have been preparing for this transition for some time, and today, we are announcing the availability of three new products built on custom silicon that deliver exceptional performance, lower costs, and enable new capabilities for inference and agentic workloads,” it added.
The blog also said that a new Arm-based Axion instance, ‘N4A’, Google’s most cost-effective N series virtual machine to date, is now available in preview.
N4A offers two times better price-performance than comparable current-generation x86-based VMs.
An ‘x86-based Virtual Machine’ refers to a virtualized computer that runs on a host processor using the x86 architecture, which is common in most personal computers.
(UNI)
