On Tuesday, Google announced its entry into custom silicon for its data centers, positioning itself alongside the other cloud giants, Amazon and Microsoft. The Axion line of processors marks Google’s first Arm-based CPU designed specifically for data center applications. According to Amin Vahdat, Google’s vice president and general manager for machine learning systems and cloud AI, Axion promises “industry-leading performance and energy efficiency” and is slated to be available to Google Cloud customers later this year.
The Axion processors are a fusion of Google’s silicon expertise and Arm’s highest-performing CPU cores. Google claims that Axion will offer up to 30% better performance than the fastest currently available general-purpose Arm-based instances in the cloud. Moreover, compared to current-generation x86-based instances, Axion is expected to deliver up to 50% better performance and up to 60% improved energy efficiency.
Bob O’Donnell, founder and chief analyst at Technalysis Research, explained that Google is the last of the big three cloud service providers to develop its own CPU designs. The move is driven by the desire for unique, optimized solutions that run software faster and with less power. Given the significant cost of data center power usage, Arm designs are viewed as more power-efficient alternatives to Intel’s x86 processors. Axion will not replace Intel, however, but rather provide a new option for workloads where it is the better fit.
The transition towards using Arm processors in data centers is seen as a blow to Intel’s historical dominance in the market with its x86 processors. Benjamin Lee, an engineering professor at the University of Pennsylvania, highlighted the efficiency gains from designing custom hardware components like Axion. These custom components can handle computations for security, networking, and hardware management more efficiently, freeing up CPU capacity for user and customer computations.
This shift towards Arm architectures in the server market signifies an ongoing transition away from x86, a move that is likely to benefit Arm more than Google in the long term, according to Rodolfo Rosini, co-founder and CEO of Vaire.
Moreover, Google’s Axion is part of a broader trend where companies like Apple and Tesla are designing their own chips, a move Gaurav Gupta of Gartner sees as aiming for better cost control, roadmap control, and IP synergies.
In addition to Axion, Google also announced the general availability of Cloud TPU v5p, described as its most powerful and scalable Tensor Processing Unit to date. Built for training large, demanding generative AI models, each TPU v5p pod contains thousands of chips and delivers significant per-chip improvements in FLOPS and high-bandwidth memory.
This development underscores Google’s commitment to accelerating machine learning workloads efficiently, optimizing both mobile and data center environments for AI applications. Abdullah Anwer Ahmed, founder of Serene Data Ops, pointed out that Google’s TPUs offer a cost-effective option for inference, which constitutes a significant portion of the total cost of running machine learning infrastructure in the cloud.
Finally, Google’s new Axion chips are also aimed at supporting the company’s sustainability goals. They offer a more energy-efficient option than x86 CPUs, reflecting the company’s commitment to reducing the substantial energy consumption of data centers and contributing to broader sustainability efforts.