Deep Integration: Google and Intel Expand AI Chip Partnership, Committing to Xeon Processors
Google has pledged to use multiple generations of Intel processors in its artificial intelligence data centers, a significant upgrade to the companies' existing partnership and a sign of a new phase in the AI infrastructure race, one in which the strategic value of CPUs is being reassessed.
According to a joint statement released on Thursday, Intel’s latest generation Xeon 6 processors will handle Google’s AI training and inference workloads, offering the long-embattled chip giant a new opportunity to compete for market share in an arena currently dominated by Nvidia. Amin Vahdat, Google’s Chief Technology Officer for AI Infrastructure, stated, “Intel’s Xeon roadmap gives us confidence to continually meet the ever-growing demands for performance and efficiency in our workloads.” Neither party disclosed the financial terms or the timeline of the agreement.
This collaboration comes at a time when CPUs are returning to the spotlight in the next stage of the AI race. In March, Nvidia’s Head of AI Infrastructure, Dion Harris, told CNBC that as agent workloads are shifting compute demand away from graphics processors, the CPU is “becoming the bottleneck.” For Intel, this shift could present a strategic window to reverse its declining fortunes.
CPUs Return to the Core of AI Infrastructure
The evolution of AI workloads is reshaping the chip demand structure for data centers. Nvidia’s Dion Harris pointed out that GPU-centric accelerated computing architectures are encountering bottlenecks when handling emerging agent workloads, making balanced system-level configuration ever more critical. Intel CEO Lip-Bu Tan also emphasized in a statement, “Scaling AI requires not only accelerators but also balanced systems.”
Google’s relationship with Intel dates back nearly three decades, to when Google first built its own server racks. The agreement to bring Intel Xeon 6 processors formally into Google’s AI workloads substantively deepens that long-standing relationship.
Intel Rebuilds Market Confidence on Multiple Fronts
Recently, Intel has been taking intensive action to reshape its positioning in the AI era. In August last year, the US government acquired a 10% stake in Intel, with the Trump administration framing the move as a strategic measure to support America’s domestic advanced chip manufacturing capability; soon after, Nvidia announced a $5 billion investment in Intel. Boosted by these investments, Intel’s share price has nearly tripled over the past year.
On the manufacturing side, Intel’s latest Xeon processors use its most advanced 18A process node and are produced at its Arizona fab, which opened last year. Despite Intel’s heavy investment in the foundry business, its own processors remain the new plant’s largest customer. Additionally, according to Lip-Bu Tan’s post on LinkedIn this week, Elon Musk has commissioned Intel to design, manufacture, and package custom chips for SpaceX, xAI, and Tesla, with the project to be carried out at its Terafab facility in Texas. However, financial details and timelines have not yet been disclosed.
IPU Collaboration Deepens, Google’s In-House Chip Strategy Advances in Parallel
Beyond CPU collaboration, Google and Intel on Thursday also reaffirmed their ongoing partnership in the Infrastructure Processing Unit (IPU) domain.
The two companies have been co-developing this programmable accelerator since 2022; it is designed to offload networking, storage, and security functions from the main CPU. According to Google’s statement to CNBC, the chip was an industry first when the partnership launched four years ago, aiming to maximize main-CPU processing power in traditional data centers by taking over so-called “overhead” tasks such as routing network traffic, managing storage, encrypting data, and running virtualization software.
Meanwhile, Google continues to advance its in-house chip strategy. Over the past decade, the company has steadily developed its proprietary AI accelerator, the Tensor Processing Unit (TPU), and in 2024 it released its self-developed Axion CPU based on the Arm architecture, bypassing the Intel-dominated x86 architecture in its core designs. The expanded collaboration with Intel is thus a strategic supplement within Google’s diversified chip portfolio, rather than a comprehensive bet on a single supplier.