Google’s Custom AI Chip Bet: The Marvell Deal and What It Means for Semiconductor Stocks

Alphabet’s Google is reportedly in advanced discussions with Marvell Technology to co-develop a new class of custom artificial intelligence chips purpose-built for inference workloads — the latest move in a multi-year hyperscaler effort to reduce dependence on Nvidia and reshape one of the most valuable corners of the semiconductor market.

The talks, which surfaced this week in analyst notes, would expand Google’s existing partnership with Marvell beyond networking hardware into dedicated AI accelerator silicon. If formalized, the deal could represent a significant revenue opportunity for Marvell and add fresh competitive pressure on Nvidia’s dominant position in the AI chip market.

Why Hyperscalers Are Building Their Own Chips

The economics are straightforward. A single Nvidia H100 GPU costs between $25,000 and $40,000, and training or running large AI models at scale requires thousands of them. For companies serving millions of inference requests a day (translating text, generating images, answering search queries), the bill is enormous. Custom silicon, designed for a specific task rather than for general-purpose computing, can cut energy consumption by a factor of three to five and reduce total cost of ownership dramatically.
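
To make that arithmetic concrete, here is a rough back-of-the-envelope cost sketch in Python. Every figure in it (unit prices, power draw, fleet size, electricity rate, depreciation horizon) is an illustrative assumption rather than a vendor-confirmed number, and it ignores networking, cooling, and facility costs entirely.

```python
# Back-of-the-envelope total cost of ownership (TCO) comparison.
# All figures are illustrative assumptions, not vendor specifications.

GPU_UNIT_COST = 30_000    # assumed mid-range H100 price, USD
GPU_POWER_KW = 0.70       # assumed average draw under load, kW
ASIC_UNIT_COST = 12_000   # hypothetical custom inference ASIC price, USD
ASIC_POWER_KW = 0.20      # hypothetical draw, roughly 3.5x below the GPU

FLEET_SIZE = 10_000       # accelerators deployed
PRICE_PER_KWH = 0.08      # assumed industrial electricity rate, USD
HOURS = 24 * 365 * 4      # assumed four-year depreciation horizon

def fleet_tco(unit_cost: float, power_kw: float) -> float:
    """Hardware capex plus energy opex over the horizon, in USD."""
    capex = unit_cost * FLEET_SIZE
    energy = power_kw * FLEET_SIZE * HOURS * PRICE_PER_KWH
    return capex + energy

gpu = fleet_tco(GPU_UNIT_COST, GPU_POWER_KW)
asic = fleet_tco(ASIC_UNIT_COST, ASIC_POWER_KW)
print(f"GPU fleet:    ${gpu / 1e6:,.0f}M")   # ~ $320M
print(f"Custom fleet: ${asic / 1e6:,.0f}M")  # ~ $126M
print(f"Savings:      {1 - asic / gpu:.0%}") # ~ 61%
```

Under those assumptions the custom fleet costs roughly 60% less over four years. The precise inputs matter far less than the shape of the gap, which is what makes multi-year custom silicon programs worth funding.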

Google pioneered the hyperscaler custom chip playbook. Its Tensor Processing Unit (TPU), running in Google’s data centers since 2015 and publicly revealed in 2016, predates the current AI boom by the better part of a decade. By 2024, Google had reached its sixth generation, the Trillium TPU, which the company says delivers 4.7x the peak compute performance per chip of its predecessor, and had begun offering TPU capacity to enterprise customers through Google Cloud.

But Google wasn’t alone for long. Amazon Web Services launched its Trainium and Inferentia chip lines for training and inference respectively. Microsoft debuted the Maia 100 AI accelerator in late 2023. Meta unveiled its MTIA chip for inference workloads across its platforms. The industry consensus has become clear: the cloud giants will no longer outsource their silicon strategy entirely to Nvidia.

Why Marvell, and Why Now

Marvell Technology has quietly become one of the most important names in the hyperscaler custom chip ecosystem. Unlike Nvidia, which sells standard merchant silicon to any buyer, Marvell has positioned itself as a dedicated custom silicon partner: it designs chips to a hyperscaler’s exact specifications and lets the customer own the resulting IP. Broadcom competes for the same custom ASIC business, but couples it with a much larger merchant portfolio.

The company already has a well-documented relationship with Google on data center networking. Marvell’s Octeon processors power a range of data center infrastructure applications, and the company has deepened its hyperscaler ties in recent years under CEO Matt Murphy, who has refocused Marvell’s portfolio away from legacy storage and toward cloud infrastructure.

At its most recent analyst day, Marvell projected that revenue from custom AI silicon (what it calls its “custom compute” business) would reach $2.5 billion in fiscal year 2026, up from essentially zero three years earlier. A Google deal for AI inference chips could materially accelerate that trajectory. Analysts at Bernstein and Morgan Stanley have both flagged Marvell’s custom compute pipeline as the core driver of the stock’s re-rating potential.

The Inference Race: Where the Real Volume Is

The Google-Marvell collaboration reportedly focuses on inference chips — the hardware that runs a trained AI model to generate responses — rather than training. This distinction matters enormously from an investment perspective.

Training a frontier AI model happens once, or a handful of times, at enormous compute cost. Inference happens billions of times daily, every time a user asks ChatGPT a question, searches Google, or gets a product recommendation. As AI gets embedded into core business workflows, inference demand is expected to grow exponentially. Morgan Stanley estimated in late 2025 that inference would account for more than 60% of total AI compute spend by 2027, up from roughly 40% at the end of 2024.
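
A quick sketch shows how lopsided that volume argument is. The numbers below are hypothetical, chosen only to illustrate the shape of the comparison, not measured figures for any real model or service.

```python
# Why inference compute overtakes training: one-off cost vs. daily volume.
# Every number here is a hypothetical assumption for illustration only.

TRAINING_RUN_GPU_HOURS = 5_000_000  # assumed one-off frontier training run
GPU_HOURS_PER_QUERY = 0.00002       # assumed inference cost per request
DAILY_QUERIES = 2_000_000_000       # assumed request volume at scale

daily_inference = DAILY_QUERIES * GPU_HOURS_PER_QUERY
breakeven_days = TRAINING_RUN_GPU_HOURS / daily_inference

print(f"Daily inference compute: {daily_inference:,.0f} GPU-hours")   # 40,000
print(f"Matches the full training run in ~{breakeven_days:.0f} days") # ~125
```

Under those assumptions, serving traffic burns through the equivalent of an entire frontier training run every few months, and keeps doing so indefinitely. That is why the inference side of the market is where custom silicon volume accrues.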

Custom inference silicon is particularly attractive because inference workloads follow more predictable computational patterns than training, which makes the chips easier to optimize at the silicon level. A chip purpose-built to run a specific model architecture can dramatically outperform a general-purpose GPU on both speed and energy efficiency.

What It Means for Marvell Stock (MRVL)

Marvell’s stock has had a volatile 2026, caught between enthusiasm for its custom AI chip business and concerns about broader semiconductor cyclicality. Shares rallied sharply from their 2025 lows, driven largely by multiple expansion as investors re-rated the custom compute business, before pulling back alongside the broader market during the spring selloff tied to Middle East geopolitical tensions.

A formalized Google partnership for AI inference chips would be a meaningful catalyst. Analysts have noted that hyperscaler custom chip contracts typically span multi-year commitments with recurring revenue characteristics, a profile very different from the lumpy, cyclical patterns that have historically plagued the semiconductor industry. Each new design win can represent years of committed production volume.

Marvell already has confirmed engagements with Google and Amazon for custom ASIC work; a new AI inference chip deal with Google would add another revenue stream. The company’s five cloud customers — Google, Amazon, Microsoft, Meta, and one unnamed hyperscaler — each represent significant multi-year pipeline opportunities. In that context, a confirmed deal could push Marvell’s custom compute revenue estimates meaningfully higher than current Street consensus.

Nvidia’s Position: Challenged but Not Threatened — Yet

It would be an overstatement to call any of this existential for Nvidia. The GPU maker’s Hopper and Blackwell platforms remain the dominant hardware for AI training, and its CUDA software ecosystem, built over nearly two decades, creates significant switching costs. No custom chip can replicate CUDA’s developer network overnight.

But Nvidia’s market share in inference is less secure. Inference workloads are more standardized, switching costs are lower, and the energy efficiency argument for custom silicon is strongest here. Nvidia’s data center revenue grew 142% year over year in fiscal 2025, but analysts have begun modeling gradual share erosion in inference over a three-to-five-year horizon as hyperscaler custom chips mature.

Broadcom (AVGO), Marvell’s chief rival in custom AI silicon, is also building out hyperscaler AI chip partnerships, most notably with Google (on the TPU project) and Meta. Competition for these multi-billion-dollar design wins is intensifying, which means Marvell’s ability to convert pipeline into confirmed deals will be a crucial differentiator for investors.

The Risks Worth Watching

Custom silicon deals carry meaningful execution risk. Design cycles for custom chips typically run 18 to 24 months from design kick-off to production tape-out, meaning revenue from a new Google inference engagement wouldn’t flow into Marvell’s financials until late 2027 at the earliest. Hyperscalers also retain the option to in-source silicon over time; Amazon, for instance, has progressively brought more of its chip development fully in-house, reducing its reliance on third-party design partners.

There is also concentration risk. If Marvell’s custom compute revenue becomes heavily reliant on two or three hyperscaler relationships, any shift in one partner’s silicon roadmap could have an outsized impact on results. Investors in the custom silicon space are effectively taking a position on hyperscalers’ continued appetite to outsource design work rather than build fully internal chip teams.

The Bigger Picture

Google’s reported talks with Marvell are a window into a structural shift that has been underway for several years: the unbundling of the AI chip supply chain. Rather than a monolithic Nvidia dominance across training and inference, the industry is moving toward a more fragmented market where hyperscalers mix general-purpose Nvidia GPUs with purpose-built custom silicon — and where specialized semiconductor partners like Marvell, Broadcom, and Cadence play increasingly central roles.

For investors, that shift creates a more nuanced opportunity set in semiconductors than the Nvidia-or-nothing framing of 2023. The custom silicon ecosystem — encompassing chip designers, EDA software firms, advanced packaging suppliers, and TSMC as the underlying foundry — is being gradually repriced to reflect that broader, more durable growth story.

Whether or not the Google-Marvell talks produce a formal chip deal in the near term, the direction of travel is clear: every major hyperscaler is betting that custom silicon will be a core competitive advantage in the AI era. The race for inference chip dominance has barely begun.

Disclosure: This article was produced with AI assistance and reviewed before publication. It is for informational purposes only and is not investment advice.