Meta signs multibillion-dollar deal to rent Google TPUs, challenging Nvidia's grip on AI compute
Meta leases Google TPUs in a multiyear deal worth billions. Combined with AMD and Nvidia pacts, Meta is reshaping AI chip supply dynamics.
TL;DR: Meta has signed a multiyear, multibillion-dollar agreement to lease Google's Tensor Processing Units (TPUs) for AI training and inference. The deal arrives days after Meta committed to millions of Nvidia Blackwell and Rubin GPUs and a separate $60 billion AMD partnership. Combined, these three agreements represent the most aggressive AI chip diversification strategy any company has ever pursued, and they could reshape who controls the AI hardware market.
The Information first reported on February 26, 2026, that Meta Platforms has signed a multiyear lease agreement with Google worth billions of dollars. The deal gives Meta access to Google's Tensor Processing Units for both training and running AI models. Neither company has confirmed a precise dollar figure or detailed contract terms.
What we do know. Meta will lease TPU capacity from Google Cloud, starting in 2026. Discussions are already underway for Meta to purchase TPUs outright starting in 2027, installing them in its own data centers for greater control over the hardware stack. That transition from leasing to owning matters. It signals Meta views Google's chips as a long-term infrastructure bet, not a stopgap while Nvidia supply catches up.
This is Google's first known large-scale TPU lease to an external AI company of Meta's size. Google has used TPUs internally for years, for search ranking, YouTube recommendations, and Gemini model training. Leasing them to a direct competitor is new territory.
The strategic awkwardness is obvious. Meta and Google compete in advertising, AI assistants, and social platforms. Now one is renting AI hardware from the other. That both agreed tells you how desperate compute demand has become.
The Google TPU deal is not happening in isolation. It is the third leg of a chip procurement strategy that Meta has assembled in the span of ten days.
Deal 1: Nvidia. On February 17, 2026, CNBC reported that Meta expanded its multiyear partnership with Nvidia to include millions of Blackwell and Rubin GPUs, plus Spectrum-X networking, Arm-based Grace CPUs, and collaboration on Vera CPUs. Nvidia did not reveal a dollar amount, but analysts estimate the deal is worth tens of billions. This covers both on-premises and cloud-based AI infrastructure.
Deal 2: AMD. On February 24, 2026, AMD and Meta announced a five-year agreement valued at up to $60 billion to deploy 6 gigawatts of AMD Instinct GPUs across Meta's data centers. Shipments of the first gigawatt begin in the second half of 2026, powered by a custom AMD Instinct MI450 chip designed specifically for Meta's Llama-5 and future Llama-6 models. The MI450 features 432GB of HBM4 memory and delivers nearly 40 PFLOPS of FP4 compute. As part of the deal, AMD issued Meta performance-based warrants for 160 million shares of AMD common stock at $0.01 per share, roughly 10% of the company.
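As a rough sanity check on that 10% figure, here is the warrant arithmetic. This is a sketch: the ~1.6 billion share count for AMD is an assumption, not something stated in the deal announcement.

```python
# Back-of-envelope check on the AMD warrant stake.
# Assumption (not from the announcement): AMD has ~1.6 billion shares outstanding.
warrant_shares = 160_000_000
shares_outstanding = 1_600_000_000  # approximate, for illustration only

stake = warrant_shares / shares_outstanding
cost = warrant_shares * 0.01  # $0.01 strike price per share

print(f"Implied stake: {stake:.1%}")   # Implied stake: 10.0%
print(f"Exercise cost: ${cost:,.0f}")  # Exercise cost: $1,600,000
```

At a $0.01 strike, the warrants are effectively a grant: Meta would pay $1.6 million for a stake worth tens of billions at current prices, which is why the deal is framed as performance-based.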
Deal 3: Google TPUs. The multibillion-dollar lease announced this week, covering current-generation TPUs with a path to outright ownership starting in 2027.
Here is how the three deals compare:
| Vendor | Deal value | Duration | Hardware | Delivery starts |
|---|---|---|---|---|
| Nvidia | Tens of billions (est.) | Multiyear | Blackwell + Rubin GPUs, Grace CPUs | 2026 |
| AMD | Up to $60B | 5 years | Custom MI450 Instinct GPUs | H2 2026 |
| Google | Multibillion (est.) | Multiyear | TPU v6 (Trillium), path to TPU v7 | 2026 |
No company in history has signed three chip supply agreements of this scale in a single month. Meta CEO Mark Zuckerberg has made it clear that AI is the company's future, and these deals are the infrastructure to back that claim.
"Meta Expands AI Chip Strategy with Google TPU Partnership Following Nvidia and AMD Deals" -- Blockonomi
For Google, the Meta deal is part of a broader strategy to turn TPUs into a revenue-generating product line, not just internal infrastructure.
Google has also signed an agreement with an unidentified large investment firm to fund a joint venture that would lease TPUs to external customers. Meta is an early adopter of this program, but it will not be the last. The joint venture model lets Google scale production without bearing 100% of the capital expenditure, while the investment partner gets exposure to the AI compute market.
Internally, Google Cloud executives have set an ambitious target. They want to capture up to 10% of Nvidia's annual revenue through TPU sales and leasing. Given that Nvidia posted $215.9 billion in FY26 revenue, that target represents roughly $20 billion in TPU-related revenue for Google. That would make TPU a standalone business rivaling many Fortune 500 companies.
Google's TPU roadmap supports this ambition. The upcoming TPU v7 (Ironwood) is purpose-built for inference, featuring 192GB HBM3e, 7.4 TB/s bandwidth, and approximately 2,300 peak BF16 TFLOPS. At 600W TDP with liquid cooling, it supports pods of 9,216 chips delivering 42.5 EFLOPS of FP8 compute.
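Those pod figures hang together arithmetically. A quick back-of-envelope check, assuming per-chip FP8 throughput is about double the quoted BF16 number (an assumption; only the BF16 figure is quoted above):

```python
# Sanity check on the quoted TPU v7 pod figures.
# Assumption: per-chip FP8 throughput ~= 2x the ~2,300 TFLOPS BF16 quoted above.
chips_per_pod = 9_216
bf16_tflops_per_chip = 2_300
fp8_tflops_per_chip = 2 * bf16_tflops_per_chip              # ~4,600 TFLOPS FP8
pod_fp8_eflops = chips_per_pod * fp8_tflops_per_chip / 1e6  # TFLOPS -> EFLOPS

pod_power_mw = chips_per_pod * 600 / 1e6  # 600W TDP per chip -> megawatts

print(f"Pod FP8 compute: {pod_fp8_eflops:.1f} EFLOPS")  # ~42.4, matching the quoted 42.5
print(f"Pod power draw: {pod_power_mw:.1f} MW")         # ~5.5 MW before cooling overhead
```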
Google plans to build millions of TPU v7 units in 2026 at TSMC. Shipments of the v7e variant are expected to approach 500,000 units. This is not a side project. It is a full-scale assault on Nvidia's dominance.
So are TPUs better than GPUs? The honest answer is that it depends on the workload. TPUs and GPUs have fundamentally different architectures, and each has strengths.
Where TPUs win. Google's chips excel at scale economics and energy efficiency. TPU v6e offers up to 4x better performance per dollar compared to Nvidia H100 for large language model training, recommendation systems, and large-batch inference. The power draw is significantly lower. TPU v6e runs at 300W TDP versus H100's 700W TDP. At data center scale, where electricity is a major cost, that difference compounds.
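To see how the power gap compounds, consider a rough annual electricity bill for a 10,000-chip fleet. This is a sketch: the fleet size, the $0.08/kWh rate, and the assumption of constant full-power draw with no cooling overhead are all illustrative.

```python
# Rough annual electricity cost: TPU v6e (300W) vs H100 (700W) at fleet scale.
# Illustrative assumptions: 10,000 chips, 24/7 full-power draw, $0.08/kWh,
# and no PUE/cooling overhead (real data centers add meaningfully on top).
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.08  # USD, assumed

def annual_power_cost(tdp_watts: float, chips: int = 10_000) -> float:
    kwh = tdp_watts / 1_000 * chips * HOURS_PER_YEAR
    return kwh * PRICE_PER_KWH

tpu_cost = annual_power_cost(300)   # ~$2.1M/year
h100_cost = annual_power_cost(700)  # ~$4.9M/year

print(f"TPU v6e fleet: ${tpu_cost:,.0f}/yr")
print(f"H100 fleet:    ${h100_cost:,.0f}/yr")
print(f"Difference:    ${h100_cost - tpu_cost:,.0f}/yr")  # ~$2.8M per 10k chips
```

Multiply that difference across hundreds of thousands of chips and multi-year lifetimes, and the power gap becomes a first-order line item rather than a rounding error.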
TPUs also win on cluster coherence. Google's pods connect thousands of chips with high-bandwidth interconnects designed as a unified system, rather than networking discrete GPUs together after the fact. For certain distributed training workloads, this architectural advantage matters.
Where Nvidia wins. Nvidia leads in raw performance, software ecosystem maturity, and versatility. CUDA, built over nearly two decades, supports virtually every AI framework in existence. According to Artificial Analysis benchmarks, Nvidia achieves approximately 5x better tokens-per-dollar than TPU v6e for inference, and 2x better than AMD's MI300X. Blackwell roughly doubles H100 performance on many benchmarks.
The real comparison for Meta. Meta does not need the best chip for every workload. It needs enough compute for all of them. Training Llama requires one compute profile. Running inference across billions of users on Facebook, Instagram, and WhatsApp requires another. By sourcing from three vendors, Meta matches each workload to the most cost-effective hardware, as the toy sketch after the comparison table below illustrates.
| Metric | Google TPU v6e | Nvidia H100 | Nvidia B200 |
|---|---|---|---|
| BF16/FP16 performance | ~920 TFLOPS per chip | ~1,979 TFLOPS | ~2x H100 |
| Memory | 32GB HBM per chip (256GB per 8-chip pod) | 80GB HBM3 | 192GB HBM3e |
| TDP | 300W per chip | 700W | 1,000W |
| Training cost advantage | Up to 4x/$ vs H100 | Baseline | Premium |
| Inference cost (tokens per dollar) | ~5x worse than H100 | Baseline | Better than H100 |
| Software ecosystem | JAX, TensorFlow | CUDA (universal) | CUDA (universal) |
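Here is a toy sketch of that sourcing logic. Every number is illustrative, loosely derived from the relative ratios in the table above, and the `pick_vendor` helper is invented for this example; nothing here reflects Meta's actual scheduler or real pricing.

```python
# Toy workload-to-vendor matching, loosely following the relative cost
# ratios in the table above. All numbers are illustrative, not real pricing.
WORKLOAD_COSTS = {
    # relative cost per unit of work (lower is better), per workload type
    "training":  {"google_tpu": 0.25, "nvidia": 1.0, "amd": 0.8},  # ~4x per-dollar TPU edge
    "inference": {"google_tpu": 5.0,  "nvidia": 1.0, "amd": 2.0},  # ~5x tokens/$ Nvidia edge
}

def pick_vendor(workload: str) -> str:
    """Return the cheapest vendor for a workload type (hypothetical helper)."""
    costs = WORKLOAD_COSTS[workload]
    return min(costs, key=costs.get)

for wl in ("training", "inference"):
    print(wl, "->", pick_vendor(wl))
# training -> google_tpu
# inference -> nvidia
```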
Meta committed $115 billion to $135 billion in capital expenditure for 2026. That nearly doubles the $72 billion the company spent in 2025. The company also announced $600 billion in total US infrastructure investment through 2028, including construction of 30 data centers and two massive AI facilities called Prometheus and Hyperion.
These are staggering numbers. For context, the entire global semiconductor industry is approaching a $1 trillion total addressable market in 2026. Meta alone is spending more than 10% of that total on AI infrastructure in a single year.
Where does all that money go? Chip deals account for a large portion. Data center construction, networking equipment, power infrastructure, and cooling systems make up the rest.
The question analysts keep asking is whether this spending can generate returns. Meta's core advertising business brought in over $160 billion in 2025. But $135 billion in a single year requires AI to deliver transformational value, not just incremental ad targeting improvements.
Zuckerberg has bet on agentic AI systems and AI-powered commerce justifying the investment. If he is right, Meta's infrastructure will be years ahead. If he is wrong, the company will have very expensive buildings full of depreciating hardware.
"Meta signs multi-billion-dollar deal to rent Google AI chips" -- Yahoo Finance
Let's be clear about what this deal is and what it is not.
It is not the end of Nvidia's dominance. Meta just signed a massive multiyear commitment with Nvidia for millions of Blackwell and Rubin GPUs. Nvidia reported $215.9 billion in FY26 revenue and guided $78 billion for Q1 FY27. The company's data center business alone generated $193.7 billion last year. No single TPU deal changes those numbers overnight.
But it is a structural shift in how the market works. Here is why.
For three years, the AI hardware market functioned as a near-monopoly. CUDA and GPU performance made Nvidia the only viable option. Companies did not diversify because there was nothing to diversify to.
That picture has changed. AMD has a $60 billion Meta commitment. Google is leasing TPUs externally for the first time. Nvidia is still the biggest vendor by far, but it is no longer the only option.
Google Cloud executives' target of capturing 10% of Nvidia's annual revenue, roughly $20 billion, is aggressive but not unrealistic if the joint venture scales. Even if they capture 5%, that represents a meaningful shift in market share that would pressure Nvidia's pricing power.
The real risk for Nvidia is not losing Meta's business. Meta is buying from all three vendors simultaneously. The risk is that other hyperscalers follow Meta's playbook and start diversifying too. If Microsoft, Amazon, and Oracle all begin leasing TPUs alongside their Nvidia purchases, the dynamics of the AI chip market change permanently.
"Soaring demand for AI chips pushes Meta toward Google TPUs in a deal that exposes real strain in global supply chains" -- TechRadar
Three factors explain the timing.
Supply constraints. Nvidia cannot build chips fast enough. Despite record production at TSMC, demand for Blackwell GPUs exceeds supply. Nvidia disclosed a backlog of $500 billion on Blackwell and Rubin processors. When your primary supplier has a half-trillion-dollar backlog, you look for alternatives. Meta needs compute today, not in 18 months when Nvidia catches up.
Pricing power. When one vendor controls 80%+ of a market, prices reflect that control. Nvidia's gross margins consistently run above 75%. By establishing AMD and Google as credible alternatives, Meta creates competitive tension. Even if Meta never moves a majority of workloads off Nvidia hardware, the existence of alternatives gives Meta negotiating power on pricing, allocation, and support terms.
Strategic risk. Concentration on a single hardware vendor creates operational risk. If Nvidia experiences production delays, yield issues, or supply chain disruptions (all of which have happened before), a company running 100% Nvidia infrastructure has no fallback. With three vendors, Meta can shift workloads dynamically based on availability. That operational flexibility is worth billions in risk reduction.
There is also a less-discussed factor. Meta has been developing internal AI accelerators (MTIA chips) for several years. The Nvidia, AMD, and Google deals may serve as a bridge to a future where Meta produces its own compute, following Apple, Google, and Amazon down the custom silicon path.
The Meta-Google deal is one data point in a larger trend. Four forces are working against hardware monopolies.
Falling training costs. DeepSeek reportedly trained V3 for about $5.5 million in compute. Better algorithms reduce the premium on having the absolute fastest chip.
Inference overtaking training. Training happens once. Inference happens billions of times. Inference workloads are more price-sensitive and more amenable to diverse hardware. Google's TPUs are strongest precisely in this area.
Software ecosystem fragmentation. JAX runs natively on TPUs. PyTorch supports multiple backends. MLPerf results show competitive performance across platforms. The CUDA moat is still deep, but it is no longer unswimmable.
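To make that portability concrete, here is a minimal, generic JAX sketch (not Meta's code; the `attention_scores` function is invented for illustration). The same jitted function compiles via XLA for whichever backend is present: TPU, GPU, or CPU, with no source changes.

```python
# A minimal sketch of backend-portable JAX: identical code runs on CPU,
# GPU, or TPU -- JAX dispatches to whatever accelerator is available.
import jax
import jax.numpy as jnp

@jax.jit  # compiled via XLA for the active backend (TPU, GPU, or CPU)
def attention_scores(q, k):
    # scaled dot-product scores, the core op of transformer inference
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]))

kq, kk = jax.random.split(jax.random.PRNGKey(0))
q = jax.random.normal(kq, (128, 64))
k = jax.random.normal(kk, (128, 64))

print(jax.devices())                  # e.g. [TpuDevice(...)] on a TPU VM
print(attention_scores(q, k).shape)   # (128, 128) on any backend
```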
Power as a binding constraint. Data centers are hitting limits on available electrical capacity. A chip delivering 80% of the performance at 40% of the power draw wins when the alternative is not building a data center at all.
All four trends favor a multi-vendor world. Meta's three-deal strategy is the first large-scale execution of this thesis.
Five things to watch over the next 12-18 months.
Meta's workload allocation. If Meta moves even 15-20% of training to TPUs and AMD, it validates both alternatives and encourages other hyperscalers to follow the same playbook.
Google's joint venture scaling. The unnamed investment firm backing Google's TPU leasing needs to commit capital to production. If Google delivers at scale, the 10% revenue target becomes realistic. If bottlenecks limit supply, the initiative stalls.
Nvidia's response. Vera Rubin arrives in 2027 with another major performance leap. Nvidia may also adjust pricing or deepen integration partnerships. Jensen Huang has navigated competitive threats before.
AMD's MI450 execution. Custom silicon has a history of delays. If AMD delivers the 432GB HBM4, 40 PFLOPS chip on schedule, it validates the $60 billion deal. If not, Meta falls back to Nvidia.
TPU v7 deployment. Ironwood is impressive on paper, but mass production at TSMC, with MediaTek reportedly handling customization work, adds complexity. Delays shift the value proposition back toward proven Nvidia hardware.
The AI chip market is entering its most competitive phase since GPUs first replaced CPUs for deep learning. Meta just made sure of that.
How much is the deal worth? Neither company disclosed a figure. Multiple outlets describe it as a "multibillion-dollar" multiyear lease. Given Meta's $115-135 billion 2026 capex, the TPU deal likely represents a single-digit percentage of total spending.
Which TPUs will Meta use? Meta will initially lease TPU v6e (Trillium) capacity through Google Cloud. If it purchases hardware outright starting in 2027, it would likely transition to TPU v7 (Ironwood), optimized for inference with 192GB HBM3e and 7.4 TB/s bandwidth.
Is Meta dropping Nvidia? No. Meta simultaneously signed a multiyear deal with Nvidia for millions of Blackwell and Rubin GPUs. The Google TPU agreement is additive to, not a replacement for, Meta's Nvidia partnership. Meta is diversifying supply, not switching vendors. Nvidia will remain Meta's largest chip supplier for the foreseeable future.
What is Google's TPU joint venture? Google has signed an agreement with an unidentified large investment firm to create a joint venture that leases TPUs to external customers. The structure allows Google to scale TPU production without bearing all capital costs. Meta is an early adopter, but Google intends to lease to other AI companies as well.
What did Meta agree to with AMD? Meta signed a five-year agreement valued at up to $60 billion with AMD for 6 gigawatts of Instinct GPU deployment. The centerpiece is a custom MI450 chip with 432GB HBM4 and 40 PFLOPS FP4 performance, designed specifically for Llama-5 and Llama-6 training. AMD also granted Meta warrants for 160 million shares at $0.01 per share, approximately a 10% stake.
Can Google really capture 10% of Nvidia's revenue? Google Cloud executives set this as an internal target. At Nvidia's $215.9 billion FY26 revenue, 10% means roughly $20 billion in TPU revenue. Aggressive, but within reach if the joint venture scales and Meta's adoption validates TPUs for more customers.
Why would Google lease chips to a direct competitor? AI compute demand is so intense that competitive dynamics take a back seat to operational necessity. Nvidia has a $500 billion backlog. Every major hyperscaler faces supply constraints. Meta needs compute capacity now, and Google has TPUs available. The financial incentives are large enough on both sides to override competitive concerns.
How much is Meta spending on AI infrastructure? Meta committed $115 billion to $135 billion in 2026 capital expenditure, nearly doubling the $72 billion spent in 2025. The company also announced $600 billion in cumulative US infrastructure investment through 2028, covering 30 data centers including the Prometheus and Hyperion AI facilities.
What does the deal mean for Nvidia? Nvidia's fundamentals remain strong, with $78 billion in Q1 FY27 guidance and gross margins above 75%. But the deal introduces a long-term narrative of margin compression. Analysts are watching whether other hyperscalers follow Meta's multi-vendor approach, which would pressure Nvidia's pricing power.
Will other hyperscalers follow Meta's playbook? Likely yes. Microsoft already uses custom Maia chips alongside Nvidia. Amazon has Trainium and Inferentia. If Meta proves TPU and AMD workloads perform at production scale, it removes the last barrier to broad multi-vendor adoption.