TL;DR: Meta has pushed its next flagship AI model, code-named Avocado, from its planned March 2026 launch to at least May after internal tests showed it trailing Google and OpenAI. More striking: Meta's AI division leaders have held discussions about temporarily licensing Google's Gemini to power Meta AI products while Avocado catches up — an extraordinary U-turn for a company that has staked its identity on open-source AI as the alternative to proprietary frontier models.
Table of contents
- What happened: the Avocado delay explained
- The performance gap: where Avocado actually sits
- The Gemini licensing bombshell
- The $135 billion question
- How the market reacted
- Meta's open-source narrative vs. the new reality
- Where Llama 4 Scout and Maverick fit
- The competitive landscape Meta is racing against
- What this means for Meta AI products and users
- Implications for the broader AI industry
What happened: the Avocado delay explained
On March 13, 2026, The New York Times reported that Meta has decided to push back the release of Avocado, its next-generation flagship AI model, to at least May 2026. The model had been expected to launch this month.
The delay follows internal benchmarking sessions in which Avocado was measured against models from Google, OpenAI, and Anthropic across key capability areas: reasoning, coding, writing, and multi-step problem solving. The results, according to sources familiar with the process, were disappointing. Avocado performed well against Meta's own prior generation models, but it did not reach the frontier tier that leadership had targeted.
Avocado is not just another incremental update. The model was designed to be the centerpiece of Meta's next phase of AI development — the model that would anchor Meta AI across WhatsApp, Instagram, Facebook, Messenger, and Ray-Ban smart glasses. It was supposed to be the answer to ChatGPT and Gemini, not another step behind them.
The decision to delay rather than ship a product that falls short of competitors shows that Mark Zuckerberg's AI leadership team would rather absorb the short-term embarrassment of a missed timeline than hand critics a clear, head-to-head comparison showing Meta's models are not at the frontier.
Sources say the new target is May, though no specific date has been confirmed publicly. The phrase "at least May" in multiple reports suggests internal timelines are still in flux.
The performance gap: where Avocado actually sits
The details emerging from internal tests paint a precise picture of Avocado's position in the competitive landscape.
According to reporting from Fortune and PYMNTS, Avocado's performance currently falls between Google's Gemini 2.5 and Gemini 3. That is a nuanced position: it is not a failure, but it is also not what Meta needed.
The critical context: Google released Gemini 3 in November 2025. A model that falls between Gemini 2.5 and Gemini 3 is, effectively, 3-5 months behind Google's frontier. For a company spending north of $100 billion on AI infrastructure in a single year, being 3-5 months behind on model quality is a serious problem — not because the gap is enormous, but because the market perception of who leads matters enormously for developer adoption, enterprise partnerships, and consumer trust.
Meta's previous flagship model, Llama 4 Maverick, was positioned against GPT-4o and Gemini 2.0 Flash. Maverick performed credibly against those targets. The problem is that those targets have moved considerably since Maverick launched in April 2025. Avocado was meant to leapfrog the new frontier. Instead, it has landed in the gap between the old and new Google frontier.
The specific failure modes cited in internal discussions include reasoning tasks involving multiple logical steps, coding tasks requiring real-world integration rather than isolated function completion, and long-context comprehension where Avocado's performance degraded more steeply than Gemini 3 as context length increased.
The Gemini licensing bombshell
The delay itself is news. What follows it is extraordinary.
According to multiple reports, including Fortune and Quartz, senior leaders within Meta's AI division have held discussions about temporarily licensing Google's Gemini technology to power Meta AI products while Avocado is brought up to competitive performance.
No final decision has been announced. These are discussions, not a signed agreement. But the fact that the conversation is happening at all represents one of the most striking strategic revelations in the AI industry since the field began its current acceleration phase.
Consider what it would mean in practice:
Meta AI, the assistant embedded inside WhatsApp, Instagram, Facebook, and Meta's Ray-Ban smart glasses, currently runs on Meta's own models. If Meta were to license Gemini, Google's technology would be powering conversations inside Meta's products, reaching well over three billion monthly active users across its apps.
Google would be providing the AI brain inside the social network that Google has tried and failed to compete with for twenty years (remember Google+?). Meta would be paying Google for the technology that was supposed to give Meta an independent competitive edge. And Meta's users, many of whom are also Google users, would be interacting with a system whose inner workings neither company has fully disclosed.
The strategic logic is understandable even if the optics are complicated. Meta cannot ship an inferior model for its AI assistant and expect users to keep using it over ChatGPT or Google's own Gemini products. If Avocado is not ready, and if the gap is large enough that users would notice, a temporary licensing arrangement preserves the product experience while the internal model catches up.
The irony is sharp. Meta has spent years arguing — loudly and publicly — that open-source models are better for the ecosystem, better for developers, and better for competition than Google and OpenAI's closed, proprietary systems. Mark Zuckerberg has made this argument personally in interviews, blog posts, and congressional testimony. Licensing Gemini would require Meta to run its user-facing AI on a closed, proprietary system owned by one of its primary tech industry rivals.
The $135 billion question
Meta outlined capital expenditure plans of between $115 billion and $135 billion for 2026 — a number so large that it has become a reference point in discussions of AI's infrastructure spending era. Most of that capital goes to data centers, chips, energy infrastructure, and compute capacity required to train and serve frontier AI models.
As PYMNTS reported, the Avocado delay puts that entire investment thesis under scrutiny.
The investment logic was always: Meta spends aggressively on compute and talent, builds frontier models, embeds those models across its three-billion-user platform, and captures AI's value through the world's largest distribution network. The models had to be competitive for the loop to close.
If the models are not competitive — if Avocado ships in May and benchmarks between Gemini 2.5 and Gemini 3 rather than ahead of Gemini 3 — then investors have to ask whether $135 billion in infrastructure spending is producing commensurate model quality, or whether Meta is building the world's most expensive second-place position in AI.
This is not a small question. For context, OpenAI reportedly spent around $9 billion on compute in 2024. Anthropic raised $7.3 billion through 2025. Google's AI compute investment is massive but distributed across a company with a broader product portfolio. Meta's $135 billion is almost entirely AI-dedicated. If that spending does not produce frontier model quality, the capital allocation question becomes very uncomfortable for the board and for shareholders.
Meta's counterargument — implicit in its strategy — is that model quality is not the only variable. Distribution matters more. Even a model that ranks third or fourth in benchmarks can win if it reaches three billion people daily. WhatsApp users who get an AI response in their messaging app will not consult benchmark leaderboards before using it. The strategy assumes that good enough plus massive distribution beats best-in-class plus limited reach.
The Gemini licensing discussion suggests that leadership may no longer believe "good enough" is good enough, at least for this generation.
How the market reacted
Meta shares dropped following the reports. According to Blockonomi and Benzinga, shares fell roughly 2% to 3%, trading near $618 in early Friday trading before recovering slightly.
The reaction was measured rather than panicked. Here is why the market did not crater:
- The delay is two months, not indefinite. May is close. Avocado is not being canceled.
- Meta's core business is strong. Advertising revenue, user growth, and operating margins are all performing well. The AI delay does not affect near-term financials.
- The Gemini discussion is just a discussion. No contract, no announcement, no confirmed deal.
- Meta has credibility on AI. Llama 4 Maverick and Llama 4 Scout were well-received. The market has not lost faith in Meta's AI program, only its current timeline.
That said, the stock movement reflects real concern. Meta has been one of the primary beneficiaries of the 2024-2026 AI valuation expansion. Shares climbed from under $200 in late 2022 to over $700 at various points in 2025. A significant portion of that increase reflects investor confidence in Meta's AI roadmap. News that the flagship model is underperforming targets for reasoning, coding, and long-context tasks seeds doubt about whether that confidence is warranted at current valuations.
Meta's open-source narrative vs. the new reality
The open-source story is central to understanding why the Gemini licensing discussion is so significant beyond the immediate product implications.
Meta has built an entire strategic narrative around open-source AI. The Llama models — Llama 2, Llama 3, Llama 4 — are available for anyone to download, fine-tune, and deploy commercially. Meta frames this as a public good and as a competitive differentiator. The argument: open-source commoditizes AI models, reduces the moat of closed players like OpenAI and Google, and positions Meta as a trustworthy steward of AI development rather than a gatekeeper.
Zuckerberg has made this point repeatedly. In a January 2025 post, he explicitly argued that open-source AI is better for the world, better for developers, and better for Meta because a widely distributed ecosystem makes Meta's platforms more central to how AI gets used.
Licensing Gemini would directly contradict this narrative for the most important product use case: the AI assistant that Meta's users interact with every day. The flagship consumer AI experience, the one that defines whether users choose Meta AI over ChatGPT, would run on a closed proprietary model from Google.
The practical compromise — license Gemini for consumer products while continuing open-source Llama for developer ecosystem — is possible. But it would be difficult to communicate without admitting that open-source models are not yet good enough for Meta's most demanding production use cases.
There is also a vendor risk dimension. Licensing Gemini creates dependency on Google at precisely the moment when AI model capability is one of the industry's most contested resources. If Google decides to change terms, raise prices, or restrict access — perhaps if competitive dynamics shift — Meta would have built user expectations around a model it no longer controls.
Where Llama 4 Scout and Maverick fit
Meta released Llama 4 Scout and Llama 4 Maverick in April 2025, and both models were genuinely well received by the developer community.
Llama 4 Scout is a 17-billion active parameter model with 16 experts using a mixture-of-experts (MoE) architecture. It runs on a single NVIDIA H100 GPU, offers an industry-leading 10-million-token context window, and outperforms Google's Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across most standard benchmarks.
Llama 4 Maverick uses the same 17-billion active parameter count but draws from 128 experts and 400 billion total parameters. It matched or beat GPT-4o and Gemini 2.0 Flash on multimodal benchmarks when it launched and delivered comparable results to DeepSeek V3 on reasoning and coding tasks.
Both models are natively multimodal. Both are open-weight, meaning developers can download and modify them. This is the genuine success of Meta's AI program — the Llama ecosystem has become one of the most deployed open-source model families in the world, embedded in everything from academic research to enterprise applications to open-source developer tools.
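The mixture-of-experts idea behind Scout and Maverick, in which only a small set of "active" experts from a much larger pool runs for each token, can be sketched in a few lines. This is a minimal, illustrative top-k routing sketch, not Meta's implementation; all function names, shapes, and the single-token framing are assumptions for the example.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route one token embedding x through the top-k of n experts.

    x: (d,) token embedding
    gate_w: (n_experts, d) router weights
    experts: list of callables, each mapping (d,) -> (d,)
    """
    logits = gate_w @ x
    top = np.argsort(logits)[-k:]          # indices of the k highest-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()               # softmax over the selected experts only
    # Only the selected experts execute, which is why "active" parameters
    # (e.g. 17B) can be far smaller than total parameters (e.g. 400B).
    return sum(w * experts[i](x) for i, w in zip(top, weights))
```

The design point the sketch illustrates: per-token compute scales with the k active experts, while total capacity scales with the full expert pool, which is how Maverick keeps 17 billion active parameters against 400 billion total.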
The problem is that "April 2025 frontier" is "mid-tier" in March 2026. The rate of capability improvement in AI means even excellent models age quickly. Llama 4 Maverick competes with where GPT-4o and Gemini 2.0 were, not where GPT-4.5, Gemini 3, and Claude 3.7 are. Avocado was supposed to close that gap. The internal test results suggest it hasn't fully closed it yet.
There are also unresolved questions about Avocado's release terms. Rumors circulating in early 2026 suggest that while Scout and Maverick remain open-weight, Avocado — like Meta's Behemoth model — may be released with more restrictive licensing. A semi-proprietary or closed Avocado would represent a significant shift from Meta's stated open-source commitment, creating friction with the developer community that has adopted Llama as an alternative to closed models.
The competitive landscape Meta is racing against
The AI model landscape in March 2026 is genuinely crowded at the frontier, and the pace of improvement has accelerated rather than plateaued.
Meta sits in a complicated position in this landscape. On model quality, it is currently trailing the frontier by a meaningful margin. On distribution, it has no peer — three billion monthly active users is more than OpenAI, Anthropic, Google's consumer AI products, and DeepSeek combined. On cost, open-source Llama models give Meta and third parties the ability to deploy AI cheaply at scale.
The distribution advantage should not be dismissed. Every time a WhatsApp user types a message and gets an AI-assisted response, every time an Instagram user uses AI to generate a caption or filter, Meta is building behavioral patterns that will be very difficult to dislodge. Users who learn AI through Meta's products may not be comparison-shopping against ChatGPT.
But the distribution advantage depends on the product experience being good enough that users don't actively seek alternatives. There is accumulating evidence that AI-native users — people who use AI daily for real tasks — do notice model quality differences. If Avocado ships in May and the user experience trails what ChatGPT or Gemini delivers, the distribution advantage starts eroding precisely among the heaviest users, who are also the most influential in shaping perception.
What this means for Meta AI products and users
For users of Meta AI across WhatsApp, Instagram, Facebook, and Ray-Ban smart glasses, the immediate impact of the Avocado delay is minimal. The current model powering Meta AI continues to operate. There is no degradation in service.
The medium-term question is more interesting. If Avocado ships in May as planned, users will see an upgrade to the AI assistant across Meta's platforms. Capabilities like extended reasoning, better coding assistance, improved multimodal understanding, and longer-context conversations should all improve.
If the Gemini licensing discussions move forward to an actual agreement, users would be interacting with a Google-powered model inside Meta products. This would likely not be disclosed prominently in product interfaces — licensing agreements rarely surface as user-facing labels — but it would represent a significant behind-the-scenes shift.
For Meta AI's business customers — brands and advertisers using Meta AI to generate creative assets, respond to customer inquiries, or build automated workflows — the delay matters more. Enterprise buyers are explicitly evaluating AI product quality against competitors. A delay signals that Meta's product roadmap is not as predictable as advertised, which creates risk in multi-year enterprise AI planning.
For developers in the Llama ecosystem, the delay has less direct impact. The Llama 4 Scout and Maverick models remain available and continue to receive community adoption. The uncertainty is whether Avocado, when it does ship, will be open-weight or more restricted — a question that Meta has not publicly resolved.
Implications for the broader AI industry
The Avocado delay and Gemini licensing discussion are significant beyond Meta's internal strategy. They illuminate dynamics shaping the entire AI industry.
First: the capability race has no finish line. Avocado may have been competitive six months ago when its architecture was locked. But the target moved. Google shipped Gemini 3 in November 2025. OpenAI shipped GPT-4.5. Anthropic shipped Claude 3.7. The frontier has shifted, and Avocado is now catching up to a target that moved mid-sprint. This will happen to every company, including Google and OpenAI. The question is how large the gap becomes and how quickly companies can close it.
Second: distribution without model quality is not a sustainable moat. Meta's distribution advantage is real, but if the quality gap between Meta AI and frontier competitors becomes perceptible to users, distribution starts working against Meta: more people experiencing an inferior product turns reach from an asset into a liability.
Third: the open-source strategy has real limits. Meta has built genuine value in the open-source AI ecosystem through the Llama series. But open-source development, particularly at the frontier, is expensive and slower to iterate than closed-source research because feedback loops from user deployment are more diffuse. The Gemini licensing discussion shows that even Meta's leadership questions whether open-source can keep pace with closed-source frontier development over short-term product windows.
Fourth: AI infrastructure spending may not directly translate to model quality. Meta's $135 billion budget is extraordinary. But throwing compute at a model does not automatically produce frontier quality — architecture design, training data curation, and RLHF methodology matter as much as raw compute. If the largest single AI compute budget in the world produces second-tier model quality, the relationship between spending and capability is more complex than capital markets have assumed.
Fifth: cross-company AI licensing could become normalized. If Meta licenses Gemini, it validates a model where AI capabilities flow across company boundaries through commercial agreements, separate from the product experiences users associate with each brand. Apple already does this with OpenAI through Apple Intelligence. The AI stack may increasingly look like the semiconductor stack — where companies compete on integration and product experience while relying on components from rivals.
20 frequently asked questions
What is Avocado?
Avocado is Meta's next-generation flagship AI model, designed to power Meta AI across WhatsApp, Instagram, Facebook, Messenger, and Meta's Ray-Ban smart glasses. It was intended to compete directly with frontier models from Google, OpenAI, and Anthropic at the time of its release.
Why did Meta delay Avocado?
Internal benchmark tests showed Avocado's performance trailing frontier models from Google, OpenAI, and Anthropic on reasoning, coding, and long-context tasks. Rather than release a model that falls short of competitors, Meta pushed the launch to at least May 2026 to allow additional development time.
When will Avocado be released?
The current target is May 2026, but sources describe it as "at least May," suggesting the timeline is not fully locked. No specific release date has been publicly confirmed.
How does Avocado compare to existing AI models?
According to reports, Avocado's performance currently sits between Google's Gemini 2.5 and Gemini 3. It outperforms Meta's own Llama 4 Maverick and Gemini 2.5, but trails Gemini 3, GPT-4.5, and Claude 3.7.
What is the Gemini licensing discussion about?
Meta's AI division has discussed temporarily licensing Gemini to power Meta AI products while Avocado finishes development. The logic is that this would prevent user-facing quality degradation in the interim — rather than shipping an inferior model or delaying the product experience, Meta could use Google's frontier model as a bridge.
Has a Gemini licensing deal been confirmed?
No. Meta has not confirmed or denied the licensing discussions. Reports describe internal conversations among AI division leaders, not a signed agreement or finalized deal.
How much is Meta spending on AI in 2026?
Meta has outlined capital expenditure plans of $115 billion to $135 billion for 2026, primarily for AI infrastructure including data centers, chips, and energy capacity.
How did Meta's stock react?
Meta shares dropped approximately 2-3% following reports of the delay, trading near $618. The reaction was measured, as analysts noted the delay is short-term and Meta's underlying advertising business remains strong.
What is the difference between Avocado and Llama 4?
Llama 4 (Scout and Maverick) are Meta's open-weight models, available for developers to download and deploy. Avocado is the next-generation flagship model designed for Meta's consumer AI products. It is expected to significantly outperform the Llama 4 series, though its release and licensing terms remain unclear.
Will Avocado be open-source like Llama?
This has not been confirmed. Reports suggest Avocado, like Meta's larger Behemoth model, may carry more restrictive licensing terms than the fully open-weight Llama 4 models, representing a possible shift in Meta's open-source AI strategy.
What is Llama 4 Maverick?
Llama 4 Maverick is Meta's best-performing open-weight model, released in April 2025. It uses a mixture-of-experts architecture with 17 billion active parameters drawn from 400 billion total parameters across 128 experts. At launch, it performed comparably to GPT-4o and outperformed Gemini 2.0 Flash on most benchmarks.
What is Llama 4 Scout?
Llama 4 Scout is a smaller, highly efficient open-weight model with 17 billion active parameters and 16 experts. It fits on a single NVIDIA H100 GPU, offers a 10-million-token context window, and outperforms Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 on standard benchmarks.
Why is the Gemini licensing idea so surprising?
Meta has spent years publicly championing open-source AI as an alternative to proprietary closed models from Google and OpenAI. Mark Zuckerberg has personally argued that open-source is better for competition and for the ecosystem. Licensing Google's proprietary Gemini model for Meta's core consumer AI experience would directly contradict that positioning.
What would a Gemini deal mean for Meta AI users?
Users would likely notice improved AI quality in Meta AI products, but the underlying technology would be Google's Gemini rather than Meta's own models. This would typically not be prominently disclosed in the user interface and would represent a temporary arrangement while Avocado development continues.
How does Meta AI compare to ChatGPT and Gemini?
Meta AI has a distribution advantage — it reaches users through WhatsApp, Instagram, Facebook, and Ray-Ban smart glasses, platforms with over three billion monthly active users. On raw model quality, Meta currently trails the frontier by a meaningful margin. The delay of Avocado extends that gap.
Which companies are at the AI frontier in March 2026?
Google (Gemini 3), OpenAI (GPT-4.5), and Anthropic (Claude 3.7) are broadly considered frontier providers. DeepSeek V3 offers comparable capability at dramatically lower cost for certain use cases. Meta's Avocado, when released, is expected to perform between Google's prior and current generation models.
Is Meta's AI spending producing commensurate results?
That is the central question raised by the Avocado delay. If the largest single-company AI infrastructure budget in the world produces second-tier model quality, the relationship between spending and capability is more complex than investors assumed. Meta's counterargument is that distribution, not pure model quality, is the decisive variable for AI product success.
What does the delay mean for developers building on Llama?
Developers who build on Meta's AI platform are watching for clarification on Avocado's release date and, critically, its licensing terms. The Llama 4 open-weight models remain unchanged. If Avocado launches with restricted licensing, it would create a bifurcation in Meta's AI ecosystem between the open-source community tools and the proprietary flagship.
Has any other major tech company licensed a competitor's AI model?
Apple is the clearest precedent. Apple Intelligence, launched in 2025, routes certain user queries to OpenAI's ChatGPT when Apple's on-device models cannot fully handle the request. Apple disclosed this arrangement prominently in product communications. It suggests cross-company AI licensing can work commercially and from a user trust perspective, provided the disclosure is handled transparently.
What happens if Avocado misses the May target?
If Avocado ships in May and remains significantly behind Gemini 3 and GPT-4.5, Meta faces harder choices: a second delay, an earlier-than-planned release with acknowledged performance gaps, or permanent dependency on licensed frontier models for consumer-facing products. The latter would represent a fundamental shift in Meta's AI strategy and would require a significant narrative change from Zuckerberg and Meta's AI leadership team.
Key takeaways
- Avocado, Meta's next flagship AI model, has been delayed from March to at least May 2026 after internal tests showed it trailing Google, OpenAI, and Anthropic on reasoning, coding, and long-context tasks.
- The model's current performance falls between Google Gemini 2.5 and Gemini 3 — competitive against Meta's own prior generation, but not at the frontier.
- Meta's AI division has discussed temporarily licensing Google's Gemini to power Meta AI products during the gap — a dramatic strategic reversal for a company built on open-source AI advocacy.
- No Gemini licensing deal has been confirmed. The discussions are internal, not announced.
- Meta's $115-135 billion AI capex plan for 2026 is under increased scrutiny as the flagship model misses its target launch window.
- META stock dropped 2-3% on the news, trading near $618, before stabilizing as analysts noted the core advertising business remains strong.
- The competitive landscape is unforgiving. Gemini 3 (November 2025), GPT-4.5, and Claude 3.7 set a bar that Avocado's May version must reach to reestablish Meta as a credible frontier player, not just a distribution powerhouse.
- The open-source Llama 4 ecosystem (Scout, Maverick) remains intact and widely adopted, but whether Avocado follows the same licensing model remains unresolved.
Sources: Fortune · PYMNTS · Quartz · TipRanks · Benzinga · Blockonomi