Google rolls out Canvas in AI Mode to every US user
Google launches Canvas inside AI Mode for all US Search users, enabling document creation, code generation, and tool use directly within search results.
Google dropped Canvas in AI Mode for all US users on March 4, 2026 — no Labs opt-in, no subscription gate, no waiting list. The move transforms Google Search from a results page into a creation surface: you can now draft documents, generate code, build interactive tools, and produce audio overviews without leaving the search interface. This is the broadest single expansion of AI Mode since it launched in March 2025, and it signals where the next phase of the search wars is heading.
TL;DR — key numbers:
- March 4, 2026: Canvas goes live for every US user searching in English, with no Labs opt-in and no subscription
- Roughly 8 months from Labs experiment (July 2025) to general availability
- Three creation modes: documents, code with live in-browser preview, and interactive tools
- 90.04% global search market share gives Google a distribution base no competitor matches; full ChatGPT Canvas access costs $20/month
Canvas is a persistent side panel that opens inside AI Mode when you select it from the tool menu — the plus (+) icon in the AI Mode interface. It does not replace the search results or the AI Mode conversation. It runs alongside them.
When you open Canvas and describe what you want to build, Gemini generates a working draft in the side panel. The draft pulls from real-time web data and Google's Knowledge Graph, so the output is grounded in current information rather than a static training snapshot.
The three core creation modes Canvas supports today:
| Mode | What it produces | Example use case |
|---|---|---|
| Documents | Structured prose: emails, reports, proposals, creative writing | Turn a research query into a formatted briefing doc |
| Code | HTML, CSS, JavaScript, and other languages; live browser preview | Build a functional calculator or data viz tool |
| Interactive tools | Executable mini-apps, quizzes, games, study guides | Convert class notes into a quiz with scoring logic |
The live preview is the key differentiator from a plain text output. When you generate code, you see the running output in the Canvas panel. You toggle between the preview and the underlying code, then refine via conversational follow-ups in the AI Mode chat. The loop is: prompt, preview, refine — all without switching tabs.
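To make that loop concrete, here is a minimal sketch of the kind of single-file artifact Canvas's code mode can produce: a self-contained tip calculator whose output would render live in the preview panel. This is illustrative, not captured Canvas output; the element IDs and default values are hypothetical.

```html
<!-- Hypothetical sketch of a Canvas-style single-file artifact:
     a tip calculator that recomputes on every keystroke. -->
<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Tip Calculator</title></head>
<body>
  <label>Bill amount: <input id="bill" type="number" value="50"></label>
  <label>Tip %: <input id="tip" type="number" value="18"></label>
  <p id="result"></p>
  <script>
    function update() {
      const bill = parseFloat(document.getElementById("bill").value) || 0;
      const tip  = parseFloat(document.getElementById("tip").value) || 0;
      const tipAmount = bill * tip / 100;
      document.getElementById("result").textContent =
        `Tip: $${tipAmount.toFixed(2)} | Total: $${(bill + tipAmount).toFixed(2)}`;
    }
    document.getElementById("bill").addEventListener("input", update);
    document.getElementById("tip").addEventListener("input", update);
    update(); // render the initial values on load
  </script>
</body>
</html>
```

A self-contained file with no build step and no external dependencies is exactly the shape that suits an in-panel preview: the browser can render it immediately, and each conversational refinement only has to regenerate one document.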
Canvas also accepts uploads. You can drop in class notes, a research paper, or a PDF, and Canvas converts it into a study guide, a web page, an audio overview, or a quiz. The itinerary-building mode added in November 2025 goes further — it pulls real-time flight data, hotel availability, and Google Maps location details into a structured travel plan inside the Canvas panel.
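For a sense of what the notes-to-quiz conversion might yield, here is a hypothetical sketch of a quiz artifact with scoring logic; the two hard-coded questions stand in for content Canvas would extract from the uploaded notes.

```html
<!-- Hypothetical sketch of a generated quiz artifact: multiple-choice
     questions rendered from data, with simple scoring on submit. -->
<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Study Quiz</title></head>
<body>
  <div id="quiz"></div>
  <button id="grade">Grade quiz</button>
  <p id="score"></p>
  <script>
    // In a real artifact this array would be derived from the uploaded notes.
    const questions = [
      { q: "AI Mode first launched in Google Labs in:", a: ["March 2025", "July 2025"], correct: 0 },
      { q: "Canvas code previews run:", a: ["Server-side", "In the browser"], correct: 1 },
    ];
    const quiz = document.getElementById("quiz");
    questions.forEach((item, i) => {
      const block = document.createElement("fieldset");
      block.innerHTML = `<legend>${item.q}</legend>` + item.a.map((opt, j) =>
        `<label><input type="radio" name="q${i}" value="${j}"> ${opt}</label><br>`
      ).join("");
      quiz.appendChild(block);
    });
    document.getElementById("grade").addEventListener("click", () => {
      const right = questions.reduce((sum, item, i) => {
        const picked = document.querySelector(`input[name="q${i}"]:checked`);
        return sum + (picked && Number(picked.value) === item.correct ? 1 : 0);
      }, 0);
      document.getElementById("score").textContent = `Score: ${right} / ${questions.length}`;
    });
  </script>
</body>
</html>
```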
Google did not rush this. Canvas followed a disciplined staged rollout that started as an invite-only Labs experiment and ended with a full public release. Understanding the timeline matters because it shows how Google is now handling major Search feature launches: test in Labs, instrument the data, expand in stages, then drop the gates.
| Date | Event |
|---|---|
| March 2025 | AI Mode launches in Google Labs; Google One AI Premium subscribers get first access |
| May 2025 | Google I/O: AI Mode expands to all US users without Labs sign-up; Gemini 2.5 powers AI Mode |
| June–July 2025 | AI Mode rolls out in India via Search Labs; full US availability confirmed by July |
| July 2025 | Canvas enters Labs as a desktop-only experiment for US users enrolled in AI Mode Labs |
| November 17, 2025 | Canvas gains itinerary-building mode with real-time Google Flights and Google Maps integration |
| December 1, 2025 | Google tests direct AI Mode entry from AI Overviews on mobile globally; "Ask Anything" button added |
| January 14, 2026 | Google details six Canvas travel planning use cases; image upload support added for destination discovery |
| January 27, 2026 | Gemini 3 becomes the default model for AI Overviews globally; seamless AI Mode handoff added |
| March 4, 2026 | Canvas in AI Mode goes live for all US users in English — no Labs opt-in required |
The Labs-to-general-availability window was roughly 8 months for Canvas specifically. That is fast for a feature that fundamentally changes what Search is. Google used that window to instrument real usage, catch edge cases on itinerary and coding outputs, and add the image upload and audio overview capabilities before going wide.
Canvas is not a standalone product. It is one surface in a stack of AI capabilities that Google has been assembling inside Search over the past 18 months.
The current AI Mode stack:
```
Google Search
└── AI Overviews (Gemini 3, global)
    └── AI Mode (conversational, US and India)
        ├── Deep Research
        ├── Personal Intelligence
        └── Canvas
            ├── Document drafting
            ├── Code + live preview
            └── Interactive tools
```
AI Overviews sits at the top of every relevant SERP. When you expand an AI Overview and want to go deeper, there is now a direct handoff into AI Mode via the "Ask Anything" button. Once inside AI Mode, Canvas is one tool among several — but it is the one that produces a persistent artifact you can save, share, or iterate on.
Gemini 3 underpins all of it. Google made Gemini 3 the default model for AI Overviews globally in January 2026, and the same model powers Canvas. This matters because Canvas outputs are grounded — they cite sources, pull real-time data, and produce content that reflects the current state of the web rather than a training cutoff.
The Knowledge Graph integration is underappreciated. When Canvas builds a study guide, an itinerary, or a research summary, it is drawing on Google's structured entity data — not just web pages. That gives Canvas outputs a factual consistency that pure retrieval-augmented generation approaches struggle to match.
The access path is straightforward: open Google Search, switch to AI Mode, and select Canvas from the tool menu (the plus icon). No Labs opt-in. No subscription.
Current constraints:
- US only and English only at launch
- Desktop-first; mobile support is not confirmed as feature-complete
- No direct export to Google Docs or Drive at launch, though copy-paste works
- Code generation is limited to browser-executable HTML, CSS, and JavaScript
The no-login-required aspect is significant. Unlike Gemini.google.com (which requires a Google account) or the Gemini app (which requires sign-in and optionally a subscription), Canvas in Search is available to anyone who lands on Google.com in the US. That is hundreds of millions of users gaining access immediately.
Canvas is not a new concept. OpenAI introduced ChatGPT Canvas in October 2024. Anthropic has had Claude Artifacts since mid-2024. Perplexity offers document-style outputs in its Spaces feature. Google is late to the named "canvas" pattern but arrives with a distribution advantage none of its competitors can match.
Side-by-side comparison:
| Feature | Google Canvas (AI Mode) | ChatGPT Canvas | Claude Artifacts | Perplexity |
|---|---|---|---|---|
| Activation | Manual via tool menu | Automatic based on query | Manual or automatic | Manual via Spaces |
| Distribution | All US Search users (free) | ChatGPT Plus ($20/mo) required for full access | Claude.ai free and paid tiers | Perplexity free and Pro |
| Live code preview | Yes, in-browser | Yes | Yes | Limited |
| Real-time web grounding | Yes (Google Search + Knowledge Graph) | Yes (with browsing) | Limited | Yes (core feature) |
| Google Docs export | Not confirmed at launch | No | No | No |
| Image upload | Yes | Yes | Yes | Yes |
| Audio overview | Yes | No | No | No |
| Itinerary building | Yes, with Flights + Maps data | No | No | Limited |
| Inline editing | Conversational follow-ups | Inline + conversational | Inline + conversational | Conversational |
The meaningful differences:
ChatGPT Canvas triggers automatically when OpenAI's model decides your query warrants a canvas. Google and Anthropic both require more explicit user initiation. This is a design philosophy split — automatic triggering reduces friction but can feel intrusive; explicit triggering gives users control.
Third-party comparisons generally favor Gemini Canvas on visually complex tasks; ChatGPT Canvas tends to perform better on pure text generation. For coding, both produce working output, but Google's live preview loop inside Search has an ergonomic advantage: you never leave the page.
The distribution gap is not subtle. ChatGPT's Canvas requires a $20/month Plus subscription for the best experience. Google's Canvas is free and available to anyone on Google.com in the US. At 90% global search market share, Google's install base for Canvas is orders of magnitude larger than OpenAI's or Anthropic's.
The context here is important. Google's 90.04% global search market share looks impenetrable. But the directional trend is not: that share was 92.58% in 2022. Perplexity grew 370% year-over-year in 2025 by positioning as an AI-first search engine rather than a chatbot with a search button. Microsoft Copilot — despite being baked into Windows and Office — sits at 1.2% AI chatbot market share, which tells you that distribution alone does not win.
What Google is doing with Canvas is collapsing the distance between search and creation. The previous model was: search on Google, find a result, go to that result, create something in a separate tool. Canvas makes Google the creation surface. The session stays inside Google. The data stays inside Google. The follow-up queries stay inside Google.
This is a direct response to the threat model that made ChatGPT worrying for Google in the first place: users going to ChatGPT for creation tasks they used to bounce between Google and other tools to accomplish.
The key competitive moves Canvas makes:
- Free, no-sign-in access at Google.com scale, against ChatGPT Canvas's $20/month gate for full access
- Keeping the session, the data, and the follow-up queries inside Google
- Real-time grounding through Google Search and the Knowledge Graph, which chatbot-native canvases struggle to match
- A persistent, shareable artifact as the end state of a search rather than a list of links
Canvas changes the calculus for anyone who builds on top of Google Search.
For SEO and content teams:
The concern with AI Overviews was always zero-click — users get the answer in the SERP and never click through. Canvas accelerates this dynamic for creation-intent queries. A user who previously searched "how to build a study guide from notes," clicked through to a guide, and used a third-party tool is now completing that task inside Google without leaving.
The queries most exposed to Canvas cannibalization:
- Creation-intent how-tos ("how to build a study guide from notes," "how to write a project proposal")
- Searches for simple tools and templates: calculators, quizzes, form tools
- Travel planning queries, which the itinerary mode now absorbs along with Flights and Maps data
- Convert-this-to-that tasks, such as turning a PDF or class notes into a study guide or audio overview
For developers:
Canvas can generate functional HTML/CSS/JavaScript in the browser. For simple prototypes, utility scripts, or client-facing tools, Canvas reduces the time-to-working-demo from hours to minutes. This does not replace professional development, but it does compress the gap between "I want to see if this works" and "I have something to show."
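As an illustration of that time-to-working-demo compression, here is a hypothetical dependency-free data visualization of the sort Canvas could generate; the sample data, dimensions, and color are invented for the example.

```html
<!-- Hypothetical prototype: a bar chart drawn on an HTML <canvas>
     element with no libraries, viewable instantly in a live preview. -->
<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Quick Bar Chart</title></head>
<body>
  <canvas id="chart" width="400" height="220"></canvas>
  <script>
    // Sample data; a real prototype would swap in live numbers.
    const data = [{ label: "Docs", value: 42 }, { label: "Code", value: 31 },
                  { label: "Tools", value: 27 }];
    const ctx = document.getElementById("chart").getContext("2d");
    const max = Math.max(...data.map(d => d.value));
    const barWidth = 80, gap = 40;
    data.forEach((d, i) => {
      const h = (d.value / max) * 160;                 // scale bar to chart area
      const x = gap + i * (barWidth + gap);
      ctx.fillStyle = "#4285f4";
      ctx.fillRect(x, 180 - h, barWidth, h);           // bar, baseline at y=180
      ctx.fillStyle = "#000";
      ctx.fillText(`${d.label} (${d.value})`, x, 200); // label under the bar
    });
  </script>
</body>
</html>
```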
The audio overview feature is worth flagging for content producers. Canvas can convert a research report or document into a spoken audio summary. This is a lightweight podcast-like format without any production tooling. For teams that want to repurpose written content into audio, Canvas offers a zero-setup path.
Google will not stop here. The March 4 rollout was the US-English baseline. Based on the pattern from AI Mode and AI Overviews, here is what the next phase likely looks like:
Short term (March–April 2026):
- A mobile-optimized Canvas experience, following Google's usual desktop-first rollout pattern
- Direct save-to-Drive or export-to-Docs, given Google's Workspace integration history
Medium term (Q2 2026):
- Expansion beyond the US, with India the most likely first stop given AI Mode's 2025 Search Labs launch there
- Additional languages, paced by model performance in those languages and content moderation requirements
The metric to watch: Whether Canvas creates a measurable increase in time-on-SERP for creation-intent queries. If it does, it validates the thesis that Google can own the creation layer. If it does not — if users still jump to ChatGPT or Notion or VS Code for serious work — that tells you something important about the ceiling of search-embedded creation tools.
Canvas in AI Mode going fully public in the US is not a minor feature update. It is Google's clearest statement yet that the search results page is no longer the end state — it is the starting point for a creation session. Gemini 3 provides the model quality. The Knowledge Graph provides the factual grounding. The zero-subscription access provides the distribution. The live code preview provides the feedback loop that makes creation inside a browser tab feel functional rather than gimmicky.
The competitive moat Google is building is not technical superiority. ChatGPT Canvas and Claude Artifacts are both capable tools. The moat is where users already start. A billion+ daily searches means Canvas has an addressable audience that no AI startup can replicate without Google-scale distribution. The question is whether creation inside a search session is genuinely better than using a purpose-built tool — or whether users treat it as a convenient starting point before moving to a real environment.
For now, the answer is: it depends on the task. Simple documents, basic interactive tools, study guides, and travel itineraries are fully in Canvas's wheelhouse. Complex codebases, collaborative documents, and professional-grade content still belong in dedicated tools. But that boundary will move. Google has 8 months of Labs data on how users actually use Canvas, and it will optimize aggressively for the use cases that stick.
If you have not tried Canvas yet, open Google Search, switch to AI Mode, and hit the plus icon. The gap between a search query and a working document or tool is now measured in seconds, not minutes. That is a real change worth taking seriously.
Does Canvas in AI Mode require a paid Google subscription?
No. As of March 4, 2026, Canvas is available to all US users searching in English at no cost. You do not need Google One AI Premium, a Workspace subscription, or any other paid tier. Any user on Google.com in the US can access Canvas by switching to AI Mode and selecting Canvas from the tool menu.
How is Canvas different from AI Overviews?
AI Overviews are automatic summaries that appear at the top of search results for relevant queries. They are read-only — you consume the overview, you do not create inside it. Canvas is a creation tool that lives inside AI Mode. You open it intentionally when you want to build something: a document, a code snippet, an interactive tool. AI Overviews and Canvas are complementary; you can use an AI Overview as a starting point and then jump into AI Mode to create an artifact based on it.
Can I save or export what I create in Canvas?
At launch, Canvas does not have a direct export-to-Google-Docs button, but copy-paste works. Given Google's Workspace integration history, a direct save-to-Drive or export-to-Docs feature is likely in the roadmap. The audio overview outputs can be played back in the interface but export options are limited at this stage.
Does Canvas work on mobile?
The March 4 rollout prioritized desktop browsers. Mobile support is not fully confirmed as feature-complete at initial launch. Based on Google's standard rollout pattern (desktop first, then mobile), expect a mobile-optimized Canvas experience within a few months.
How does Canvas handle accuracy? Can it hallucinate?
Canvas outputs are grounded in real-time web data and Google's Knowledge Graph, which significantly reduces the hallucination risk compared to models generating from training data alone. However, no AI system is immune to errors. For factual documents, Canvas will typically cite sources in the output. For code, the live preview lets you verify functionality directly. For important professional or legal documents, human review remains essential.
Is Canvas available outside the US?
Not at the general launch. US English users are the initial target. India is the most likely next expansion given that AI Mode launched there via Search Labs in mid-2025. Other regions will follow, with timeline dependent on language model performance in local languages and content moderation requirements.
How does this compare to using Gemini directly at gemini.google.com?
The Gemini app and Gemini.google.com require a Google account sign-in and offer a broader range of model interactions. Canvas in AI Mode is specifically optimized for the search-and-create workflow — it is anchored to a search query, grounded in web data, and designed to produce artifacts that answer a specific information or creation need. Gemini.google.com is better for open-ended conversations and tasks that do not start from a search intent. Canvas is better when you already know what you are looking for and want to convert that knowledge into a document or tool.
What kinds of code can Canvas generate?
Canvas currently supports HTML, CSS, and JavaScript, which covers the full range of browser-executable interactive tools: calculators, data visualizations, quizzes, games, form tools, and simple utility apps. The live preview runs directly in the Canvas panel, so you see the output immediately. More complex programming environments (Python, backend code, database-connected apps) are outside Canvas's current scope — those use cases still belong in purpose-built development environments.