Google Flow becomes a full AI creative studio with Nano Banana and Veo 3.1 under one roof
Google merges Nano Banana, Veo 3.1, Whisk, and ImageFX into Flow. Free image generation, pro video tools, and 1.5B creations.
TL;DR: Google just consolidated its scattered AI creative tools into a single platform called Flow. Nano Banana handles image generation (for free), Veo 3.1 handles video with native audio, and Whisk plus ImageFX are being absorbed into the same workspace starting March 2026. Users have already created over 1.5 billion images and videos on the platform. This is Google's clearest move yet to compete directly with Adobe Firefly, Runway, and OpenAI's Sora.
On February 25, 2026, Google announced a major overhaul of Flow, its AI creative studio that originally launched as a video-only generation tool under Google Labs. The update transforms Flow from a single-purpose video generator into a unified workspace that handles text-to-image, image editing, video generation, video editing, and asset management in one interface.
The key changes are structural. Previously, Google had separate tools doing separate things. ImageFX handled text-to-image generation. Whisk handled image remixing and style transfer. Flow handled video. Nano Banana was the underlying image model powering some of these tools, while Veo powered video generation. They all lived in different places, had different interfaces, and required you to export and re-import assets constantly.
That era is over. Flow now integrates Nano Banana directly into its core experience, meaning you can generate an image, refine it, and immediately feed it into Veo for video generation without ever leaving the app. The asset grid, the biggest UI change in this update, lets you search, filter, sort, and group all your images and videos into collections.
"We've heard from our community that managing creative projects across multiple tools can be a pain point. Flow now brings everything together." -- @GoogleLabs
Google also announced that Whisk and ImageFX will be folded into Flow starting in early March 2026, with automatic project transfers for existing users. If you have been using those tools, your work follows you into the new platform.
The numbers tell the story of adoption. Flow users have created over 1.5 billion images and videos since launch. That is not a typo. 1.5 billion. For a Google Labs project, that signals the kind of usage that justifies a full product investment rather than an experiment.
Nano Banana 2, which Google also announced on February 26, 2026, is the image generation engine at the heart of the new Flow. It is built on the Gemini 3.1 Flash architecture, which means it combines the visual quality of Nano Banana Pro with the speed of Gemini Flash. The practical result: high-fidelity images generated fast enough that the creative loop does not break.
The specs matter. Nano Banana 2 supports resolutions from 512px all the way up to 4K. It handles multiple aspect ratios natively. It can maintain character consistency for up to five characters and object fidelity for up to 14 objects in a single workflow. That character consistency feature is critical for anyone trying to build visual narratives, storyboards, or brand content where the same person or product needs to appear across multiple frames.
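Those limits are concrete enough to sanity-check a request before spending a generation. The sketch below is a hypothetical helper, not an official API: the function name and request shape are illustrative, and only the numeric limits (512px to 4K, five characters, 14 objects) come from the announcement.

```python
# Hypothetical pre-flight check against the limits Google states for
# Nano Banana 2: 512px-4K resolution, up to 5 consistent characters,
# up to 14 tracked objects. Illustrative only; not an official SDK.

MIN_RES, MAX_RES = 512, 4096          # stated resolution range in pixels
MAX_CHARACTERS, MAX_OBJECTS = 5, 14   # stated consistency limits

def validate_request(width: int, height: int,
                     characters: int = 0, objects: int = 0) -> list[str]:
    """Return a list of problems; an empty list means the request fits."""
    problems = []
    for label, dim in (("width", width), ("height", height)):
        if not MIN_RES <= dim <= MAX_RES:
            problems.append(f"{label} {dim}px outside {MIN_RES}-{MAX_RES}px")
    if characters > MAX_CHARACTERS:
        problems.append(f"{characters} characters exceeds limit of {MAX_CHARACTERS}")
    if objects > MAX_OBJECTS:
        problems.append(f"{objects} objects exceeds limit of {MAX_OBJECTS}")
    return problems
```

A 1080p storyboard frame with three recurring characters passes; a 256px thumbnail or a scene with six characters would be flagged before you burn a generation on it.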
Text rendering has improved substantially. Previous AI image models struggled with legible text in generated images. Nano Banana 2 generates accurate, readable text, which opens up marketing mockups, social media graphics, greeting cards, and infographic creation as viable use cases.
One detail worth noting: Nano Banana 2 pulls from Gemini's real-world knowledge base, including real-time information and images from web search. If you ask it to generate an image of a specific public figure, landmark, or product, it can reference actual visual data rather than hallucinating an approximation. Google is also applying SynthID watermarks to all generated images and supporting C2PA Content Credentials for provenance tracking.
Inside Flow, the integration is seamless. You generate images with Nano Banana 2, tweak them using the new editing tools, and then pass them directly to Veo as "ingredients" for video generation. No exporting. No re-uploading. The image becomes a starting frame, a style reference, or a character sheet for your video project.
Image generation in Flow is free. That is a significant competitive move. Midjourney charges for every image. DALL-E charges credits. Adobe Firefly gates usage behind subscription tiers. Google is betting that free image generation draws users into the ecosystem, where they will eventually upgrade to paid tiers for video generation and advanced features.
Veo 3.1 is the video generation model powering Flow's video capabilities, and it is a meaningful step up from Veo 2. The model generates videos at 720p or 1080p at 24 frames per second, with upscaling available to 4K for production-quality output. Videos can be 4, 6, or 8 seconds long in either 16:9 or 9:16 aspect ratios.
Eight seconds might sound short. It is not. The clip extension feature lets you chain segments together, generating what comes next in a scene. Google says you can build videos lasting a minute or more through successive extensions, with the model maintaining visual coherence between clips. That is the same approach Runway uses with its Gen-3 Alpha, but the native integration with Flow's asset pipeline makes the workflow more fluid.
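The arithmetic of chaining is simple but worth making explicit. The sketch below assumes each extension pass adds another full 8-second segment; Google's announcement does not specify the per-extension length, so that constant is an assumption.

```python
import math

BASE_SECONDS = 8        # longest single Veo 3.1 generation, per the announcement
EXTENSION_SECONDS = 8   # assumption: each extension adds another 8s segment

def extensions_needed(target_seconds: float) -> int:
    """How many extension passes, after the initial clip, reach the target length."""
    if target_seconds <= BASE_SECONDS:
        return 0
    return math.ceil((target_seconds - BASE_SECONDS) / EXTENSION_SECONDS)
```

Under those assumptions, a one-minute sequence takes the initial clip plus seven extensions, which is also seven opportunities for visual drift. That is why per-extension coherence, not raw clip length, is the number that matters here.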
The standout feature of Veo 3.1 is native audio generation. This is not a bolted-on text-to-speech layer. The model generates three types of audio simultaneously with the video:

- **Dialogue** for characters in the scene
- **Sound effects** matched to on-screen action
- **Ambient sound** for the environment
This puts Veo 3.1 ahead of most competitors on the audio front. Runway's Gen-3 Alpha does not generate audio natively. Sora's audio capabilities are still limited. Adobe Firefly's video features do not include audio generation at all. Having audio baked into the generation pipeline eliminates the need for a separate audio editing step, which is a real time-saver for short-form content creators.
The "Frames to Video" feature is another differentiator. You provide two images, a starting frame and an ending frame, and Veo 3.1 generates the transition between them with audio. This is useful for product reveals, scene transitions, and narrative sequences where you want precise control over the beginning and end states.
Flow's editing tools received a significant upgrade. Three features stand out.
**Lasso tool.** Select any region of an image using a freehand lasso, then apply conversational prompts to just that area. Select a person and say "remove the man." Select a pond and say "add koi fish." Select a sky and say "make it sunset." This is similar to what Adobe has been doing with generative fill in Photoshop, but it lives inside the same platform as your video generation pipeline. No round-tripping.
**Clip extension.** Select any generated video clip and tell Flow to generate what happens next. The model maintains visual continuity, keeping characters, environments, lighting, and style consistent across the extension. You can keep extending repeatedly to build longer sequences. The practical ceiling depends on how well coherence holds over many extensions, but Google's demos show plausible results across 30-to-60-second sequences.
**Camera controls.** You can direct shots with specific camera movements: pans, tilts, zooms, dollies, tracking shots. This is the kind of creative control that separates a "cool AI demo" from a useful production tool. If you are building a product video and need a slow dolly-in on a hero product, you can specify that. If you need a quick whip-pan for an action sequence, you can specify that too.
These three features together create a workflow that used to require multiple specialized tools. Image generation in one app, inpainting in another, video generation in a third, and then video editing in a fourth. Flow consolidates that stack.
"The lasso tool alone is worth the switch. Being able to select and repaint a section of an image, then immediately generate a video from it, in the same app, that's a real workflow improvement." -- @MattVidPro
Whisk and ImageFX are not being shut down overnight. Google is handling the transition with a phased migration starting in early March 2026.
Whisk, which launched as an image remixing tool that let you blend subjects, scenes, and styles from reference images, will have its core capabilities absorbed into Flow's generation pipeline. The "ingredients" concept that Whisk pioneered, where you feed in reference images to guide generation, is now a native part of how Flow handles both image and video creation.
ImageFX, Google's text-to-image tool that competed with DALL-E and Midjourney, will also merge into Flow. Existing ImageFX users will get automatic project transfers, meaning your saved generations and prompts carry over.
The consolidation makes strategic sense. Running three separate AI creative tools with overlapping capabilities fragments the user base, splits engineering resources, and confuses the market. Google tried the "let's see what sticks" approach during the Labs phase. Enough stuck that it is time to ship a product.
For users who preferred the simplicity of ImageFX for quick one-off image generation, the transition might feel heavier. Flow is a more complex interface with more features. Google will need to make sure the simple use cases remain simple, or it risks losing casual users who just want to type a prompt and get an image.
Flow's pricing structure has four levels:
| Tier | Price | Image generation | Video models | Credits | Extra perks |
|---|---|---|---|---|---|
| Free | $0 | Nano Banana 2 (unlimited) | Limited Veo access | Basic | -- |
| Pro | Paid | Nano Banana 2 + Pro | Veo 2, Veo 3.1 | More | Full tool access |
| Ultra | Paid | All models | All models | Most | Early access to experimental models |
| Workspace | Included with qualifying plans | Nano Banana 2 | Veo access | 100/month | No extra charge |
The free tier is the hook. Google is offering unlimited free image generation with Nano Banana 2. No credit system for images. No daily limits mentioned in the announcement. You sign up at flow.google, and you start generating.
Video is where the monetization happens. Pro subscribers get access to both Veo 2 and the latest Veo 3.1 model. Ultra subscribers get everything Pro has, plus more credits and first access to experimental models and advanced features.
Google has not published exact pricing for Pro and Ultra tiers in the February announcement, but the strategy is clear: use free images to build habit, then convert users to paid video tiers. This is the same playbook Canva used with its free design tools feeding into paid team and enterprise subscriptions. It works.
The Workspace integration is notable for enterprise users. If your company is already on a qualifying Google Workspace plan, you get Flow access with 100 monthly credits at no additional cost. That is a distribution advantage no competitor can match. Adobe has Creative Cloud. Runway has standalone subscriptions. Neither has anything embedded into the productivity suite that millions of businesses already pay for.
Here is how the four major AI creative platforms compare as of February 2026:
| Feature | Google Flow | Adobe Firefly | Runway Gen-3 | OpenAI Sora |
|---|---|---|---|---|
| Image generation | Nano Banana 2 (free) | Firefly Image 3 (paid) | Not core focus | DALL-E 3 (paid) |
| Video generation | Veo 3.1 | Firefly Video | Gen-3 Alpha | Sora 2 |
| Max video resolution | 4K (upscaled) | 1080p | 1080p | 1080p |
| Native audio in video | ✓ | ✗ | ✗ | Limited |
| Video duration | 4-8s (extendable to 60s+) | 5s | 10s | Up to 60s |
| Image editing tools | Lasso, inpainting | Generative fill | Limited | ✗ |
| Camera controls | ✓ | ✗ | ✓ | Limited |
| Free tier | ✓ (images free) | ✗ | ✗ | Limited |
| Pro app integration | Google Workspace | Adobe CC (Premiere, AE) | Standalone | ChatGPT |
| Asset management | Collections, grid | Adobe Libraries | Project folders | Basic |
Adobe Firefly's strength is integration with Premiere Pro and After Effects. If you are in the Adobe ecosystem, Firefly slots into your workflow in a way Flow cannot. But Firefly's video quality has drawn criticism, with users rating it below Runway and Veo for realism.
Runway is the specialist. Gen-3 Alpha produces some of the most visually impressive AI video available, with fine-grained controls for motion and camera work. But it is video-only, has no image generation, and no free tier. For professional video creators willing to pay, Runway remains the benchmark.
Sora has brand recognition and generates the longest single clips at up to 60 seconds. Scene consistency is strong. But editing tools are minimal and it requires a ChatGPT Plus or Pro subscription.
Flow's competitive advantage is breadth and price. It is the only platform offering free image generation, integrated video with native audio, editing tools, and asset management in one workspace. Not the best at any single thing, but it does more in one place than any competitor.
"Google is doing the Google thing: maybe not the best at each individual feature, but the best at putting them all in one box. For creators who don't want to juggle five subscriptions, Flow is suddenly very interesting." -- @LinusEkenstam
The consolidation of Google's AI creative tools into Flow signals a market shift. The era of standalone AI creative tools is ending. The era of integrated AI creative platforms is beginning.
For freelancers, the free tier is a genuine opportunity. Prototype visual concepts, generate social media content, and build short video clips without spending anything. The quality ceiling is lower than Runway or Midjourney, but the price-to-capability ratio is unmatched.
For agencies, Workspace integration matters. If your team is on Google Workspace, you get 100 credits per user per month included. That covers exploration and prototyping. Heavier use upgrades to Pro or Ultra.
For educators and students, the free tier removes access barriers entirely. AI creative tools have been gated behind subscriptions that many institutions cannot justify.
The consolidation raises one concern. Users of simpler tools like ImageFX lose the simplicity. Not everyone wants a full creative studio. Sometimes you want to type a prompt, get an image, and move on. Google will need to keep the simple path simple.
Worth watching: the 1.5 billion creations number. That is a massive dataset of user preferences and creative patterns. Google will almost certainly use aggregated Flow interactions to improve its models. The more people use Flow, the better Nano Banana and Veo get. That flywheel effect is the kind of competitive moat Google builds well.
Google Flow is an AI creative studio available at flow.google that combines image generation (Nano Banana 2), video generation (Veo 3.1), and editing tools in a single workspace. It launched under Google Labs and received a major overhaul on February 25, 2026.
Image generation with Nano Banana 2 is free. Video generation and advanced features require a Pro or Ultra subscription. Google Workspace users on qualifying plans get 100 monthly credits included at no extra charge.
Whisk and ImageFX are both being absorbed into Flow starting in early March 2026. Existing users will get automatic project transfers so their saved work carries over to the new platform.
Nano Banana 2 is Google's latest image generation model, built on the Gemini 3.1 Flash architecture. It generates images from 512px to 4K resolution, supports character consistency for up to five characters, and renders legible text in generated images. All images include SynthID watermarks and C2PA Content Credentials.
Veo 3.1 is Google's video generation model integrated into Flow. It generates 720p or 1080p video at 24fps with durations of 4, 6, or 8 seconds. It also generates native audio, including dialogue, sound effects, and ambient sound. Videos can be upscaled to 4K and extended to 60+ seconds through clip chaining.
Veo 3.1 supports camera direction, including pans, tilts, zooms, dollies, and tracking shots. You can also use the "Frames to Video" feature to set specific start and end frames for a generation.
Flow offers free image generation and native audio in video, which Firefly does not. Firefly's advantage is deep integration with Adobe Creative Cloud apps like Premiere Pro and After Effects. Flow integrates with Google Workspace instead.
Runway's Gen-3 Alpha is widely regarded as producing higher-quality video output than Veo 3.1 for single clips. But Flow offers integrated image generation, a free tier, and native audio, none of which Runway provides. Flow is broader; Runway is deeper on video quality.
Google's terms of service for Flow allow commercial use of generated content. All outputs include SynthID watermarks for AI provenance tracking, which aligns with emerging industry standards for AI-generated media disclosure.
Flow supports standard image formats (PNG, JPEG) and video formats (MP4). The asset grid lets you organize, filter, and download your creations. Batch export options are available for Pro and Ultra subscribers.