TL;DR: Adobe announced a major expansion of the Firefly platform in March 2026, introducing custom model training that lets individual creators and brands build personalized generative AI models directly from their own image libraries. Alongside this, Firefly now surfaces more than 30 industry-leading models from Google, OpenAI, Runway, and Kling — covering character generation, illustration styles, and photographic aesthetics — inside a single creative workspace. The announcement positions Adobe not just as a model builder but as the platform layer that orchestrates the entire generative AI ecosystem for professional creative work.
What you will learn
- What Adobe actually announced: custom Firefly model training and what it means
- How the custom model training pipeline works in practice
- The 30+ industry models now available inside Firefly and which partners are involved
- How character, illustration, and photographic style personalization works
- Why this matters for brands, agencies, and individual creative professionals
- How Adobe is building a competitive moat in the creator economy
- How Firefly's new capabilities compare to Midjourney, Stable Diffusion, and DALL-E
- What creators should try first when exploring the new features
- Frequently asked questions about pricing, ownership, and privacy
What Adobe announced: custom model training comes to Firefly
Adobe's March 2026 Firefly expansion is the most significant update to the platform since its launch. At the center of it is a deceptively simple idea: if your creative work has a distinct visual identity, why should you rely on a generic model trained on everyone else's data and then fight to get your aesthetic out of it? Adobe's answer is to let you bring your own images and build a model that already speaks your visual language.
Custom model training in Firefly means that photographers, illustrators, brand teams, and design studios can now upload their own image libraries, trigger a training run inside the Firefly platform, and get back a personalized generative model that reflects the specific style, character design, or visual grammar they have developed over years of work. The resulting model is private to that creator or organization and can be used for all subsequent generation tasks — producing outputs that look and feel consistent with the creator's existing body of work rather than averaging across millions of unrelated training images.
This is a fundamentally different proposition from what prompt engineering achieves. When you work with a generic model and craft elaborate style prompts, you are negotiating with a system that has millions of other influences pulling against your direction. When you work with a model fine-tuned on your own portfolio, the model's prior is your work. Generation starts from that baseline rather than from a general population distribution.
The announcement covers three broad personalization categories: character models, illustration style models, and photographic style models. Each serves a distinct professional use case, and each has been built with the specific constraints and quality demands of that use case in mind. Adobe has been building the underlying fine-tuning infrastructure into Firefly for several months, and the March 2026 announcement marks general availability of the training pipeline.
How custom model training works: upload, train, generate
The custom model training workflow in Firefly is designed to be accessible to creative professionals who are not machine learning engineers. The technical process of fine-tuning a generative model on custom data has historically required GPU access, specialized software environments, and a working knowledge of training configurations. Adobe has abstracted that entirely.
The workflow has three stages.
Image upload and curation. You begin by uploading a set of images that represent the visual style or character you want to capture. Adobe recommends a minimum image count that varies by model type — character models require enough coverage of the subject from multiple angles and contexts to develop a coherent representation; style models require enough stylistic diversity to avoid overfitting to specific compositional choices. The platform provides guidance on image quality requirements and flags problematic submissions (low resolution, conflicting styles, duplicate images) before training begins.
This curation step matters more than it might seem. The quality of your custom model is directly bounded by the quality and coherence of your training set. A photographer uploading four hundred consistent images from their portfolio will get a more useful model than one who uploads a mixture of stylistically inconsistent work. Adobe has built tooling to help creators understand and improve their training sets before committing to a training run.
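To make the curation step concrete, here is a minimal local sketch of the kind of pre-upload check the platform performs. The resolution threshold and folder layout are assumptions for illustration, not Adobe's published requirements; the script flags low-resolution files and exact duplicates, two of the issues Firefly reports before training begins.

```python
# Local pre-upload curation pass: flags low-resolution images and exact
# duplicates before you submit a training set. The 1024px minimum is an
# assumed threshold for illustration, not Adobe's documented requirement.
import hashlib
from pathlib import Path

from PIL import Image  # pip install pillow

MIN_SIDE = 1024  # assumed minimum short-side resolution

def curate(folder: str) -> None:
    seen_hashes: dict[str, str] = {}
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen_hashes:
            print(f"DUPLICATE  {path.name} (same bytes as {seen_hashes[digest]})")
            continue
        seen_hashes[digest] = path.name
        with Image.open(path) as img:
            if min(img.size) < MIN_SIDE:
                print(f"LOW-RES    {path.name} ({img.size[0]}x{img.size[1]})")
            else:
                print(f"OK         {path.name}")

curate("./training_set")
```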
Training and validation. Once the image set is submitted, the training run begins. Adobe handles the compute infrastructure. Depending on the model type and the size of the training set, training runs complete in a timeframe measured in hours rather than days — Adobe has engineered the fine-tuning pipeline to be computationally efficient enough for practical creative use, not just enterprise-scale deployments. During training, the platform generates a validation preview — a small set of sample outputs using test prompts — so you can evaluate model quality before the training run is marked complete. If the validation output reveals issues, you can adjust the training set and retry.
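For readers who think in API terms, the shape of this hosted workflow looks roughly like the sketch below. Adobe has not published the custom model training API surface, so every endpoint path, field name, and status value here is an invented placeholder meant to illustrate the submit, poll, validate loop, not a real Firefly interface.

```python
# Hypothetical client-side view of a hosted training run. All endpoint
# paths, payload fields, and status values are invented placeholders;
# Adobe's actual API surface may differ entirely.
import time

import requests  # pip install requests

API = "https://firefly.example.com/v1"          # placeholder base URL
HEADERS = {"Authorization": "Bearer <token>"}   # placeholder auth

# Submit a training job over previously uploaded image assets.
job = requests.post(
    f"{API}/custom-models",
    headers=HEADERS,
    json={
        "name": "studio-portrait-style",
        "type": "photographic_style",           # or character / illustration
        "asset_ids": ["img_001", "img_002"],    # curated upload references
    },
    timeout=30,
).json()

# Poll until the run finishes; training completes in hours, not days.
while True:
    status = requests.get(
        f"{API}/custom-models/{job['id']}", headers=HEADERS, timeout=30
    ).json()
    if status["state"] in ("succeeded", "failed"):
        break
    time.sleep(300)

# Inspect the validation previews before accepting the model.
print(status["state"], status.get("validation_previews"))
```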
Generation with custom models. Once your custom model is trained and saved, it appears in your Firefly model selector alongside the standard Firefly models and the third-party models from the partner library. From that point, any generation task you run in Firefly — text to image, reference-guided generation, generative fill — can draw on your custom model as the style or character foundation. You can combine your custom model with specific prompts, reference images, and compositional controls the same way you would use any other model in the system.
The custom models are private by default. Adobe has stated that training data uploaded to create a custom model is not used to train shared Adobe models. Organizations with stricter data handling requirements can deploy custom models through Firefly's enterprise licensing tier, which adds contractual data isolation guarantees on top of the technical defaults.
The 30+ industry models: Google, OpenAI, Runway, and Kling integrations
Alongside the custom model training announcement, Adobe has expanded the Firefly model library to over 30 industry-leading models from external partners. This is a strategic repositioning of Firefly from a single-vendor model product to a platform that surfaces the best generative capabilities from across the industry inside one professional creative environment.
The partner roster includes Google, OpenAI, Runway, and Kling — four organizations that collectively represent the leading edge of image and video generation research. Each brings different strengths to the Firefly library.
Google contributes image generation capabilities rooted in the Nano Banana model family, which offers high-fidelity photorealistic output with strong text rendering and character consistency across a series of images. Google's creative AI work through Flow has produced models with documented capability in maintaining visual coherence across multi-image projects — a property that is particularly valuable in Firefly's workflow context.
OpenAI's contribution centers on DALL-E 3's image generation, which brings strong compositional logic and instruction following. OpenAI models have historically performed well on briefs that require precise semantic interpretation — generating images that accurately reflect complex written descriptions rather than stylistically approximating them.
Runway brings video generation capabilities that are widely regarded as among the best available for professional production contexts. Runway's $315 million funding round to develop world models underscored how seriously the company is investing in video generation quality. Its presence in the Firefly library gives creators a partner model pathway into motion and video, complementing Adobe's own Firefly video generation features.
Kling, Kuaishou's video generation model, adds a strong contender specifically in stylized and character-driven video generation. Kling has built a reputation for handling animated and stylized aesthetics with higher fidelity than many Western competitors, making it particularly relevant for illustration-heavy brand content and entertainment production.
The 30+ model count extends beyond these headline partners to include specialized models covering illustration aesthetics, architectural visualization, product photography, fashion, and other verticals where professional creative work has distinct quality requirements. Adobe has organized the library with filtering and recommendation tooling to help creators identify which models are best suited to their specific production context rather than requiring them to test every available option.
The model marketplace structure has a commercial logic beyond pure convenience. By making Firefly the interface through which creators access third-party models, Adobe positions itself as infrastructure — capturing the workflow even when creators are using non-Adobe model outputs. Usage of partner models inside Firefly flows through Adobe's pricing and crediting system, which means Adobe captures a portion of the value even from generations that rely entirely on a competitor's model.
Character, illustration, and photographic style personalization
The three personalization categories Adobe has built around custom model training reflect genuinely different professional use cases and technical requirements.
Character personalization is designed for entertainment, gaming, and brand mascot applications. The core problem it solves is character consistency: maintaining a recognizable, coherent character design across a large volume of generated assets — different poses, expressions, environments, and lighting conditions — without manual correction at each step. Animators, game asset creators, and brand teams producing mascot-driven content all share this problem, and it has historically been one of the most time-consuming aspects of AI-assisted creative production.
A custom character model trained on reference sheets, existing character art, and posed variations of the character encodes that character's design into the model weights rather than relying on prompt instructions to enforce consistency. The result is a model that defaults to generating your specific character rather than an averaged approximation. Combined with Firefly's pose control and expression guidance tools, a character model can produce hundreds of consistent assets with the character in varied scenarios — sufficient for a full game asset pack, an animated series style guide, or a multi-touchpoint brand campaign.
Illustration style personalization addresses the needs of illustrators, graphic novelists, concept artists, and brand design teams that have developed distinctive visual vocabularies. The illustration style training pipeline captures line weight preferences, color palette tendencies, shading approaches, compositional habits, and subject treatment patterns from the training image set. A custom illustration model for an artist who has spent years developing a flat-design, bold-color aesthetic with specific texture choices will generate images in that exact register — without the creator having to fight against a model that wants to produce photorealistic outputs or drift toward generic aesthetic trends.
This capability has particular commercial significance for illustrators who do contract work. A client brief that previously required days of back-and-forth to establish style alignment can now be addressed by pointing the client at generated samples from your custom model — demonstrating style fit before any human illustration hours are committed.
Photographic style personalization is aimed at photographers and photography-driven brand teams. The training pipeline here captures exposure tendencies, color grading signatures, depth-of-field preferences, lighting setups, and compositional sensibilities. A portrait photographer with a recognizable aesthetic can train a model that generates portrait references, mood board images, and studio setup previews in their specific style. A brand team that manages a proprietary photo style guide — specific color temperatures, lighting ratios, background treatments — can encode that guide into a generative model that consistently applies it.
The photographic style model is also the most commercially sensitive category, because it operates in a space where distinguishing AI-generated content from genuine photography requires active disclosure. Adobe has built SynthID-compatible watermarking into all Firefly outputs, including those from custom models, and supports C2PA Content Credentials for provenance tracking. These are not optional features — they are applied automatically to every generated image regardless of model type.
Why this matters for brands and creative professionals
For brand teams, the custom model announcement resolves what has been one of the most persistent frustrations with generative AI in professional creative workflows: the inability to reliably produce on-brand output at scale. Brand consistency is not a cosmetic requirement — it is a legal and commercial one. Marks, color systems, and visual identities are protected assets, and generating content that deviates from established brand standards creates both quality problems and potential IP exposure.
A custom Firefly model trained on a brand's approved image library and style guide materials effectively encodes the brand standard into the generation process. Content produced by the model starts from brand-aligned priors rather than requiring post-hoc correction. For organizations producing high volumes of campaign assets, social content, and regional adaptations — where manual brand review of every piece is not operationally feasible — that shift in the generation baseline has real operational value.
For independent creative professionals, the commercial implications cut differently. Illustrators and photographers who have worried that AI tools would commoditize their visual styles can now reframe their aesthetic as a proprietary asset. A custom model trained on an artist's work is an extension of that artist's creative capital — one that can generate supplementary materials, client reference assets, and ideation outputs in the artist's voice without misrepresenting someone else's aesthetic as their own.
The creator economy angle is genuine. Luma's Agents for creative AI and similar platforms have positioned AI as a creative force multiplier, but they have done so primarily at the infrastructure level — improving tools without fundamentally changing the relationship between creators and their own aesthetic capital. Adobe's custom model training moves in a more personal direction: your style becomes an input to the system rather than a prompt constraint you have to keep re-asserting.
Adobe's competitive moat in the creator economy
Adobe's strategic position is structurally different from any other company in the generative AI space, and the March 2026 expansion is a deliberate effort to deepen that structural advantage.
Adobe reaches more professional creative workflows than any other software company on earth. Creative Cloud subscriptions reach tens of millions of active users across Photoshop, Illustrator, Premiere Pro, After Effects, InDesign, and a dozen other applications. Every one of those users is either currently touching AI-powered features or will encounter them within the next product cycle. Firefly is not a standalone AI product competing for user acquisition against Midjourney or DALL-E — it is the generative layer embedded into workflows that professionals are already paid to operate.
This distribution advantage is compounded by Firefly's compliance positioning. Adobe has built Firefly on training data it has licensed or owns outright, and has made commercial IP indemnification a core product promise. Brands and enterprises that face legal exposure from using AI-generated content cannot easily afford to use tools that carry uncertain training data provenance. Firefly's compliance infrastructure removes that barrier in a way that most competitors — including Midjourney, Stable Diffusion, and several of the partner models now appearing in the Firefly library — have not yet matched.
The platform pivot represented by the 30+ model library is also a competitive move that no standalone model competitor can easily replicate. By making Firefly the interface for third-party models, Adobe benefits from the continued improvement of the broader AI ecosystem without betting its product quality on the performance of any single model. If a new, best-in-class video generation model emerges next quarter, Adobe can add it to the library. A standalone model company has to either build or be left behind.
How this compares to Midjourney, Stable Diffusion, and DALL-E
Understanding where Firefly's new capabilities sit relative to existing tools clarifies what the announcement actually changes.
Midjourney has offered style reference functionality through its --sref flag and niji aesthetic modes for some time. It does not offer custom model training. Users who want to maintain a specific visual style in Midjourney must anchor each generation to reference images — a workflow that requires constant manual input and degrades as prompts grow more complex. Midjourney's output quality in its latest model iterations is exceptional for stylized art and illustration, but it is a single-vendor model with no comparable partner ecosystem.
Stable Diffusion through the open-source community has offered LoRA fine-tuning and full DreamBooth training for several years. Technically, this is the closest analog to Adobe's custom model training. Users can and do train custom models on personal image sets using community tooling such as Civitai-hosted checkpoints, ComfyUI workflows, or the AUTOMATIC1111 web UI. The difference is infrastructure friction and professional context. Running a custom fine-tune requires local GPU hardware or cloud compute management, specialized tooling knowledge, and quality validation work that most professional creators do not have the bandwidth for. Adobe is delivering a version of this capability that requires none of that technical overhead — just curated images and a training request. For the market of professional creatives who are not ML practitioners, this is a practically different product despite the technical resemblance.
DALL-E 3 via ChatGPT or the API has no custom model training pathway. It can accept style-referencing through carefully constructed prompts and system instructions, but that is an approximation, not a fine-tuned model. OpenAI's own participation in the Firefly partner library is a tacit acknowledgment that DALL-E 3 is being positioned as a capability ingredient rather than a complete creative platform.
The honest comparison is that Firefly is not trying to win on model quality benchmarks in any individual category. It is trying to win on integration, compliance, workflow depth, and personalization infrastructure — a combination no competitor currently offers in a unified professional-grade environment.
What creators should try first
For creators encountering these features for the first time, a staged exploration approach produces better outcomes than trying everything at once.
Start with the 30+ model library before attempting custom training. Familiarize yourself with how Runway, Kling, and the Google-sourced models handle different generation tasks relative to native Firefly models. This calibrates your expectations and helps you understand which aesthetic registers each model handles best — information that becomes directly useful when you later combine a custom style model with specific partner model capabilities.
When you are ready for custom training, begin with a tightly curated image set rather than uploading your entire archive. Choose images that represent a coherent, high-consistency slice of your work. If you are a photographer, that might mean images from a single shoot style or location context. If you are an illustrator, it might mean images from a single series or client project. Narrow curation produces more useful initial models than broad uploads. You can always train additional models to cover different contexts.
After the initial training completes, test the model against prompts that describe scenarios not present in your training set. This is how you identify the actual generalization behavior of the model — whether it has captured the essence of your style or merely memorized the specific subjects in your training images. A model that generalizes well will apply your aesthetic to novel prompts convincingly. One that has overfit will produce visually inconsistent results when the subject matter diverges from the training data.
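One lightweight way to run that test systematically is to fix a small matrix of out-of-distribution prompts and seeds, generate the full grid, and review it side by side. The sketch below just builds and prints that matrix; the prompts are illustrative, and you would feed each row through whatever generation interface you use.

```python
# Build a generalization test matrix: prompts whose subjects are absent
# from the training set, crossed with a few fixed seeds so you can judge
# style stability as well as subject transfer. Prompts are illustrative.
from itertools import product

held_out_prompts = [
    "a street market at dusk",
    "an interior scene under harsh noon light",
    "a crowd viewed from a high vantage point",
]
seeds = [101, 202, 303]  # fixed seeds make reruns comparable

for prompt, seed in product(held_out_prompts, seeds):
    # Run each row through your generation interface, then review whether
    # the aesthetic holds while the subject changes. Consistent style on
    # novel subjects suggests generalization; drift suggests overfitting.
    print(f"model=my-custom-style seed={seed} prompt={prompt!r}")
```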
Use the character model capability for consistency testing even if you are not a character artist. The underlying consistency infrastructure transfers to other use cases: product photography at consistent angles, architectural rendering at consistent lighting, portrait work with consistent color grading. The "character" framing is the primary use case, not the only one.
Finally, explore how custom models interact with Firefly's generative fill and reference-guided generation tools. The most powerful workflow in the new feature set is not standalone text-to-image from a custom model — it is using a custom model as the style anchor for partial generation operations inside existing images. A brand team editing a product image to add a generated background can now anchor that background generation to their brand's photographic style model, producing outputs that blend naturally with their existing photography rather than looking pasted in.
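As a sketch of that workflow in API terms, the request pairs a source image and mask with a style anchor pointing at the custom model. As with the training sketch above, the endpoint and field names are invented placeholders, since Adobe has not published this interface.

```python
# Hypothetical generative fill request anchored to a custom style model.
# Endpoint, field names, and the model identifier are all placeholders.
import base64
from pathlib import Path

import requests

API = "https://firefly.example.com/v1"         # placeholder base URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder auth

def b64(path: str) -> str:
    return base64.b64encode(Path(path).read_bytes()).decode()

response = requests.post(
    f"{API}/images/fill",
    headers=HEADERS,
    json={
        "image": b64("product_shot.png"),       # source image
        "mask": b64("background_mask.png"),     # region to regenerate
        "prompt": "soft daylight studio backdrop",
        "style_model": "brand-photo-style-v1",  # custom model as style anchor
    },
    timeout=120,
)
Path("filled.png").write_bytes(base64.b64decode(response.json()["image"]))
```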
FAQ
Does Adobe use my training images to improve shared Firefly models?
No. Adobe has stated that images uploaded for custom model training are not used to improve shared Firefly models or other Adobe products. Custom models are private to the user or organization that created them. Enterprise tier contracts include additional data isolation guarantees.
How many images do I need to train a custom Firefly model?
The minimum requirement varies by model type. Adobe's guidance suggests starting with at least 20-50 high-quality, consistent images for style models and 30-100 images covering multiple angles and contexts for character models. More curated, higher-quality images consistently produce better results than larger but inconsistent training sets.
Can I use outputs from my custom Firefly model commercially?
Yes, under Adobe's standard commercial use terms, which extend to outputs generated using custom models. Adobe indemnifies commercial users against IP claims arising from Firefly-generated content, including content from custom models trained on user-provided images. This indemnification is contingent on the training images themselves not violating third-party IP rights — you cannot train a model on someone else's copyrighted work and claim protection on outputs.
Which of the 30+ partner models are available on the free Firefly tier?
Adobe has not published a complete tier breakdown as of the March 2026 announcement. Native Firefly models are available across subscription tiers with credit-based usage. Partner model access — particularly for Runway and OpenAI model outputs — is expected to require paid subscription access, with specific tier-to-model availability to be confirmed in the updated pricing documentation.
How does Firefly's custom model training compare to using LoRA fine-tuning with Stable Diffusion?
Functionally, the capabilities are similar: both approaches fine-tune a base model on a custom image set to capture a specific style or character. The practical difference is infrastructure. Stable Diffusion LoRA training requires local GPU hardware or cloud compute management, familiarity with training frameworks, and manual quality validation. Firefly's training pipeline handles all of that within the platform — no external compute, no configuration files, no validation scripts. For professional creators who are not machine learning engineers, Firefly's approach is accessible in a way that open-source fine-tuning is not.
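For comparison, here is what the open-source path looks like at its most streamlined: loading an already trained LoRA into a Stable Diffusion pipeline with Hugging Face's diffusers library. This assumes a CUDA GPU and a LoRA you have already produced in a separate training run (for example with diffusers' text-to-image LoRA training script), which is exactly the overhead described above; the weight path is a placeholder.

```python
# Inference with a previously trained LoRA in diffusers. Even this, the
# easy half of the open-source workflow, assumes local GPU hardware and a
# completed training run; "./my-style-lora" is a placeholder path.
import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # base model the LoRA was trained on
    torch_dtype=torch.float16,
).to("cuda")

# Apply the custom style weights produced by your fine-tuning run.
pipe.load_lora_weights("./my-style-lora")

image = pipe(
    "a portrait in the trained signature style, soft window light",
    num_inference_steps=30,
).images[0]
image.save("styled_output.png")
```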
Sources: Adobe Blog, "Firefly Video Model and Custom Models Expansion" (March 19, 2026); The Verge, "Adobe Firefly gets custom AI model training for brands and creators"; Adobe MAX announcements and Firefly platform documentation.