How to Know When You Have Product-Market Fit
A practitioner's framework for measuring PMF using Sean Ellis scores, retention curves, qualitative signals, and the 5 PMF archetypes — with benchmarks.
TL;DR: Product-market fit is not a moment — it is a spectrum. Most founders either declare it too early (confusing early enthusiasm for durable pull) or chase it with the wrong instruments. This post gives you the full diagnostic stack: the Sean Ellis 40% threshold and why it is necessary but not sufficient, retention curves as the most honest PMF signal, the 5 PMF archetypes and which benchmarks apply to each, qualitative indicators that precede the quantitative ones, and a structured process for accelerating your PMF search without wasting 18 months building the wrong thing.
Marc Andreessen coined the term in 2007 and defined it simply: "being in a good market with a product that can satisfy that market." The definition is correct but operationally useless. It tells you what the destination looks like without telling you how to navigate there — or how to know when you have arrived.
I have talked to hundreds of founders across the companies I have invested in and advised. The pattern is consistent: they either call PMF too early, mistaking early enthusiasm for durable pull, or call it too late, staying in search mode long after the signal is there.
The cost of error in both directions is severe. Premature PMF declarations lead founders to scale prematurely — hiring sales, ramping paid acquisition, investing in infrastructure — before the underlying pull is real. The result is a leaky bucket with a high-pressure hose aimed at it. Revenue grows, churn grows faster, and the business burns toward a wall.
The cost of calling it too late is more subtle. Founders stuck in search mode — endlessly iterating, never committing — cannot attract the team, capital, or partnerships that require confidence in the direction. They over-pivot, erasing learning with each reset.
The frame I use, and teach every founder I work with: PMF is a spectrum, not a switch. Your job is to know exactly where you are on that spectrum and what the next level requires. Once you have confirmed fit in a segment, the next step is building the go-to-market motion that can scale it.
The worst PMF mistake is confusing the absence of churn evidence with the presence of retention. Early-stage products often retain users because there is no reason to churn yet — they have barely used the product. Silence is not signal.
Before diving into measurement, calibrate to the right stage. PMF means different things at each level.
| Stage | Description | Key Indicator | Common Mistake |
|---|---|---|---|
| Searching | No consistent pull; usage is exploratory | Activation rate < 20% | Mistaking pilot interest for fit |
| Initial PMF | A narrow segment pulls hard; others don't | Retention plateau in one cohort | Trying to scale before isolating the segment |
| Strong PMF | Broad segment pulls; word-of-mouth begins | k-factor > 0.3; NPS > 40 | Under-investing in distribution |
| Dominant PMF | Category-defining pull; switching costs form | Net revenue retention > 120% | Complacency; missing next-curve threats |
Most startup advice conflates these stages. "How to achieve PMF" often means "how to reach Initial PMF" for a pre-seed company and "how to reach Dominant PMF" for a Series B company. The tactics differ radically.
This post focuses primarily on the transition from Searching to Initial PMF, with benchmarks for what Strong PMF looks like once you are there.
Sean Ellis, who led growth at Dropbox, Eventbrite, and LogMeIn, developed the simplest PMF benchmark in the startup canon: ask users how they would feel if they could no longer use your product.
The three response options are: Very Disappointed, Somewhat Disappointed, Not Disappointed. His benchmark from surveying hundreds of companies: if 40% or more of users say "Very Disappointed," you likely have PMF.
Most founders run this survey badly and get numbers that cannot be trusted.
Who to survey: Only active users who have experienced the core value proposition. Define "active" by engagement with the feature that represents your core hypothesis — not just logged-in status. If your retention data shows users typically need 3 sessions before experiencing the core value, exclude anyone with fewer than 3 sessions.
When to survey: At a natural pause — not immediately after onboarding while the novelty effect is live, and not so late that the user has drifted. A reasonable trigger: 2 weeks after activation.
How to send it: Single-question, plain text email from the founder. Response rates for polished multi-question surveys are lower and introduce social desirability bias. The plain-text single question gets honest answers.
Sample size: You need at least 40 responses to trust the number. Below 40, variance is too high. This means you need a minimum of roughly 100-200 active users to survey, depending on your response rate.
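The scoring and the sample-size guard above fit in a few lines. A minimal sketch (the function name and response labels are mine, not a standard):

```python
from collections import Counter

def sean_ellis_score(responses):
    """Percent of 'very_disappointed' responses, plus a trust flag.

    responses: list of strings, each one of 'very_disappointed',
    'somewhat_disappointed', or 'not_disappointed'.
    Returns (score_pct, trustworthy); trustworthy is False below
    the 40-response minimum discussed above.
    """
    counts = Counter(responses)
    n = len(responses)
    if n == 0:
        return 0.0, False
    score = 100.0 * counts["very_disappointed"] / n
    return round(score, 1), n >= 40

# Example: 45 responses, 19 of them "Very Disappointed"
sample = (["very_disappointed"] * 19
          + ["somewhat_disappointed"] * 16
          + ["not_disappointed"] * 10)
score, trustworthy = sean_ellis_score(sample)
# 19/45 is roughly 42.2%, on a sample size large enough to trust
```

The trust flag matters more than the score itself: a 45% reading on 15 responses is noise, not signal.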
| Score | Interpretation | Next Action |
|---|---|---|
| < 20% | No meaningful PMF signal | Revisit problem definition and target segment |
| 20–32% | Weak signal; specific segment may be promising | Analyze "Very Disappointed" responses for pattern |
| 33–39% | Close; worth optimizing for | Identify and focus on the high-love segment |
| 40–50% | Initial PMF in your active user segment | Scale carefully within this segment |
| > 50% | Strong PMF; very rare pre-Series A | Prepare for aggressive distribution |
The Sean Ellis score is a leading indicator, not proof of fit. It measures intent, not behavior. A user can sincerely believe they would be devastated to lose your product and still churn six months later because the workflow changed, the budget shifted, or a competitor emerged. The score is directionally accurate but must be corroborated by behavioral data.
The most common way founders misuse the test: they score 38%, convince themselves they are close enough, and scale. The 2-point gap matters more than it looks. The distribution of "Very Disappointed" responses is not linear — the segment that scores the product at 38% is often qualitatively different from the segment that scores it at 40%, and that difference has structural implications for growth.
My rule: treat the Sean Ellis score as a conversation starter, not a verdict. When it crosses 40%, you earn the right to ask harder questions — not permission to stop asking them.
After the scored question, ask: "What is the primary benefit you get from this product?" and "What type of person do you think would benefit most from this product?"
These open-text responses are where the real value lives. The highest-love segment will answer with specificity that the lukewarm segment cannot match. That specificity is your positioning brief, your ICP definition, and your retention thesis — all in one.
If I could only look at one metric to assess PMF, it would not be the Sean Ellis score. It would be the retention curve shape.
A retention curve plots the percentage of a user cohort (grouped by when they first used the product) that remains active at each subsequent time period. The x-axis is time since first use. The y-axis is percentage retained.
Three curve shapes exist:
The cliff: The curve drops steeply in the first few weeks, then continues dropping to near zero. This is no PMF. Users try the product and leave. The product is not delivering sustained value.
The slope: The curve drops steeply initially (which is normal — some users will always churn quickly regardless of fit), then flattens and stabilizes at a non-zero percentage. This flat tail is the PMF signal. The percentage at which it stabilizes is your retained core.
The smile: Rare, but the most powerful signal. After the initial drop and plateau, retention actually increases over time. This indicates a product where users who stay become more engaged over time — a compounding value product. Think of enterprise SaaS with deep integrations where switching costs increase over the product lifecycle.
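The three shapes can be distinguished mechanically from a cohort's retention series. A rough heuristic sketch; the `flat_tol` and `floor` thresholds are illustrative choices of mine, not benchmarks from this post:

```python
def classify_retention_curve(retention, flat_tol=0.02, floor=0.05):
    """Classify a cohort retention series as cliff, slope, or smile.

    retention: fractions retained at successive periods, starting
    at 1.0, e.g. [1.0, 0.55, 0.40, 0.33, 0.32, 0.32].
    """
    tail = retention[len(retention) // 2:]           # second half of the curve
    deltas = [b - a for a, b in zip(tail, tail[1:])]
    if all(d > flat_tol for d in deltas):
        return "smile"      # retention rising again: compounding value
    if tail[-1] < floor:
        return "cliff"      # tail heads toward zero: no PMF
    if all(abs(d) <= flat_tol for d in deltas):
        return "slope"      # flat non-zero tail: the PMF signal
    return "indeterminate"  # still dropping, or too noisy to call yet
```

A series that is still visibly declining but has not yet hit the floor comes back "indeterminate", which is the honest answer: wait for more periods before calling the shape.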
| Business Type | D7 Retention | D30 Retention | D90 Retention | PMF Signal |
|---|---|---|---|---|
| Consumer social | 25–35% | 15–25% | 10–20% | Curve flattens above 10% |
| B2B SaaS (SMB) | 50–65% | 40–55% | 35–50% | Month 3+ churn < 3%/mo |
| B2B SaaS (Enterprise) | 60–75% | 55–70% | 50–65% | Annual NRR > 110% |
| Marketplace | 30–45% | 20–35% | 15–30% | Supply/demand balance stable |
| Consumer app with habit | 40–60% | 30–45% | 25–40% | Curve flattens above 20% |
These benchmarks are directional, not absolute. A vertical SaaS product serving a niche with a 6-month buying cycle has structurally different retention dynamics from a consumer app. The key is the shape of the curve relative to itself over successive cohorts.
If your product has any PMF motion at all, successive cohorts should show improving retention. The January cohort retained at 25% at 90 days. The March cohort retained at 31%. The June cohort retained at 38%.
This cohort trajectory is evidence that product improvements are translating into durable engagement improvements — not just increasing acquisition. It is one of the most compelling signals you can show an investor, because it shows learning velocity, not just traction.
Flat or declining cohort curves despite growing user counts are the clearest early warning sign that you are filling a leaky bucket. More users are masking a structural retention problem.
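The cohort-trajectory check reduces to comparing the same retention milestone across successive cohorts. A deliberately simple sketch (first-versus-last comparison with a 5% band, not a statistical trend test):

```python
def cohort_trajectory(d90_by_cohort):
    """Is D90 retention improving across successive cohorts?

    d90_by_cohort: ordered list of (cohort_label, d90_fraction),
    e.g. [("Jan", 0.25), ("Mar", 0.31), ("Jun", 0.38)].
    Returns 'improving', 'flat', or 'declining'.
    """
    first, last = d90_by_cohort[0][1], d90_by_cohort[-1][1]
    if last > first * 1.05:
        return "improving"
    if last < first * 0.95:
        return "declining"
    return "flat"

# The example from the text: 25% -> 31% -> 38% at D90 is "improving"
```

"Declining" here, paired with growing signups, is exactly the leaky-bucket warning described above.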
Not all PMF looks the same. The signal differs substantially based on which growth mechanic your product relies on. I use five archetypes drawn from the work of Brian Balfour and Chris Yeh.
Archetype 1: Sticky PMF

Definition: Users stay because switching costs are high, habits are formed, or data lock-in accumulates over time. The value of the product increases with usage.
Examples: CRMs, project management tools, accounting software, email clients.
Primary signal: Low voluntary churn rate. Monthly churn below 2% for SMB, below 1% for enterprise. Net Revenue Retention (NRR) above 100%.
Sean Ellis benchmark: 40–50% at minimum for the core engaged segment.
Watch for: Users who stay but don't expand — this indicates stickiness without advocacy, which limits growth ceiling.
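NRR, the headline metric for this archetype, is a simple ratio once you separate the revenue movements. A sketch of the standard calculation (new-logo revenue deliberately excluded):

```python
def net_revenue_retention(start_mrr, expansion, contraction, churned):
    """NRR for a single cohort over a period, as a percentage.

    All inputs in the same currency unit:
    start_mrr   - the cohort's MRR at period start
    expansion   - upsell/cross-sell revenue added by that cohort
    contraction - downgrades within retained accounts
    churned     - MRR lost to cancelled accounts
    """
    ending = start_mrr + expansion - contraction - churned
    return 100.0 * ending / start_mrr

# A cohort starts at $100k MRR, adds $18k expansion, loses $3k to
# downgrades and $8k to churn: NRR = 107%, above the 100% sticky bar.
```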
Archetype 2: Viral PMF

Definition: Users bring other users as a natural function of using the product. The product is inherently collaborative or shareable.
Examples: Slack, Figma, Notion (template sharing), Calendly (scheduling links).
Primary signal: k-factor above 0.3, with organic invite flow visible in acquisition data. Referral as a meaningful acquisition channel without paid incentive.
Sean Ellis benchmark: 50%+, because the bar is higher here. The users who love a product enough to share it are a subset of the users who would merely hate to lose it, so viral growth requires a deeper pool of high-love users than sticky growth does.
Watch for: Viral mechanics that produce signups but not activation — invitation spam loops that do not translate to retained users.
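The k-factor itself is just invites-per-user multiplied by invite conversion. A sketch of the arithmetic:

```python
def k_factor(invites_sent, users_sending, invite_conversion_rate):
    """Viral k-factor: invites per user x conversion rate of invites.

    k > 0.3 is the viral-archetype PMF bar used above;
    k > 1.0 would mean fully self-sustaining viral growth.
    """
    invites_per_user = invites_sent / users_sending
    return invites_per_user * invite_conversion_rate

# 1,000 active users send 2,400 invites; 15% of invites activate:
# k = 2.4 * 0.15 = 0.36, above the 0.3 threshold
```

Note that the conversion rate should be measured to activation, not signup, to avoid exactly the spam-loop failure mode described above.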
Archetype 3: Paid PMF

Definition: The product has strong enough unit economics (LTV/CAC) that paid acquisition is durable and scalable. The "fit" is between the product's value proposition and a targetable, reachable audience.
Examples: Vertical SaaS products with high ACV, B2B tools with clear ROI narratives.
Primary signal: Payback period below 12 months at scale. LTV:CAC ratio of 3:1 or better. Improving conversion rates as messaging tightens.
Sean Ellis benchmark: 40%+ among the paid cohort specifically, not just organic early adopters.
Watch for: LTV assumptions based on early cohort behavior before churn fully matures (typically needs 18–24 months of data).
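Both headline numbers are quick ratios. A sketch using the common simple-LTV approximation (margin times ARPA divided by monthly churn); the 80% gross margin default is my illustrative assumption, not a figure from this post:

```python
def cac_payback_months(cac, monthly_rev_per_customer, gross_margin=0.8):
    """Months of gross profit needed to recover acquisition cost."""
    return cac / (monthly_rev_per_customer * gross_margin)

def ltv_to_cac(monthly_rev_per_customer, monthly_churn, cac,
               gross_margin=0.8):
    """LTV/CAC under the simple LTV = margin * ARPA / churn model."""
    ltv = gross_margin * monthly_rev_per_customer / monthly_churn
    return ltv / cac

# $4,800 CAC, $500/mo per customer, 2% monthly churn:
# payback = 4800 / (500 * 0.8) = 12 months, right at the bar;
# LTV/CAC = (0.8 * 500 / 0.02) / 4800, roughly 4.2, above 3:1
```

The churn input is where the maturity caveat above bites: plug in an early-cohort churn rate and both numbers flatter you.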
Archetype 4: Scale PMF

Definition: The product derives value from a large installed user base — either through network effects, data density, or community. Individual users may not pay, but the aggregate creates defensible scale.
Examples: Developer tools with community adoption, open-source products with enterprise upsell, freemium consumer apps.
Primary signal: Active user growth rate (not just registration), depth of engagement metrics, percentage converting from free to paid.
Sean Ellis benchmark: 40%+ among power users (who are the conversion target), even if the broader free user base scores lower.
Watch for: Vanity user counts that mask engagement thinness. Monthly active/registered ratio below 20% is a red flag.
Archetype 5: Marketplace PMF

Definition: The product creates value by connecting supply and demand sides of a transaction. PMF requires fit on both sides simultaneously.
Examples: Two-sided marketplaces, platforms, exchanges.
Primary signal: Liquidity — the percentage of searches or requests that result in a successful match. Transaction repeat rate on the demand side. Supply-side fill rate.
Sean Ellis benchmark: Must be measured separately on each side. Supply side often scores lower due to higher switching costs.
Watch for: Single-side PMF (demand loves it, supply is unhappy, or vice versa). Marketplace products can appear to have PMF on one side while the other side is degrading.
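The three liquidity signals are each simple ratios, and computing them side by side is what exposes single-side PMF. A sketch with illustrative field names of my choosing:

```python
def marketplace_liquidity(requests, matched, repeat_buyers, total_buyers,
                          supply_slots, filled_slots):
    """Three liquidity signals for a two-sided marketplace.

    Returns (match_rate, demand_repeat_rate, supply_fill_rate),
    each as a fraction. A healthy marketplace needs all three
    stable or rising; one strong side can mask a degrading one.
    """
    match_rate = matched / requests            # % of requests that match
    repeat_rate = repeat_buyers / total_buyers # demand-side repeat behavior
    fill_rate = filled_slots / supply_slots    # supply-side utilization
    return match_rate, repeat_rate, fill_rate

# 1,000 searches with 620 matches, 180 of 400 buyers repeating,
# 410 of 500 supply slots filled -> (0.62, 0.45, 0.82)
```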
Quantitative metrics lag reality. The qualitative signals arrive weeks or months before the numbers move. Learning to read them is one of the highest-leverage skills in early-stage product management.
Unexpected segment pull: You did not build for healthcare, and three healthcare companies found you and asked to buy. You did not target agencies, and agencies keep asking about multi-client features. Unexpected demand from a consistent segment is a stronger PMF signal than expected demand from your target — because it suggests your product solves a problem you did not fully understand.
Workaround behavior: When users duct-tape your product into workflows it was not designed for — CSV exports, Zapier connectors, browser extensions, manual data entry — they are telling you the core value is high enough that they will absorb friction to maintain it. This is early sticky PMF signal.
Urgent feature requests: "Can you add X? We need it by next quarter or we'll have to find another solution." This is different from vague wishlist requests. Urgency indicates that the user has made a commitment around your product — they are planning business processes against your roadmap. That only happens when the existing product already has strong fit.
Shortening sales cycles: Early in a startup, every deal requires extensive education, multiple stakeholders, and significant hand-holding. When deals start closing faster — when prospects arrive pre-educated, pre-convinced, and focused on procurement rather than justification — it means word-of-mouth and category awareness are pulling people in pre-sold. This is a strong initial PMF signal.
Emotional language: In interviews, users use language that is disproportionately emotional relative to the functional utility. "I don't know what I would do without it." "This changed how I work." "My whole team switched because of me." This emotional vocabulary, particularly when it appears across unrelated users without prompting, indicates identity-level attachment — the deepest form of product fit.
The qualitative signals are the canary in the coal mine. The quantitative benchmarks are the body count. Learn to read the canary.
Here is the diagnostic process I walk through with every early-stage founder I advise. It takes 3–4 weeks to run properly and produces a clear PMF score and priority action.
Before measuring anything, write down: Who is the specific user? What specific problem does the product solve for them? What is the frequency and context of use? What does success look like for them after using the product?
This hypothesis is what you are testing. Most founders skip this and measure PMF against a vague general user population, which produces vague and useless results.
Sort your user base by engagement depth — not just frequency, but quality of engagement. For SaaS: which users have integrated the product into their core workflows? For consumer: which users return without prompting? For marketplace: which users transact repeatedly?
These are your "love cluster." They are the segment against whom your PMF measurement is meaningful.
| Instrument | Timeline | What to Look For |
|---|---|---|
| Sean Ellis survey | Week 1 | 40%+ "Very Disappointed" in love cluster |
| Qualitative interviews (10–15) | Weeks 1–2 | Pull language, workaround behavior, emotional language |
| Retention curve analysis | Week 2 | Flattening curve; improving cohort trajectory |
| NPS measurement | Week 2 | NPS > 40 in love cluster |
| Referral attribution | Week 3 | % of new users attributable to existing users |
| Churn interview analysis | Weeks 3–4 | Common exit reasons; are they solvable? |
Rate each dimension on a 1–5 scale:
| Dimension | Weight | Your Score | Weighted Score |
|---|---|---|---|
| Sean Ellis (% Very Disappointed) | 25% | /5 | |
| Retention curve shape | 25% | /5 | |
| Qualitative pull signals | 20% | /5 | |
| Cohort trajectory | 15% | /5 | |
| Organic referral rate | 15% | /5 | |
| Total PMF Score | 100% | | /5 |
Scoring guide: a composite below 3.0 means the constraint is customer understanding rather than product execution; the prescriptions later in this post assume that reading.
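The weighted composite from the scorecard table is a one-liner once the weights are fixed. A sketch using the weights above (dimension keys are my shorthand):

```python
def pmf_score(scores):
    """Weighted PMF composite on a 1-5 scale.

    scores: dict mapping each dimension to a 1-5 rating, using
    the weights from the scorecard table above.
    """
    weights = {
        "sean_ellis": 0.25,        # % Very Disappointed
        "retention_curve": 0.25,   # curve shape
        "qualitative_pull": 0.20,  # pull signals from interviews
        "cohort_trajectory": 0.15, # successive-cohort improvement
        "organic_referral": 0.15,  # referral-attributed acquisition
    }
    assert set(scores) == set(weights), "rate every dimension"
    return round(sum(weights[d] * scores[d] for d in weights), 2)

# Strong survey and retention, weaker referral loop:
# 0.25*4 + 0.25*4 + 0.20*3 + 0.15*3 + 0.15*2 = 3.35
```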
This is the hardest decision in a startup. Most founding teams err toward persistence because pivoting feels like admitting failure. But a well-executed pivot is not failure — it is accelerated learning.
Most pivots that work are adjacent — they keep the core technical capability or customer relationship while changing the problem being solved, the target segment, or the packaging.
Dropbox started as a file-sync tool for developers. The pivot to consumer storage kept the core technology and added simplicity. Slack began inside a gaming company: Tiny Speck built the game Glitch, and the clean-break pivot to business messaging kept the team but abandoned the product entirely.
Clean breaks are rare and extremely risky. Before executing one, exhaust the adjacent pivot options. The sunk cost of existing customers, data, and domain knowledge is a real asset.
My test for whether to pivot: can I keep at least two of the three elements — same team, same technology, same customer? If yes, it is an adjacent pivot and worth exploring. If none of the three survive, it is a restart, and the bar for that must be extraordinarily high.
AI products exhibit PMF signals that differ materially from traditional SaaS. I cover the full AI-specific diagnostic stack — including output quality thresholds and workflow integration signals — in PMF for AI Products: Different Signals, Different Timeline, but the key differences are worth summarizing here.
AI products are extraordinarily easy to demo well. A GPT-4-powered tool can produce genuinely impressive outputs in a 20-minute demo. This creates a systematic inflation of early enthusiasm that does not translate to retention.
Traditional PMF signals — "I loved the demo, I'm signing up" — are unreliable for AI products because the demo experience is categorically better than the day-to-day experience, which requires prompt engineering skill, integration into existing workflows, and tolerance for output variability.
Adjustment: Add a mandatory usage period (7–14 days) before running the Sean Ellis survey. Do not survey users based on demo enthusiasm.
Many AI products have a PMF that is contingent on output quality exceeding a threshold. Below the threshold, the product is interesting but not useful. Above the threshold, it becomes habit-forming.
This threshold is non-linear. Users who experience 90% accuracy may be moderately satisfied. Users who experience 97% accuracy are often deeply attached. The jump from 90% to 97% looks like 7 points, but it cuts the error rate from 1-in-10 to 1-in-33, and in practice it is the difference between "useful sometimes" and "I use this for everything."
Adjustment: Track accuracy-per-cohort correlation with retention. The inflection in the retention curve often corresponds precisely to the accuracy improvement that crossed the threshold.
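A minimal way to run that check is a per-cohort correlation between measured output accuracy and a retention milestone. A sketch with purely illustrative data (the threshold effect shows up as a jump in both series at the same cohort):

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative cohorts: model accuracy vs D30 retention.
accuracy  = [0.88, 0.90, 0.93, 0.96, 0.97]
retention = [0.21, 0.22, 0.27, 0.38, 0.41]
r = pearson(accuracy, retention)
# A correlation near 1, with both series jumping at the same cohort,
# is the signature of retention inflecting at the accuracy threshold.
```

Correlation alone does not locate the threshold; eyeball both series for the cohort where the jump happens.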
Traditional software PMF is signaled by adoption frequency. AI PMF is better signaled by workflow depth — not "does the user open the product daily" but "has the user restructured their workflow around the product."
Workflow restructuring is hard to measure directly. Proxies: integration with other tools (API calls, Zapier connections, native integrations), user-created templates or custom instructions, account sharing with colleagues.
PMF search is often described as if it is fundamentally unpredictable — either you find it or you don't. This is wrong. There are structural practices that shorten the search.
The fastest path to PMF is extreme specificity. Not "small business owners" but "independent financial advisors with $20M–$100M AUM who manage their own compliance." Not "content creators" but "YouTube creators with 100K–500K subscribers who monetize through sponsorships."
Extreme specificity allows you to find and interview every relevant user quickly, write messaging that reads as if it were written for one person, and treat every piece of feedback as coming from the same problem context instead of averaging across incompatible segments.
The objection I always hear: "But that market is too small." The answer: you are not trying to serve the niche forever. You are trying to find pull fast. Once you find it in the niche, you expand systematically. Every major consumer company today found PMF in a segment that appeared too small to build a business on.
Most companies treat churn as a metric to minimize. The highest-PMF companies treat churn as an information source to exploit. Every churned user knows something about where the product fails that your retained users cannot tell you.
Systematic churn interviews — 20 minutes, founder-led, within 2 weeks of churn — are the fastest way to isolate the gap between what users expect and what the product delivers. This gap is the exact definition of the PMF problem.
For each user segment you are targeting, ask: what specific version of themselves do they want to become, and can your product make them that version? The "superhero" frame is not a marketing exercise — it is a PMF hypothesis. If you can articulate a specific transformation your product enables for a specific user, you have a testable PMF claim.
When your PMF score is below 3.0, the constraint is almost never code. It is understanding. Run 50 structured customer conversations — not sales calls, not demos, but genuine problem exploration interviews — before writing another line of product code. The Jobs-to-be-Done interview framework (developed by Clayton Christensen and refined by Bob Moesta) is the right tool here.
The gap between activation and value realization is where most PMF searches get stuck. A feature can be adopted (users click on it) without delivering value (users achieve their goal). Map the exact moment in the product journey where your best users first experience the core value, then measure what percentage of new users reach that moment and how long it takes.
Every percentage point improvement in time-to-value and percentage-reaching-value translates directly into a higher Sean Ellis score and better retention.
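Both numbers (percent reaching the value moment, and how long it takes) come from one pass over new-user data. A sketch, assuming you have already identified the value-moment event; the input shape is my simplification:

```python
def value_funnel(days_to_value):
    """Activation-to-value funnel for a batch of new users.

    days_to_value: one entry per new user, giving days from signup
    to the first core-value event, or None if never reached.
    Returns (% reaching value, median days-to-value among reachers).
    """
    reached = sorted(d for d in days_to_value if d is not None)
    pct = 100.0 * len(reached) / len(days_to_value)
    median = reached[len(reached) // 2] if reached else None
    return round(pct, 1), median

# 8 signups; 6 reach the value moment in 1-5 days, 2 never do:
# 75% reach value, with a median of 3 days
print(value_funnel([1, 2, 3, None, 5, None, 2, 4]))
```

Track both numbers per weekly signup cohort; time-to-value should fall and percent-reaching-value should rise as onboarding improves.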
Q: Can you have PMF in one segment but not others?
Yes — and this is one of the most common situations. A B2B SaaS product might have strong PMF with mid-market companies (50–500 employees) but weak PMF with enterprise accounts (1,000+ employees). The mistake is averaging across segments. Measure PMF by segment separately and make deliberate decisions about which segment to prioritize before expanding.
Q: Does strong revenue growth mean you have PMF?
Not necessarily. Revenue growth is a lagging indicator that can be driven by strong sales execution, favorable market conditions, or exceptional onboarding — none of which indicate durable fit. Revenue combined with strong retention and declining CAC is a more reliable composite signal. For the specific metrics that reveal whether an AI product is actually retaining and expanding, see AI Product Metrics That Matter.
Q: What is the difference between PMF and product-channel fit?
PMF is about whether the product creates enough value for users to sustain engagement. Product-channel fit is about whether there is an efficient, scalable mechanism to reach those users. You can have PMF without product-channel fit — the product is loved, but you cannot reach new users economically. Both are required for a scalable business, and they are solved with different tools.
Q: How long should it take to find PMF?
Consumer products typically find initial PMF in 12–24 months from launch, with significant variance. B2B products typically take longer — 18–36 months — because sales cycles are slower and feedback loops are longer. AI products are showing PMF signals faster than historical averages for some categories, and much slower for others where the workflow integration challenge is high. If you have been searching for more than 30 months with no meaningful improvement in your PMF scorecard, the pivot calculus changes materially.
Q: Should I mention we have PMF to investors?
Only if you can back it with data. Investors hear PMF claims constantly and have high calibration for what it actually means. "We have PMF" followed by retention curves, Sean Ellis scores, and qualitative evidence is compelling. "We have PMF" followed by revenue numbers is a yellow flag — it suggests you are conflating traction with fit.
Q: What kills PMF after you have found it?
Several things: expanding to a new segment before fully serving the original one (the new users pull the product in a direction that erodes fit for existing users), hiring a product team that optimizes for metrics rather than value, competitive pressure that forces price changes that alter the economic basis of fit, and market shifts (regulatory, technological, or behavioral) that change what users need. PMF is not a permanent asset — it requires maintenance.
Q: Is NPS a good PMF proxy?
NPS is directionally useful but has significant limitations as a standalone PMF metric. An NPS above 50 in your active user segment is a strong signal. But NPS is a point-in-time snapshot, is sensitive to survey timing and audience selection, and conflates product satisfaction with customer success quality. Use it as one input in the diagnostic battery, not as the primary verdict.
Finding PMF is the central challenge of early-stage company building — not fundraising, not hiring, not marketing. Everything else is downstream of it. The founders who find it fastest are the ones who measure with precision, interpret with humility, and act with focus.
The framework here gives you the instruments. The discipline is yours.