Customer Interview Question Template for Founders
50+ customer interview questions by type and goal, with the JTBD framework, Mom Test rules, synthesis process, and opportunity scoring method.
TL;DR: Most founder interviews produce garbage data — not because founders lack empathy, but because they ask leading questions, reveal the product too early, and interview people who want to be encouraging. This post gives you a complete 50+ question bank across discovery, validation, and churn interview types, presented as reference tables; the Jobs-to-Be-Done framework adapted for AI-era products; Mom Test rules that actually work; a synthesis process you can run in a spreadsheet; and an Opportunity Scoring method for turning raw interviews into prioritized product decisions. Skip to the question bank tables if you are in a hurry.
I have done hundreds of customer interviews across my own companies and the startups I have backed. I have watched hundreds more. The uncomfortable truth is that the majority produce data that is worse than useless — not because customers are dishonest, but because the interview structure is systematically designed to generate flattering noise rather than actionable signal.
Let me be specific about what bad interviews look like in practice, because founders rarely recognize them in the moment.
The easiest people to book a call with are the ones who already like you — friends, former colleagues, warm intros from investors, early fans who responded enthusiastically to your launch post. These are also the people most motivated to encourage you.
This is not a character flaw on their part. It is social behavior. When someone has sacrificed 30 minutes to help you, they feel an implicit obligation to be constructive. Telling you that your idea is poorly conceived or your product is confusing feels unkind. So they emphasize what works, soften what does not, and send you away with a rosier picture than reality warrants.
The solution is not finding harsher critics. It is structuring your questions so that praise requires the same concrete evidence as criticism. Instead of "what do you think of this?" ask "walk me through the last time you had this problem." Opinion is cheap. A specific past incident is not.
"Would you pay for this?" is the most commonly asked and most useless question in early-stage customer research. The answer is almost always yes. And the yes tells you almost nothing.
Hypothetical questions ask people to predict their own future behavior. Decades of behavioral economics research establish that humans are catastrophically bad at this, especially for products they have not encountered before. They will tell you they would use the gym membership, the meal prep delivery, the AI writing assistant — and then not use them when given the option.
The alternative is behavioral questions grounded in the past. "Tell me about the last time you dealt with this problem. What did you do? How long did it take? What happened next?" Past behavior is a far better predictor of future behavior than stated intent.
Founders who are excited about their product — which is most founders — have a powerful urge to show the demo early. The moment the demo appears, the interview transforms. The customer stops describing their world and starts reacting to yours. Every answer from that point is anchored to your solution. You lose the ability to discover whether your framing of the problem is even correct.
The rule in discovery interviews is simple: no demos. If the customer asks to see the product, say "I would love to show you, but I want to make sure I understand your situation first so that what I show you is relevant. Can I ask a few more questions?" Most people accept this framing without hesitation.
Polite customer interviews confirm that a problem exists without establishing how painful it really is. The result is a long list of problems the customer mentioned — none of which are sorted by actual business impact.
The questions that reveal real priority: "How much does this cost you in time or money?" "What happens when it goes wrong?" "Have you tried to solve this before? What did you try? Why did that not work?" If a customer cannot tell you the cost of a problem or what happens when it fails, it is probably not a high-priority problem regardless of what they say in the moment.
The last failure mode, the novelty trap, is specific to founders building AI products. When you show someone a product that does something genuinely novel — an AI that writes their emails, a copilot that analyzes their pipeline — they are reflexively excited. The demo impresses. They say "this is amazing." You leave the call feeling validated.
But amazement is not adoption. I have had prospects rave about demos and never sign up. I have had enterprise buyers forward my deck internally with enthusiasm and then go silent for six months. Excitement about novelty and willingness to change behavior are completely different states. The only way to separate them is to anchor every enthusiastic reaction to a specific behavioral commitment: "When exactly in your workflow would you use this? What would you stop doing instead?"
The goal of a customer interview is not to validate your hypothesis. It is to update your model of the customer's world with the highest-quality evidence available. That requires questions whose answers can genuinely challenge your assumptions — not just confirm them.
Not all customer interviews serve the same purpose. Using a discovery interview structure when you need churn data is as counterproductive as using a validation interview when you are still trying to map the problem space. The three types differ in goal, timing, structure, and what good output looks like.
| Dimension | Discovery Interview | Validation Interview | Churn Interview |
|---|---|---|---|
| Primary goal | Map the problem space; identify real pain | Test whether your proposed solution fits | Understand why customers left |
| When to use | Pre-product; entering new segment | Before building a major feature | Within 48 hours of any churn event |
| Who to interview | ICP-matched people (they do not need to know your product) | Current users or design partner candidates | Churned or inactive customers |
| Demo appropriate? | Never | Yes, with specific fit questions | Only to understand their past experience |
| Key outputs | Problem map, workflow, pain priority | Feature-fit signals, objections, WTP | Real churn reasons, root cause, salvageable accounts |
| Ideal length | 35 to 45 min | 30 to 45 min | 20 to 30 min |
Run discovery interviews until you can predict what the next interviewee will say before they say it. That moment of predictability is the signal that you have saturated the problem space in that customer segment — it is time to move to validation.
Start churn interviews the moment you have any paying customers, even if churn is currently zero. You want the habit built before you desperately need the data.
Jobs-to-Be-Done (JTBD) is the most powerful framework I have encountered for understanding why customers buy products and what they hire those products to accomplish. The core insight, developed by Clayton Christensen and extended practically by Bob Moesta and Tony Ulwick, is that customers do not buy products — they hire them to make progress in their lives or work.
Understanding the job your product is hired to do answers questions that feature-driven research cannot: Who are your real competitors? (Often not the other software in the category, but the spreadsheet, the agency, the workaround.) What does "better" actually mean to the customer? What would cause them to fire your product and hire something else?
| Component | The Question It Answers | Example: Customer Success Tool |
|---|---|---|
| Functional job | What practical outcome are they trying to achieve? | Ensure accounts are on track to renew |
| Emotional job | How do they want to feel while doing it? | Confident, not anxious, about at-risk accounts |
| Social job | How do they want to be perceived by others? | Seen as proactive — catches problems before escalation |
| Job context | When and under what conditions does the job arise? | End of month, preparing for QBR |
Most product teams only research the functional job. The emotional and social dimensions are where your positioning and messaging actually live — and where differentiation is hardest to copy.
Every customer decision involves four forces. Two push toward switching; two resist it.
Forces pushing toward change:

1. The push of the current situation: the frustration or pain that makes the status quo untenable
2. The pull of the new solution: the attraction of what the new option promises

Forces resisting change:

3. The comfort with existing habits
4. The anxiety about the new thing not working out
Understanding these forces explains why customers switch, why they do not, and what marketing actually needs to say. Most SaaS marketing addresses only the pull side — it shows a beautiful product. It ignores the inertia forces: the fear of migration, the learning curve, the internal politics of getting a new tool approved.
The key JTBD move is the timeline excavation. Ask the customer to reconstruct the specific moment they first decided to look for a new solution — or to adopt your product — in chronological detail.
This chronology surfaces all four forces in action. You hear the push (the frustration that created urgency), the pull (what attracted them to the new option), and the inertia (what almost stopped them). That data is your positioning document, your objection-handling guide, and your onboarding design brief simultaneously.
The best JTBD interviews feel like archaeology — you are brushing dirt away from a timeline until the real story emerges. Your default response to almost everything should be "and then what happened?"
Rob Fitzpatrick's The Mom Test is the most practically useful book ever written about customer conversations. The core premise: if your questions are bad enough, even your harshest critics will accidentally mislead you. The title refers to the test: ask questions that your mom could not lie to you about, because they are about verifiable facts rather than opinions about the future.
The three Mom Test rules:
Rule 1: Talk about their life, not your idea. Bad: "What do you think of this concept?" Good: "Walk me through the last time you had to deal with [the problem this solves]."
Rule 2: Ask about specifics in the past, not generics or predictions. Bad: "How often would you use this?" Good: "How many times did this come up for you last month? What did you do each time?"
Rule 3: Talk less and listen more. The typical founder interview has the founder talking 60 to 70% of the time. A productive interview reverses that ratio. Your goal is 20% talking, 80% listening. Use silence as a tool — when a customer trails off and pauses, wait. In that pause is usually the most honest thing they will say.
AI products have a specific complication: customers cannot give you reliable past-behavior data about something they have never used. Nobody was routinely using an AI pipeline copilot two years ago. So you cannot ask "tell me about the last time you used AI to do this" in a discovery interview.
Here is how I adapt the Mom Test for AI-native products:
Interview around the underlying job first. Before AI enters the conversation at all, understand the functional job completely. How do they do this today? How long does it take? What breaks? What do they do when it breaks? Get the full picture of the existing workflow without AI in the frame.
Use past adoption behavior as a proxy. Instead of "would you use AI for this," ask: "Have you tried any tools that automated part of this workflow? What did you try? Why did you stop?" This reveals behavioral data about their appetite for automation without requiring a hypothetical.
Evaluate outputs, not the product. If you need to test whether someone would actually engage with an AI feature, show them the output and let them evaluate it. "Here are five email drafts generated from your last campaign brief — which would you actually send? What would you change?" This grounds the conversation in real evaluation behavior.
Probe the novelty trap directly. If someone says "this is amazing" in the first three minutes of a demo, ask immediately: "Can you tell me about the last time a tool excited you this much in a demo? What ended up happening with it?" This question separates genuine enthusiasm from reflexive tech novelty. The answer is almost always revealing.
| Common Mom Test Violation | Why It Produces Bad Data | AI-Adapted Replacement |
|---|---|---|
| "Would you use an AI to do X?" | Asks for future prediction, not behavior | "How do you do X today? What have you tried to speed it up?" |
| "What features would you want?" | Invites wishlist thinking, not problem description | "Walk me through the last time X was painful. What did you actually do?" |
| "Isn't it frustrating when Y happens?" | Leading question — confirms your existing belief | "Tell me about a time Y affected your work. What happened?" |
| "Would this save you time?" | Hypothetical with obvious self-flattering answer | "How long did you spend on X last week? What did you do with that time?" |
| "Do you think AI will change this space?" | Pure opinion — zero behavioral value | "Have you adopted any AI tools in the last 12 months? What stuck? What didn't?" |
This is the complete question bank I use across all three interview types. A single session should use 8 to 12 questions maximum — treat this as a menu, not a script. Follow threads that open up. Skip any question already answered naturally in the conversation.
Discovery interview questions (D1 to D24):

| # | Category | Question | Goal |
|---|---|---|---|
| D1 | Context | "Before we get started, can you tell me about your role and what a typical week looks like for you?" | Orient to their daily context — do not skip this |
| D2 | Context | "What are the two or three biggest problems you are dealing with right now in your work?" | Surface high-level pain on their terms |
| D3 | Context | "What does success look like in your role — how does your team measure it?" | Understand what they are actually rewarded for |
| D4 | Context | "What decisions do you make most frequently? Which ones take the most time?" | Identify high-frequency, high-effort tasks |
| D5 | Workflow | "Can you walk me through exactly how you handle [workflow] today, from start to finish?" | Map the workflow step by step — your core question |
| D6 | Workflow | "What tools do you use for this? How do they fit together?" | Identify the current competitive landscape |
| D7 | Workflow | "Where does this process break down most often?" | Find highest-frequency failure points |
| D8 | Workflow | "What parts of this do you do manually that you wish were automated?" | Surface automation opportunity |
| D9 | Workflow | "Who else is involved in this process? What do they do?" | Map stakeholders and buying committee |
| D10 | Workflow | "How long does the full process take, start to finish?" | Quantify the time cost |
| D11 | Workflow | "How often do you do this — daily, weekly, monthly?" | Establish frequency to size the pain |
| D12 | Workflow | "What happens if you get this wrong? What are the consequences?" | Establish stakes and severity |
| D13 | Pain | "What is the most frustrating part of [workflow]?" | Get the emotional core of the problem |
| D14 | Pain | "If you had to describe this problem in one sentence to your CEO, how would you say it?" | Surface the business framing — often your headline copy |
| D15 | Pain | "Have you tried to solve this before? What did you try?" | Reveal past search behavior |
| D16 | Pain | "Why did that not work?" | Understand the failure mode of existing solutions |
| D17 | Pain | "How much does this cost you — in time, money, or in things you cannot do because of it?" | Quantify the pain — let them name a number |
| D18 | Pain | "If this problem went away tomorrow, what would be different about your job?" | Surface the underlying job to be done |
| D19 | Pain | "Is this a problem your competitors face too, or is it specific to your situation?" | Assess market breadth |
| D20 | Pain | "Who in your organization cares most about solving this? Who has budget authority?" | Find the economic buyer |
| D21 | Priority | "Of the problems you have described, which one is causing you the most pain right now?" | Force prioritization across pain points |
| D22 | Priority | "If you could only fix one thing in [workflow] this quarter, what would it be?" | Identify the highest-priority job |
| D23 | Priority | "What problem are you most actively trying to solve — have you allocated time or budget to it?" | Distinguish real priority from theoretical concern |
| D24 | Priority | "What would it mean for you personally if this got solved?" | Get the emotional and social dimensions of the job |
Validation interview questions (V1 to V17):

| # | Category | Question | Goal |
|---|---|---|---|
| V1 | Hypothesis | "I want to test a specific assumption with you. We believe [hypothesis]. Does that match your experience?" | State and test your hypothesis explicitly |
| V2 | Solution Fit | "Now that you have seen [product/prototype], where in your workflow would you use this?" | Test for a specific integration point — vague answers predict low adoption |
| V3 | Solution Fit | "What part of what you saw is most relevant to the problems you described?" | Identify fit signals |
| V4 | Solution Fit | "What is missing from what you saw that you would need before using it regularly?" | Surface gaps before you build |
| V5 | Solution Fit | "On a scale of 1 to 10, how well does this match the problem you described? What would make it a 10?" | Calibrate fit — the follow-up is the real answer |
| V6 | Solution Fit | "If you woke up tomorrow and this was fully integrated into your workflow, what would be different?" | Reveal the imagined outcome — compare to your value prop |
| V7 | Adoption | "Who else on your team would need to be involved in using this?" | Map co-users — they often have veto power |
| V8 | Adoption | "What would you need to see before making this part of your routine?" | Understand proof requirements for adoption |
| V9 | Adoption | "What would get in the way of you actually using this consistently?" | Surface blockers before they kill the deal |
| V10 | Adoption | "Have you used AI tools before for similar tasks? How did that go?" | Assess AI adoption history and risk tolerance |
| V11 | Adoption | "What happens when the AI is wrong? How would you catch that?" | Customers who have not thought about failure have not really thought about using the product |
| V12 | Buying | "Is this the kind of tool you would buy yourself, or would it need to go through procurement?" | Qualify the buying path — SMB vs. enterprise distinction |
| V13 | Buying | "What budget category would this come from — tools, software, headcount?" | Locate the budget and competitive displacement |
| V14 | Buying | "What would you expect to pay? What seems too expensive? What seems suspiciously cheap?" | Triangulate willingness to pay — use all three anchors |
| V15 | Buying | "Who would need to approve a purchase like this? What would you need to show them?" | Map the internal approval path |
| V16 | Buying | "If this was available today, how quickly could you move to a purchase decision?" | Test urgency — vague answers signal low priority |
| V17 | Commitment | "If we built exactly what you described, would you be willing to be a reference customer?" | Willingness to be a reference is real skin in the game |
Churn interview questions (C1 to C17):

| # | Category | Question | Goal |
|---|---|---|---|
| C1 | Opening | "I am not here to win you back — I genuinely want to understand your experience so we can build a better product. Is that okay?" | Set the frame first — without this, all answers are polite lies |
| C2 | Opening | "Can you take me back to when you first decided to try us? What were you hoping it would do for you?" | Anchor to original intent — reveals the job they hired you for |
| C3 | Inflection | "At what point did your experience start to feel like it wasn't meeting your expectations?" | Find the inflection point — often earlier than the cancellation date |
| C4 | Inflection | "What was happening in your work around that time?" | Get context — often reveals a trigger event |
| C5 | Inflection | "Was there a specific moment or event that made you decide to stop?" | Identify the proximate cause |
| C6 | Failure | "What did you try before making the decision to cancel?" | Understand internal escalation — were they trying to fix it? |
| C7 | Failure | "What problem were you originally trying to solve when you started using us? Did we solve it?" | Test whether you delivered on the original job |
| C8 | Failure | "If you had to summarize in one sentence why it did not work out, what would you say?" | Write verbatim — this is your retention brief |
| C9 | Failure | "What did you end up doing instead — another solution, or did you stop trying?" | Identify the alternative and the job now being done |
| C10 | Root Cause | "Was there a specific feature or outcome you needed that we did not have?" | Pinpoint the gap |
| C11 | Root Cause | "Was there something we could have done differently that would have made this work?" | Surface product roadmap input |
| C12 | Root Cause | "How did your team react to the decision? Was there disagreement internally?" | Reveal champion and detractor dynamics |
| C13 | Competition | "Are you using anything else now to handle this problem? What is better about it?" | Direct competitive intelligence without defensive filtering |
| C14 | Competition | "What is worse about what you are using now?" | Find the re-engagement angle |
| C15 | Re-engage | "If we fixed [specific issue], would you consider coming back?" | Test re-engagement potential — a prioritization signal, not a sales move |
| C16 | Re-engage | "On a scale of 1 to 10, how likely would you be to recommend us to a colleague, even knowing the issues?" | NPS from churned user — even churned customers can be promoters |
| C17 | Advice | "What is one piece of advice you would give me as a founder?" | Low-stakes open ask — often produces the most honest answer of the whole interview |
Universal follow-up probes (P1 to P10), usable in any interview type:

| # | Question | When to Use |
|---|---|---|
| P1 | "Can you tell me more about that?" | Deepen any answer — use freely, you cannot overuse this |
| P2 | "What do you mean by [specific word they used]?" | Clarify their language and lock in the definition |
| P3 | "And then what happened?" | Extend a timeline — your single most useful follow-up |
| P4 | "Why was that important to you?" | Surface the underlying motivation beneath the behavior |
| P5 | "How did that make you feel?" | Get the emotional stakes — unlocks the real story |
| P6 | "Can you give me a specific example?" | Push from generality to a concrete incident |
| P7 | "Who else would agree with that? Who would push back?" | Map the stakeholder landscape around a belief |
| P8 | "If you had to guess, what would you say?" | Unlock reluctant customers who hedge everything |
| P9 | "What would the best version of this look like?" | Surface the aspiration behind the complaint |
| P10 | "Is there anything I have not asked that you think I should know?" | Catch everything you missed — always ask this last |
The best questions in the world cannot save you from interviewing the wrong people. Recruiting is the most underinvested part of the customer interview process — most founders talk to whoever is willing to show up.
Before scheduling any interview, write down your ideal respondent profile precisely. For B2B:
| Dimension | What to Specify |
|---|---|
| Job title | Exact titles (not departments) |
| Company size | Employee count or revenue band |
| Industry | Specific verticals, not broad categories |
| Problem ownership | Must own the workflow you are researching |
| Buying authority | For validation: must be able to influence purchase decisions |
| Technology profile | Tools they currently use — relevant to your integration story |
Then write a 3-question screener form. Send it before scheduling. Anyone who does not match should be respectfully declined or scheduled for a later research phase.
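To make the screener concrete, here is a minimal sketch of scoring form responses against an ICP profile. The field names, titles, and size band are hypothetical placeholders, not a prescription — substitute the profile you wrote in the table above.

```python
# Hypothetical screener filter: all field names, titles, and thresholds
# below are illustrative placeholders -- adapt them to your own ICP.

ICP_TITLES = {"vp customer success", "head of customer success", "cs operations manager"}
MIN_EMPLOYEES, MAX_EMPLOYEES = 50, 500  # company size band

def passes_screener(response: dict) -> bool:
    """Return True if a screener response matches the ideal respondent profile."""
    title_ok = response["job_title"].strip().lower() in ICP_TITLES
    size_ok = MIN_EMPLOYEES <= response["employee_count"] <= MAX_EMPLOYEES
    owns_workflow = response["owns_renewal_workflow"]  # yes/no screener question
    return title_ok and size_ok and owns_workflow

responses = [
    {"job_title": "VP Customer Success", "employee_count": 220, "owns_renewal_workflow": True},
    {"job_title": "Marketing Intern", "employee_count": 12, "owns_renewal_workflow": False},
]

to_schedule = [r for r in responses if passes_screener(r)]
print(f"{len(to_schedule)} of {len(responses)} respondents match the ICP")
```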
| Channel | Best For | Notes |
|---|---|---|
| LinkedIn cold outreach | Discovery with specific ICP | Personalize heavily — generic DMs get 1 to 2% response rates |
| Referrals from existing interviews | All types | Always ask for one referral at the end of every call |
| Existing customers (CRM export) | Validation, churn | Selection bias — happiest customers respond most readily |
| Churned users (CRM) | Churn interviews | Reach out within 48 hours or signal quality degrades fast |
| Competitor review sites (G2, Capterra) | Discovery, competitive intelligence | Negative reviews are a goldmine for pain language |
| Vertical Slack communities | Discovery | Post in relevant channels — frame as research, not pitch |
| UserInterviews.com or Respondent.io | Validation when you need participants quickly | Costs money; best for consumer or horizontal B2B |
| Conference attendees | Validation with specific titles | DM people at your vertical conference pre or post event |
For B2B cold discovery interviews, a $75 to $100 gift card increases response rates 2 to 3x. For warm leads or existing customers, a personal email from the founder — with a commitment to share back key themes — outperforms any incentive. Senior buyers respond to the intelligence value: "I will send you a one-page summary of patterns I am hearing across companies like yours."
For churn interviews, a short, personal founder email with no incentive mentioned outperforms automated sequences with gift cards by a wide margin. I consistently see 30 to 40% response rates on founder-written churn emails versus 5 to 10% on automated ones.
35 minutes is the sweet spot for discovery. 30 minutes for validation. 20 to 25 minutes for churn, where the scope is narrower. Book a 45-minute slot for any interview you expect to go deep, but signal your intention to respect their time by finishing at or before 35 minutes.
Always use video. Facial expressions and energy levels carry as much data as words. When someone says "that's fine" with a flat affect, that is a no dressed as a yes. You need to see it.
| Phase | Duration | What You Do |
|---|---|---|
| Warm-up | 3 to 4 min | Introduce yourself, explain the goal, reassure them there are no wrong answers |
| Context setting | 3 to 5 min | "Tell me about your role and what you are responsible for" |
| Main questions | 18 to 25 min | Work through your question set — follow threads, adapt in real time |
| Wrap-up | 3 to 5 min | "Is there anything I haven't asked that I should know?" plus referral ask |
Ask permission to record. The phrasing that almost never gets declined: "Do you mind if I record this? It is just for my notes so I don't miss anything — I won't share it externally." Use an AI transcription tool (Otter.ai, Fireflies.ai, or similar) so you can focus entirely on listening and following threads, not on typing.
Even with recording, keep a live note document with two columns: Observation (what they said) and Interpretation (what you think it means). Fill in observations during the call. Fill in interpretations after. This separation forces intellectual honesty — it stops you from pattern-matching in real time before you have heard everything.
"Thank you for making the time — I genuinely appreciate it. A couple of things before we start: there are no right or wrong answers here, I am trying to learn, not pitch you anything. I may ask follow-up questions that seem obvious — that is intentional, I just want to make sure I understand completely. And please feel free to tell me if a question does not make sense. Ready to start?"
This does four things: sets the no-pitch expectation, gives permission for honest answers, pre-explains the follow-up pattern so it does not feel confrontational, and transfers some control to the participant.
Within two hours, write your "hot take" — 3 to 5 sentences from memory, before reviewing the transcript. Cover: the single most surprising thing you heard, the most important pain point mentioned, one verbatim quote you want to keep, and one thing you would do differently next time. The hot take captures your immediate reaction, which is often the most signal-dense data you have.
Raw interview data — transcripts, notes, hot takes — is not insight. Insight requires synthesis: finding patterns across multiple conversations and drawing conclusions more reliable than any single data point.
After every five interviews, run a 60-minute affinity mapping session.
Step 1 — Extract observations. Read all five transcripts. Pull out every observation that seems meaningful: a quote, a workflow detail, a pain point, a surprising behavior. Write each on a separate card (sticky notes, FigJam, or a spreadsheet row). Aim for 15 to 25 observations per interview.
Step 2 — Group by theme. Without pre-imposing categories, move cards into groups based on similarity. Let clusters emerge from the data. Name each cluster with a one-line problem statement — not "pricing" but "customers cannot justify cost without a clear ROI calculation."
Step 3 — Score by frequency and intensity. For each cluster: how many interviews mentioned it? For those that did, how intensely (large portion of the interview, strong emotional language, volunteered without prompting)? High frequency plus high intensity are your highest-priority signals.
Step 4 — Write insight statements. For each high-priority cluster, write: "[X of Y interviewees] experience [this behavior or pain or belief]. This matters because [why it is significant]. The implication is [what to do about it]."
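If you keep the cards in a spreadsheet, Steps 2 and 3 reduce to a small amount of bookkeeping. A minimal sketch, assuming each observation row is tagged with an interview ID, a cluster name, and a 1-to-3 intensity rating (the rows and the intensity scale are illustrative):

```python
# Rank affinity clusters by frequency across interviews, then avg intensity.
# Observation rows and the 1-3 intensity scale are illustrative examples.
from collections import defaultdict

observations = [
    # (interview_id, cluster, intensity 1-3)
    ("int-01", "cannot justify cost without a clear ROI calculation", 3),
    ("int-02", "cannot justify cost without a clear ROI calculation", 2),
    ("int-04", "cannot justify cost without a clear ROI calculation", 3),
    ("int-03", "manual QBR prep consumes hours every month", 2),
    ("int-05", "manual QBR prep consumes hours every month", 1),
]

clusters = defaultdict(lambda: {"interviews": set(), "intensities": []})
for interview_id, cluster, intensity in observations:
    clusters[cluster]["interviews"].add(interview_id)
    clusters[cluster]["intensities"].append(intensity)

# Sort by how many distinct interviews mentioned the cluster, then by
# average intensity -- high frequency plus high intensity rises to the top.
ranked = sorted(
    clusters.items(),
    key=lambda kv: (len(kv[1]["interviews"]),
                    sum(kv[1]["intensities"]) / len(kv[1]["intensities"])),
    reverse=True,
)
for name, data in ranked:
    freq = len(data["interviews"])
    avg = sum(data["intensities"]) / len(data["intensities"])
    print(f"{name}: {freq} interviews, avg intensity {avg:.1f}")
```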
After every 10 interviews, produce a synthesis document with these sections:
| Section | Contents |
|---|---|
| Participant summary | Roles, company sizes, use cases — anonymized |
| Top 5 insights | Insight statements with supporting verbatim quotes |
| Workflow map | How your ICP does the job today, step by step |
| Pain priority ranking | Top 5 pain points by frequency and intensity |
| Surprise findings | Things that contradicted your prior assumptions |
| Open questions | Hypotheses not yet resolved |
| Recommended actions | Specific product, sales, or research decisions implied |
Opportunity Scoring, developed by Tony Ulwick as part of Outcome-Driven Innovation, is the most rigorous method I have found for translating interview data into prioritized product decisions.
The core insight: the best product opportunities are jobs that are both important to the customer and poorly served by their current solution. High importance plus high satisfaction means no opportunity — the incumbent wins. High importance plus low satisfaction is the gap your product should fill.
Opportunity Score = Importance + max(Importance - Satisfaction, 0)
In plain language: importance is the baseline. If satisfaction is lower than importance, the gap adds to the score. If satisfaction is already high, the score equals importance only. Scores above 10 are generally strong opportunities. Scores above 14 are critical priorities.
After qualitative discovery interviews have identified your top 10 to 15 jobs, survey 20 to 50 ICP-matched respondents. For each job, ask two questions on a 1 to 10 scale: how important is it that you can accomplish this job, and how satisfied are you with how you accomplish it today. Here is an example output for the customer success tool:
| Job to Be Done | Importance | Satisfaction | Opportunity Score | Priority |
|---|---|---|---|---|
| Know which accounts are at risk before it is too late | 9.2 | 3.1 | 15.3 | Critical |
| Prepare QBR materials without hours of data collection | 8.7 | 2.8 | 14.6 | Critical |
| Identify expansion opportunities in existing accounts | 8.1 | 4.2 | 12.0 | High |
| Track product adoption per account automatically | 7.9 | 3.9 | 11.9 | High |
| Get alerts when account health drops below threshold | 8.4 | 5.1 | 11.7 | High |
| Generate renewal forecasts from account data | 7.3 | 3.5 | 11.1 | Medium |
| Share account context with the broader team | 6.8 | 5.4 | 8.2 | Low |
| Automate follow-up task creation after calls | 6.2 | 5.8 | 6.6 | Low |
| Log call notes and sync to CRM | 7.1 | 7.3 | 7.1 | Overserved |
| Build custom success plans per account | 5.9 | 6.2 | 5.9 | Ignore |
| Score Range | Interpretation | Action |
|---|---|---|
| Above 14 | Critical opportunity — important, very poorly served | Build first |
| 12 to 14 | Strong opportunity — important, underserved | Next phase |
| 10 to 12 | Moderate — worth including | Plan accordingly |
| 8 to 10 | Low priority — low importance or already served | Defer |
| Below 8 | Avoid — not important enough or well-covered | Skip |
| Satisfaction above Importance | Overserved — customers have more than they need | Do not invest further |
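As a sanity check, the scoring and banding can be reproduced in a few lines. A minimal sketch, using the formula above and the bands from the interpretation table; the two jobs shown are rows from the example table:

```python
# Opportunity Score = Importance + max(Importance - Satisfaction, 0),
# with priority bands taken from the interpretation table above.

def opportunity_score(importance: float, satisfaction: float) -> float:
    return importance + max(importance - satisfaction, 0)

def priority(importance: float, satisfaction: float) -> str:
    if satisfaction > importance:
        return "Overserved"
    score = opportunity_score(importance, satisfaction)
    if score > 14:
        return "Critical"
    if score >= 12:
        return "Strong"
    if score >= 10:
        return "Moderate"
    if score >= 8:
        return "Low"
    return "Avoid"

jobs = [  # (job, avg importance, avg satisfaction) from the example table
    ("Know which accounts are at risk before it is too late", 9.2, 3.1),
    ("Log call notes and sync to CRM", 7.1, 7.3),
]
for job, imp, sat in jobs:
    print(f"{job}: {opportunity_score(imp, sat):.1f} -> {priority(imp, sat)}")
```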
Opportunity scoring works best within a single customer segment. Mixing enterprise and SMB respondents, or different verticals, in the same scoring table produces averaged data that is meaningless for any specific segment. Segment first, then score.
Knowing what bad data looks like is as important as knowing how to collect good data. These are the signals that should trigger skepticism about what you heard.
They agreed with everything. A customer who never pushes back, never says "actually, that is not really a big deal for us," is being polite — not informative. Healthy interviews have friction. If you got zero pushback, you asked leading questions.
The energy was flat throughout. Real problems have emotional texture — frustration, embarrassment, urgency. If someone described a problem in a disengaged, theoretical way, it may be a problem they are aware of but have never felt urgency to solve. Awareness does not equal priority.
They changed their answer when you changed the framing. If they said "$200/month" and then you mentioned other companies paying "$500/month" and they immediately shifted — their answer was anchored to your framing, not their genuine willingness to pay.
They could not give a specific example. "This happens all the time" with no specific incident means the problem is theoretical. "It happened last Tuesday and here is exactly what I did" is lived experience. Only the latter predicts behavior.
They spent most of the time asking about your product. When the customer is more interested in your solution than in describing their problem, the interview has become a demo. Redirect: "I want to make sure I understand your situation before I show you anything — can we stay on the problem side for a few more minutes?"
You cannot identify a specific job to be done. If you walked out of a 40-minute conversation and cannot write one sentence describing the job the customer was trying to accomplish, the interview did not work. You talked too much, asked the wrong questions, or interviewed the wrong person.
Every interview sounds identical. Suspiciously uniform feedback across interviews is a sign of leading questions. Real customer variance is messy — different priorities, different workarounds, different contexts. If everyone sounds the same, you are eliciting confirmation, not discovery.
Nothing you heard would change your roadmap. Build a "surprises" column into your synthesis document. If you have zero surprises after 10 interviews, something is broken in your process. Either your questions are not probing deep enough, or you are talking to customers who are too similar to each other.
How many customer interviews do I need before I can act?
For discovery: 15 to 20 within your ideal ICP is typically sufficient to saturate the pattern space. You will know you are there when you can predict what the next interviewee will say before they say it. For validation: 8 to 12 with people who match your design partner profile. For churn: every single churned customer you can reach, with no exceptions, until the patterns are clear.
Should I interview competitors' customers?
Aggressively. Competitors' unhappy customers are your best discovery source and your richest pool of competitive intelligence. Find them in negative reviews on G2 and Capterra, in relevant subreddits, in niche Slack communities. The frame is not "come try us instead" — it is "tell me about your experience with X. What's working? What's not?" You get competitive data and qualified prospects in the same conversation.
What if the customer starts pitching ideas for my product?
Thank them and redirect. "That's really interesting — I want to make sure I understand the underlying problem before we get into solutions. Can you tell me what happens today when you try to do that?" Feature requests are never the real data. The problem underneath the feature request is what you need. Features are one particular customer's guess at a solution — your job is to understand what solution space even makes sense.
How do I interview customers without revealing my idea?
You do not have to conceal the idea — you just delay revealing it. Run the first two-thirds of the interview as pure problem discovery. Only if it is relevant do you share the concept, and even then share the minimum: "We are exploring a tool that does X. Does that resonate with what you described?" Watch their facial reaction before their words arrive.
What is the best way to recruit churn interview participants?
A short personal email from the founder with no incentive mentioned outperforms everything else. The email should: acknowledge the cancellation without defensiveness, state clearly that you are not trying to win them back, ask for 15 minutes because their perspective is genuinely valuable, and offer to share key themes back. That fourth element dramatically improves response rates — senior buyers see the intelligence value and respond to it.
How do I get honest answers about pricing?
Never ask "what would you pay?" directly. Triangulate from multiple angles: ask about internal justification ("How would you justify the cost to your boss — what would you need to show?"), use past purchase anchors ("What is the most you have spent on a tool like this? What made it worth it?"), and try an indirect reference framing ("Some of our customers in similar roles pay $X/month — does that seem reasonable, high, or low?"). Three different angles give you a range more reliable than any single answer.
Can I run customer interviews async — over email or forms?
You can, but signal quality drops significantly. The follow-up question is the most valuable instrument in an interview — "tell me more about that" or "why?" — and async formats cannot support natural follow-up. Email and forms work well for quantitative scoring (Opportunity Score surveys, NPS, post-onboarding check-ins). Use live video for qualitative exploration, discovery, and churn interviews. Even a 20-minute phone call produces five times the insight of a well-designed email survey.
What do I do when the customer says something that invalidates my current hypothesis?
First, resist every instinct to defend the hypothesis. The customer is not attacking you — they are giving you evidence. Thank them, ask follow-up questions to fully understand their perspective, and end the interview professionally. Then take 30 minutes to sit with the contradiction before deciding how to respond. Most invalidating signals, when examined carefully, do not destroy the hypothesis outright — they reveal a more accurate version of it. "We were right that the problem is painful, but we were wrong about who owns it" is progress, not failure.
How do I take notes during an interview without losing focus on listening?
Use a structured template with predetermined fields: workflow steps, pain points, verbatim quotes, surprises, follow-up questions needed. Filling predefined fields is much faster than free-form notes and produces more consistent, comparable data across interviews. Better still, use AI transcription and focus completely on listening and following threads during the call. Review the transcript within an hour while your hot take is still fresh.
How do I interview customers when I do not have any customers yet?
Interview your ideal ICP about the problem, not your product. You do not need customers to do customer discovery — you need people who have the problem you are trying to solve. Frame your outreach as research: "I am doing research on how [ICP role] handles [problem area]. Would you be willing to share your experience in a 30-minute call?" Most people are willing to talk about their work problems without any product pitch involved.
When does customer interview data go stale?
Discovery data has a shelf life of roughly three to six months in fast-moving categories — shorter in AI, where the competitive landscape shifts quarterly. Validation data should be refreshed before every major roadmap decision. Churn interview data does not go stale — it should be reviewed and synthesized into your retention strategy on an ongoing basis, and revisited whenever you notice a pattern change in who is churning and why.
Customer interviews are the highest-leverage research method available to an early-stage founder. They are also almost universally executed poorly. The gap between how founders think they run interviews and how they actually run them is vast — I lived on the wrong side of that gap for years before I took the discipline seriously.
The framework here is not about becoming a better listener in some soft-skills sense. It is about treating customer conversations with the same rigor you would apply to any other experiment: a clear hypothesis, a disciplined method, an honest record, and an accounting of what was confirmed and what was not.
Use the question bank as a starting point, not a script. Let the customer take you somewhere unexpected. And when they say something that surprises you — something that challenges what you believed — write it in bold. That is probably the most valuable thing you learned all month.
Working on customer research for your product and want to compare notes? Find me on X.