# The 5 Angel Checks I Run Before Writing a Check

**TL;DR:** After [38+ investments across six years](/blog/angel-investing-transparency), I've distilled my pre-investment process into 5 non-negotiable checks: founder-market fit, market timing, business model integrity, reference quality, and momentum signal. Each check has a specific set of questions, clear red flags, and a scoring methodology. Skip any one of them and you're not doing due diligence — you're just hoping.

---

## What you will learn

1. [The pre-diligence gut check](#the-pre-diligence-gut-check)
2. [Check 1: Founder-Market Fit](#check-1-founder-market-fit)
3. [Check 2: Market Timing](#check-2-market-timing)
4. [Check 3: Business Model Integrity](#check-3-business-model-integrity)
5. [Check 4: Reference Check](#check-4-reference-check)
6. [Check 5: Momentum Check](#check-5-momentum-check)
7. [Time allocation across the 5 checks](#time-allocation-across-the-5-checks)
8. [When to skip checks](#when-to-skip-checks)
9. [The post-check decision framework](#the-post-check-decision-framework)
10. [Frequently asked questions](#frequently-asked-questions)

---

## The pre-diligence gut check

Before I run any of the five formal checks, I apply a pre-diligence filter. It happens in the first conversation with a founder — usually a 30-minute intro call. If the filter doesn't pass, I don't move to the formal checks: I thank the founder for their time and I pass.

The pre-diligence filter is not a scored evaluation. It's a binary gate built on three questions I ask myself after every first conversation:

**Question 1: Did this founder tell me something I didn't already know?**

The best founders have insight that surprises you. Not necessarily about technology — sometimes about the customer's behavior, the supply chain, the regulatory environment, or a non-obvious competitive dynamic. If I got off the call knowing exactly what I expected to know before it started, the founder has not yet demonstrated the depth that predicts success.

**Question 2: Did the founder handle my hardest question well?**

In every first call, I ask one question I expect to be hard. Sometimes it's about the competitive landscape. Sometimes it's about unit economics assumptions. Sometimes it's about why a large incumbent hasn't already built this. A founder who handles a hard question well — not by deflecting, but by engaging with the difficulty of it honestly — is a different species from one who gets defensive or vague.

**Question 3: Do I want to spend more time with this person?**

This sounds unscientific, but it's real. The best founders have a quality of energy and focus that makes you want to keep talking. This isn't about charisma or likability. Some of my best founders are introverted and direct almost to the point of bluntness. What they share is intellectual intensity and a clear sense of what they're building and why. If I don't want to keep talking, that tells me something.

If the first conversation clears all three questions, I schedule a second meeting and begin the formal due diligence process.

### The average time split

| Phase | Time Investment |
|-------|----------------|
| Pre-diligence (first call) | 30-45 minutes |
| Formal 5-check process | 8-15 hours |
| Final decision / term review | 1-2 hours |
| **Total** | **10-18 hours** |

For very small checks (under $25K) into founders I know well, I sometimes compress or skip parts of the formal process. I'll address that in the "when to skip checks" section.

---

## Check 1: Founder-Market Fit

### Why this check matters more than any other

If there is one belief I've arrived at after 38+ investments, it's this: the quality of the founder matters more than the quality of the idea, the size of the market, or the current state of the product. Markets change. Ideas pivot. Products evolve. The founder is the only constant.

But "founder quality" is not a monolithic trait. It's not about intelligence alone, or charisma, or resilience alone. The specific question I'm trying to answer in this check is narrower and more actionable: **does this specific founder have an asymmetric advantage in building this specific company?**

The answer to that question is almost always rooted in one or more of three things: domain expertise, distribution advantage, or demonstrated obsession. Let me break each down.

### Domain expertise: the unfair knowledge advantage

Domain expertise means the founder knows something about this market — through professional experience, personal experience, or research intensity — that gives them a meaningful head start over any well-funded competitor who decided to enter tomorrow.

A founder with 7 years of experience in healthcare revenue cycle management building a product for hospital billing departments has domain expertise. A founder who worked in two health system operations roles and personally processed insurance claims has domain expertise. A founder who read a lot about healthcare and thought it seemed like a big market does not.

The distinction sounds obvious. It's often not obvious in practice, because founders who lack domain expertise are frequently very good at simulating it. They know the terminology, they can name the key players, they can describe the customer's pain point articulately. But when you probe — when you ask them to describe a specific, non-obvious workflow that their product improves, or to name the second-tier competitor that incumbents are actually more worried about than the obvious one — the simulation cracks.

**Questions I ask to probe domain expertise:**

- "Walk me through what happens when one of your customers does [specific workflow] today, before your product exists."
- "What's the single most counterintuitive thing you learned about this market in the last six months?"
- "Who is the most underestimated competitor in this space, and why are they underestimated?"
- "Which customer segment have you deliberately decided not to serve initially, and why?"
- "What would a smart, well-funded incumbent do differently from what you're doing, and why haven't they done it?"

The last question is particularly diagnostic. Founders who have thought deeply about competitive dynamics can usually describe the incumbent's strategic constraints clearly. Founders who haven't tend to either dismiss incumbents ("they're too slow") or overestimate them ("they could build this if they wanted to").

### Distribution advantage: who they know that others don't

The second form of asymmetric founder advantage is distribution. A founder who has a personal relationship with 200 potential buyers — because they came from that industry, or built a community in it, or have a podcast listened to by practitioners — can acquire customers at a cost that no well-funded competitor can match in the first 12-18 months.

Distribution advantages are temporary but often long enough to matter. If a founder can acquire the first 50 customers at near-zero CAC through personal relationships, they have 18 months of runway to build the product, generate case studies, and develop an inbound and outbound motion before a competitor with more capital but less distribution catches up.

**Questions I ask to probe distribution advantage:**

- "How many potential customers could you call tomorrow who would take the call because they know and trust you personally?"
- "Who are the first 10 customers you're going to close, and how do you know each of them?"
- "What is your plan to acquire customers after you've exhausted your personal network?"
- "If you had no paid marketing budget, how would you reach 100 customers in the next 12 months?"

The last question separates founders who have a genuine distribution advantage from founders who are planning to buy their way to customers. The former have a concrete answer. The latter have a vague answer that involves content marketing and cold outreach at scale.

### Obsession signals: who was thinking about this problem before it was a business

The third dimension of founder-market fit is what I call obsession. The founder was interested in this problem before it was a venture-backed opportunity. They can point to the specific moment — often years before founding — when they became aware of the problem. They have notebooks, Slack messages, half-finished prototypes, or a history of blog posts that predate the company.

Obsession is a proxy for resilience. Founders who are obsessed with a problem will work through the hard periods because they can't stop thinking about the problem. Founders who identified a market opportunity rationally will burn out when the opportunity is harder than it looked.

**Questions I ask to probe obsession:**

- "When did you first become aware of this problem? What were you doing at the time?"
- "What have you tried to build in this space before this company?"
- "What do you read about this problem that has nothing to do with building a business?"
- "Describe the last time you talked to a customer and left the conversation frustrated because you realized the problem is bigger than you thought."

### Red flags in founder-market fit

| Red Flag | Why It Concerns Me |
|----------|-------------------|
| Founder discovered market through a trend report | No personal insight; following consensus thinking |
| Can't describe a non-obvious customer pain point | Hasn't talked to enough customers, or described pain superficially |
| Domain expertise is 1-2 years; market has 20-year incumbents | Not enough time to develop real insight advantage |
| Distribution plan is entirely paid or inbound | No near-term customer acquisition advantage |
| Struggles to describe what the company won't do | No focus; probably building for investors, not customers |
| Can't name specific people who will be their first customers | Distribution plan is theoretical, not real |
| Previous company in completely unrelated domain | No domain transfer; need exceptional compensating factors |

### Green flags in founder-market fit

| Green Flag | Why It Increases Conviction |
|------------|----------------------------|
| Has worked in this exact domain for 5+ years | Built real insight; understands non-obvious dynamics |
| Has personal experience with the problem | Emotional investment in solving it correctly |
| Can name 10 potential customers who will take their call | Concrete distribution advantage for early traction |
| Prior failed attempt in the space | Has the most valuable form of domain knowledge: what doesn't work |
| Has been talking to customers about this problem for 1+ year before founding | Deep insight; not chasing a trend |
| Partner or customer has already been introduced to the product before funding | Demand is real, not hypothetical |

### Scoring founder-market fit (1-10)

| Score | Interpretation |
|-------|---------------|
| 9-10  | Exceptional asymmetric fit; deep domain + distribution + obsession |
| 7-8   | Strong fit; at least two of three dimensions clearly present |
| 5-6   | Moderate fit; one dimension strong, others unclear |
| 3-4   | Weak fit; founder capable but not specifically suited to this market |
| 1-2   | No fit; founder chose this market opportunistically |

**Weight in overall framework: 35%**

This is the highest-weighted check in my framework, and it's not close.
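For readers who want to operationalize the roll-up, here is a minimal sketch of the weighted combination in Python. The weights for checks 1 through 4 are the ones stated in this article; the momentum weight is my assumption (the residual 10%) for illustration only, since it isn't stated in this section.

```python
# Sketch: combining the five per-check scores (each on the 1-10 scale)
# into one weighted overall score. Weights for checks 1-4 come from the
# article; the momentum weight is an ASSUMPTION (residual 10%).
WEIGHTS = {
    "founder_market_fit": 0.35,
    "market_timing": 0.20,
    "business_model": 0.20,
    "references": 0.15,
    "momentum": 0.10,  # assumed residual weight, not stated in this section
}

def overall_score(scores: dict) -> float:
    """Weighted average of per-check scores, on the same 1-10 scale."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("score every check before combining")
    return sum(WEIGHTS[check] * scores[check] for check in WEIGHTS)
```

For example, a founder scoring 8 / 6 / 7 / 9 / 5 across the five checks lands at 7.25 overall under these weights.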

---

## Check 2: Market Timing

### Why timing is a separate check from market size

Most angel investors think about market sizing (how big is the TAM?) but far fewer think rigorously about market timing (why now, and not three years ago or three years from now?). These are different questions with different answers, and confusing them is one of the more expensive mistakes I made in my early years.

Market size tells you the ceiling. Market timing tells you whether you'll reach it before you run out of runway.

The venture graveyard is full of companies that were right about the destination but wrong about the speed. Webvan understood that grocery delivery was a large market. They were right. They were also eight years too early — before smartphones, before GPS-optimized routing, before behavioral normalization of food delivery as a category. Being right about the market wasn't enough.

For an early-stage company running on 18-24 months of seed runway, the timing question is specific: **are there forces active today that make this market penetrable right now, that didn't exist or weren't active two years ago?**

### The three timing triggers I look for

**Trigger 1: Technology enabler crossing a viability threshold**

The most common timing unlock is a technology that has crossed from "theoretically possible" to "practically deployable." Large language models are the obvious example from 2022-2024 — they crossed a capability threshold that made a whole new class of products viable. GPS reached mass mobile adoption around 2009-2010, enabling the entire gig economy. Cloud computing reaching enterprise-grade reliability in 2010-2012 enabled the first wave of modern SaaS.

For any company I evaluate, I ask: what is the underlying technology that makes this product possible today that wasn't possible or practical 24 months ago? If the founder can't point to a specific recent development, the timing thesis is weak.

**Trigger 2: Behavioral shift that has already occurred**

The most durable timing advantages come from behavioral changes that have already happened at meaningful scale — not changes that are expected to happen. Remote work normalization happened in 2020-2021. Companies built on remote-work behavioral patterns that locked in during that era are on solid ground because the behavior change is real. Companies built on behavioral changes that are expected to happen over the next five years are making a bet on a prediction.

I look for behavioral shifts with evidence: survey data, industry reports, adoption metrics, regulatory changes that reflect a shift already underway. Not projections. Evidence.

**Trigger 3: Regulatory or structural change that has opened a new space**

Sometimes markets are large but inaccessible because of regulatory constraints. When those constraints change — through new legislation, court decisions, or regulatory guidance shifts — a new space opens. The best founders in these markets have been watching the regulatory trajectory for years and are ready to build the moment the opening appears.

### Specific questions for the timing check

- "Why is this company startable today that wasn't startable in 2022? And why won't it be too late in 2028?"
- "What specific event or development in the last 18 months has changed the market dynamics for this product?"
- "What needs to be true about the world in 24 months for this company to win? How confident are you that each of those things will be true?"
- "Who else is starting a company in this space right now, and why will you reach scale before they do given that you're starting at roughly the same time?"
- "If a $500M fund decided to compete with you starting today, what would make your timing advantage durable?"

### The "why now" framework

When I evaluate timing, I use a simple grid:

| Timing Factor | Evidence Required | Weak Signal | Strong Signal |
|---------------|------------------|-------------|---------------|
| Technology enabler | Specific capability that crossed a threshold | "AI is getting better" | "GPT-4 made X task 10x cheaper, enabling our margin profile" |
| Behavioral shift | Adoption data or observable change | "Remote work is growing" | "Our target buyer now has a distributed team 73% of the time per Bureau of Labor Statistics 2024" |
| Structural change | Regulatory or market structure event | "Healthcare is changing" | "The 2024 CMS ruling on X changed who can bill for Y service" |
| Incumbent inertia | Specific constraint on incumbents | "Big companies are slow" | "The incumbent's platform was built before SaaS; migrating existing customers costs $X" |

### Red flags in market timing

| Red Flag | Why It Concerns Me |
|----------|-------------------|
| Timing thesis is "the market is big and growing" | Size ≠ timing; no specific enabler identified |
| The same product could have been built 5 years ago | No recent unlock; why is the founder starting now? |
| Timing depends on behavior change that hasn't happened yet | High prediction risk; runway may not be enough |
| No awareness of why incumbents are constrained | May be optimizing for a market that incumbents can enter easily |
| "AI is a tailwind" without specifics | Undifferentiated; applies to every company equally |

### Green flags in market timing

| Green Flag | Why It Increases Conviction |
|------------|----------------------------|
| Can name the specific event that opened this space | Deeply researched; not just following narrative |
| Technology enabler is less than 24 months old | Early in the cycle; competitive window is real |
| Behavioral shift already happened; data supports it | Low prediction risk; building on solid ground |
| Regulatory change creates compliance requirement | Externally mandated demand; not reliant on voluntary adoption |
| Timing window is narrow; founder is already building | Urgency signals good judgment about competitive dynamics |

**Weight in overall framework: 20%**

---

## Check 3: Business Model Integrity

### Why unit economics matter at the pre-seed stage

I want to address a pushback I hear frequently: "You can't evaluate unit economics at pre-seed. There's no data." This is partially true and mostly wrong.

You're correct that you won't have three years of cohort data at the pre-seed stage. But you can — and should — evaluate whether the unit economics model is structurally sound. The question isn't "what are your current LTV:CAC ratios?" The question is: "Given what you know about CAC, retention, and gross margin in this market, does the math work at scale if execution is reasonable?" For benchmarks by stage, the [SaaS metrics data](/blog/saas-metrics-benchmarks) gives useful reference points for what healthy looks like at each funding level.

The founders who have thought through this model are demonstrably different from founders who haven't. The ones who have can tell you their assumption for CAC at scale, their gross margin target, their expected payback period, and what needs to be true about retention for the math to work. The ones who haven't give you hand-wavy answers about growing the top line and "figuring out monetization later."

In B2B SaaS — which is my primary investment category — there is enough public benchmarking data to construct a plausible unit economics model for almost any product. That model will be wrong in specifics. It won't be wrong directionally. And direction is what matters at pre-seed.

### The unit economics math I build for every investment

I build a simple model with six inputs:

1. **Average contract value (ACV):** What does the typical customer pay annually?
2. **Gross margin:** What percentage of revenue is left after direct costs?
3. **Average customer acquisition cost (CAC):** All-in cost to acquire one customer, including sales and marketing labor.
4. **Payback period:** Months to recover CAC from gross margin. Formula: CAC / (ACV × Gross Margin / 12).
5. **Net revenue retention (NRR):** Does the average customer pay more, the same, or less in year two than year one?
6. **LTV:CAC ratio:** Lifetime value of a customer divided by cost to acquire them. For a healthy SaaS business, this should be 3:1 or better at scale.
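The payback formula in step 4 and the LTV:CAC ratio in step 6 can be sketched directly. The payback calculation follows the formula above exactly; the LTV formula (annual gross margin dollars divided by annual churn rate) is a common simplification I'm assuming here, since the text doesn't pin one down.

```python
def payback_months(cac: float, acv: float, gross_margin: float) -> float:
    """Months to recover CAC from monthly gross margin dollars.
    Formula from the text: CAC / (ACV * gross_margin / 12)."""
    return cac / (acv * gross_margin / 12)

def ltv_to_cac(acv: float, gross_margin: float,
               annual_churn: float, cac: float) -> float:
    """LTV:CAC ratio, ASSUMING the common simplification
    LTV = annual gross margin dollars / annual churn rate."""
    ltv = (acv * gross_margin) / annual_churn
    return ltv / cac
```

As a worked example: a $12K ACV customer at 75% gross margin and $9K CAC pays back in exactly 12 months, and at 15% annual churn the LTV:CAC comes out near 6.7x, comfortably above the 3:1 bar.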

### Benchmarks I use for B2B SaaS

| Metric | Below Average | Average | Strong | Best-in-Class |
|--------|--------------|---------|--------|---------------|
| Gross Margin | <60% | 60-70% | 70-80% | >80% |
| CAC Payback | >24 months | 18-24 months | 12-18 months | <12 months |
| LTV:CAC | <2x | 2-3x | 3-5x | >5x |
| NRR | <90% | 90-100% | 100-110% | >120% |
| Logo Churn (annual) | >20% | 10-20% | 5-10% | <5% |

At pre-seed, I'm not expecting the company to be at "average" on all these metrics. I'm checking whether the model, if executed well, could reach "strong" or better. If the structural economics of the market make "strong" impossible — because, for example, the buyer segment has chronic budget constraints, the procurement cycle is long, or gross margins are structurally compressed by cost of goods — that's a fundamental problem, not an execution one.
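As a sketch, the CAC-payback row of the benchmark table above can be encoded as a simple band lookup. The band edges come straight from the table; how ties at the shared boundaries resolve (e.g. exactly 12 months) is my own choice, since the table's ranges meet at their endpoints.

```python
def payback_band(months: float) -> str:
    """Classify CAC payback months against the B2B SaaS benchmark table.
    Boundary handling (e.g. exactly 12 -> "strong") is a judgment call,
    because the table's ranges share their endpoints."""
    if months < 12:
        return "best-in-class"
    if months < 18:
        return "strong"
    if months <= 24:
        return "average"
    return "below average"
```

The other rows (gross margin, LTV:CAC, NRR, churn) follow the same pattern with their own edges.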

### Questions I ask to probe business model integrity

- "Walk me through the unit economics of your average customer — from first contact to fully paying, including all the time and cost involved."
- "What does your gross margin look like today, and what does it look like at 500 customers?"
- "How long does it take to recover your customer acquisition cost from the gross margin of that customer?"
- "What is your assumption for net revenue retention at 24 months? What's the evidence behind that assumption?"
- "Who is the budget owner for this purchase, and how often do they renew vs. churn contracts in your category?"
- "At $10M ARR, what does your team structure look like and what does that do to your fully loaded gross margin?"
- "What happens to your CAC if you run out of warm introductions and have to build a cold outbound motion?"

The last question is important. Many early-stage companies have very low early CAC because founders are doing all the selling through personal networks. That doesn't persist. Understanding how CAC changes as the company scales is one of the most important unit economics questions to ask — the [customer acquisition cost calculation](/blog/how-to-calculate-customer-acquisition-cost) walks through the full math.

### The marketplace and consumer exceptions

My B2B SaaS benchmarks above don't apply cleanly to marketplaces or consumer products. For marketplaces, the relevant metrics shift to:

| Metric | Importance | Target |
|--------|-----------|--------|
| Take rate | High | Market-appropriate; 10-30% for most verticals |
| Supplier retention | High | Year-2 supplier retention >70% |
| Buyer repeat rate | High | 40%+ of transactions from repeat buyers within 12 months |
| GMV per cohort growth | High | Do cohorts grow their spend over time? |
| CAC per side | Medium | Separately track supplier and buyer CAC |

Consumer products are the hardest to evaluate at pre-seed, which is one reason I've reduced my consumer allocation significantly. The key metrics I focus on for consumer are D30 retention (what percentage of users are still active 30 days after first use), payback period on user acquisition, and evidence of organic virality (k-factor above 0.3 is a meaningful signal).
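The consumer metrics above reduce to small calculations. The k-factor formula used here (invites sent per user times invite conversion rate) is the standard virality definition, not necessarily the author's exact model, so treat this as an illustrative sketch.

```python
def d30_retention(cohort_size: int, active_at_day_30: int) -> float:
    """Share of a signup cohort still active 30 days after first use."""
    return active_at_day_30 / cohort_size

def k_factor(invites_per_user: float, invite_conversion_rate: float) -> float:
    """Standard virality coefficient: new users generated per existing user."""
    return invites_per_user * invite_conversion_rate
```

For instance, users sending 3 invites each at a 12% conversion rate produce a k-factor of 0.36, above the 0.3 threshold mentioned above.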

### Red flags in business model integrity

| Red Flag | Why It Concerns Me |
|----------|-------------------|
| "We'll figure out pricing later" | Pricing is a strategic decision, not a sales tactic |
| Gross margin below 50% in software | Cost structure suggests services or infrastructure dependency |
| CAC payback exceeds 24 months | Capital-intensive growth; dependent on continuous fundraising |
| No answer on what drives NRR | Doesn't understand customer expansion or contraction dynamics |
| Revenue model requires volume before margin appears | Structural trap for early-stage companies |
| ACV so low that unit economics only work with enormous volume | Path to efficient unit economics requires massive scale |
| "Our competitors charge X, we'll charge less" | Pricing-as-strategy without margin analysis |

### Green flags in business model integrity

| Green Flag | Why It Increases Conviction |
|------------|----------------------------|
| Gross margin >70% with a clear path to >80% | Healthy software economics; can fund growth from revenue |
| CAC payback under 12 months | Fast return on customer acquisition investment |
| Early customers expanding without prompting | Strong product-market fit signal; NRR will be excellent |
| Pricing anchored to value delivered, not competitor pricing | Pricing discipline; pricing will improve as company scales |
| Budget owner is clear; purchase doesn't require committee | Low CAC sales motion; fast revenue realization |
| Demonstrated ability to sell without founder involvement | Early evidence of repeatable sales motion |

**Weight in overall framework: 20%**

---

## Check 4: Reference Check

### Why references are the most underutilized diligence tool

Angel investors skip reference checks more often than any other step in the diligence process. The most common reason is time: "I only have a few days before the round closes, and the references will slow me down." The second most common reason is social discomfort: "I know this founder through a mutual friend, and it feels awkward to run references."

Both reasons are wrong.

The round closing pressure is often manufactured — or at minimum, it's a negotiating tactic that will soften if you explain that you're committed but need to complete your process. Any founder who won't give you a week to run references is a red flag in itself.

The social discomfort reflects a misunderstanding of what references are for. A reference check is not an investigation. It's a structured conversation designed to surface the founder's working style, growth areas, and track record — information that will make you a better investor and advisor to the company you're about to join as a stakeholder.

The best founders I've invested in welcomed reference checks enthusiastically. They had a list of contacts ready, including people who would give me honest mixed feedback. The founders who pushed back hardest on references were, in two cases, hiding things that would have significantly affected my decision.

### The reference methodology

I run a minimum of three reference calls for any check above $50K. My target breakdown:

| Reference Type | Number | Why |
|----------------|--------|-----|
| Former colleagues (same level or below) | 1-2 | Reveals working style, treatment of subordinates |
| Former managers | 1 | Reveals execution quality and career progression |
| Early customers or customer references | 1-2 | Reveals how founder sells and supports |
| Former investors (if any) | 0-1 | Reveals investor relationship quality |

Note that I do not let the founder choose all my references. I ask for a starter list and then I go "off list" — I find 1-2 references independently through LinkedIn, mutual connections, or the company's early customer network. The off-list references are almost always the most informative, because the people a founder deliberately omits from their reference list are the ones most likely to give you nuanced or critical feedback.

### The reference call script

My calls run 20-30 minutes. I open with:

"I'm considering investing in [founder's name] and [company name]. I'm not looking for a performance review — I just want to understand how [founder] works and what makes them exceptional. Can I ask you a few questions?"

Then I move through six core questions:

1. "How do you know [founder], and in what context did you work together?"
2. "What is [founder] genuinely exceptional at? Not good at — exceptional."
3. "What is [founder]'s biggest growth area as a leader and operator?"
4. "Tell me about a difficult situation you saw [founder] navigate. How did they handle it?"
5. "If [founder] asked you to join their company as an early employee, would you? Why or why not?"
6. "Is there anything I should specifically probe with [founder] before making this investment decision?"

The fifth question — the "would you join" question — is the most diagnostic. References who have worked closely with someone and would not join their company are telling you something important, even if they package it diplomatically.

The sixth question almost always produces the most valuable information in the call. References want to be helpful. When you give them an explicit opening to surface concerns, most will take it.

### Reading between the lines

Reference calls are as much about what isn't said as what is said. Watch for:

- **Long pauses before answering:** Often signals the reference is choosing their words carefully around a sensitive topic.
- **Pivoting to compliments without answering the question:** "She's incredible, really, one of the best I've ever worked with" in response to "what's her biggest growth area" means the reference doesn't want to give you honest feedback.
- **Very short answers:** References who have worked closely with someone and are enthusiastic will talk for 5-10 minutes per question. Short answers suggest limited relationship or limited enthusiasm.
- **"He's a great guy" without specifics:** Personal likeability is not the same as professional effectiveness.

### Red flags in the reference check

| Red Flag | Why It Concerns Me |
|----------|-------------------|
| Founder pushes back on references or delays providing contacts | What are they trying to prevent me from hearing? |
| All references are personal friends, not professional colleagues | Self-selected for maximum positive bias |
| Reference gives noticeably vague answers to specific questions | May be protecting founder from honest feedback |
| Multiple references independently mention the same concern | Pattern signal; not one person's opinion |
| Reference would not join the company if asked | Strong negative signal from someone with close knowledge |
| Founder provided only senior references, no peer/subordinate contacts | Missing the references most likely to reveal working style |
| No reference has worked with the founder in the last 5 years | Too dated to reliably assess current working style |

### Green flags in the reference check

| Green Flag | Why It Increases Conviction |
|------------|----------------------------|
| Founder proactively offers a mixed reference ("call X, she'll give you honest feedback") | High integrity; self-awareness |
| Multiple references independently describe the same strength without being prompted | Consistent character signal across sources |
| Reference would immediately join the company if asked | Extraordinary endorsement from someone with full information |
| Off-list references are as positive as on-list references | No material omissions from reference list |
| References describe specific instances of excellence, not generalizations | Evidence-based endorsement, not platitudes |
| Customer references describe the founder as a partner, not a vendor | Commercial integrity; long-term relationship orientation |

**Weight in overall framework: 15%**

---

## Check 5: Momentum Check

### Why the last 90 days tell you more than the last year

The momentum check is the most recent-data-focused of the five checks. It answers a specific question: **what is the trajectory of this company right now, and what explains that trajectory?**

I focus specifically on the last 90 days because that window is recent enough to reflect current reality rather than historical narrative, and long enough to represent a real trend rather than a single data point. A company that grew 25% in the last 90 days is in a meaningfully different position from one that grew 5% — and both are in a different position from one that declined.

Momentum is a leading indicator in both directions. Accelerating momentum is hard to fake. It means something is working in the go-to-market, or the product has reached a level of functionality that customers want, or a recent marketing initiative has opened a new channel. Decelerating momentum is also hard to hide — though founders try, by averaging it over a longer period, focusing on lagging metrics, or attributing slowdowns to one-time factors.

### Metrics I ask for in the momentum check

For revenue-generating companies, I ask for:

| Metric | What I'm Looking For |
|--------|---------------------|
| MoM revenue growth (last 3 months) | Trend line, not just point-in-time |
| MoM revenue growth (preceding 3 months) | Is the trend accelerating or decelerating? |
| New customer additions per month (last 3 months) | Separate from revenue; accounts for ACV changes |
| Gross churn rate (last quarter) | Are customers leaving? |
| Pipeline velocity | How fast are deals moving to close? |
| NPS or equivalent customer satisfaction signal | Leading indicator of retention |

For pre-revenue companies, I ask for:

| Metric | What I'm Looking For |
|--------|---------------------|
| Active users or engaged beta users | Real usage, not just signups |
| User session depth or engagement metrics | Are users getting value? |
| Waitlist or inbound demand growth | Organic demand building |
| Customer interviews completed in last 90 days | Founder is learning rapidly |
| Product iterations shipped in last 90 days | Velocity of learning and building |
| Letters of intent or verbal commitments from prospective customers | Real demand signals |
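The acceleration question in the tables above (last three months versus the preceding three) reduces to a simple calculation. Here is a minimal sketch of it; the revenue figures and function names are my own illustration, not from any real company:

```python
# Illustrative sketch: is MoM revenue growth accelerating?
# Revenue figures are invented for the example.

def mom_growth_rates(monthly_revenue):
    """Return month-over-month growth rates for a list of monthly revenue."""
    return [
        (curr - prev) / prev
        for prev, curr in zip(monthly_revenue, monthly_revenue[1:])
    ]

def is_accelerating(monthly_revenue):
    """Compare average MoM growth of the last 3 months vs the preceding 3."""
    rates = mom_growth_rates(monthly_revenue)
    recent = sum(rates[-3:]) / 3    # last 3 months
    prior = sum(rates[-6:-3]) / 3   # preceding 3 months
    return recent > prior, recent, prior

# Seven months of revenue yields six MoM growth rates: two 3-month windows.
revenue = [40, 42, 45, 49, 55, 63, 74]  # $K, illustrative
accelerating, recent, prior = is_accelerating(revenue)
print(f"recent avg MoM: {recent:.1%}, prior avg MoM: {prior:.1%}")
```

The point of separating the two windows is exactly the second row of the revenue table: a healthy-looking average growth rate can hide a decelerating trend, and this comparison surfaces it.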

### The "why" behind the momentum

The momentum metrics themselves are only half the check. The more valuable half is asking the founder to explain the "why" behind the numbers. This is where you learn whether the founder has genuine insight into their business or is reporting numbers without understanding what drives them.

**Questions I ask in the momentum check:**

- "Walk me through what happened in the last 90 days. What milestones did you hit, and what drove them?"
- "What is the single biggest thing that changed in the last 90 days that you didn't anticipate 90 days ago?"
- "Where is growth coming from right now — channel, segment, product feature?"
- "What has not worked in the last 90 days, and what did you do about it?"
- "If you got 3x the customers tomorrow, what breaks first in your product or operations?"
- "What are you most worried about in the next 90 days that could slow momentum?"
- "What would have to happen in the next 90 days for you to feel you're on track for your next raise?"

The fourth question — "what hasn't worked" — is particularly important. Founders who can articulate their recent failures with specificity and describe what they've done about them are learning from their business in real time. Founders who struggle to identify failures in the last 90 days either have an exceptional business or are not being honest with themselves.

### Distinguishing real momentum from manufactured momentum

I've been shown manufactured momentum. Founders with financial sophistication can present numbers in ways that imply strong performance while hiding fundamental problems. Here's what I watch for:

**Signs of manufactured momentum:**

- Revenue metrics are quoted in non-standard ways ("bookings" presented as ARR, or a "run rate" computed by annualizing a single strong month)
- User counts include inactive or churned users without disclosure
- Growth metrics exclude a recent bad month ("ignore Q4, that was a blip")
- Cohort data is presented cumulatively rather than per-period, hiding declining new cohorts
- Customer count includes pilots, free users, or trials counted as paying customers

When I see any of these, I ask for raw data. Not a formatted presentation — the actual spreadsheet or database export. Founders with genuine momentum are almost always happy to show you the raw numbers because they're proud of them. Founders who are manufacturing momentum resist or delay.
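To make the run-rate trick concrete, here is a toy calculation (all numbers invented) showing how annualizing one spiked month overstates the picture relative to a trailing three-month average:

```python
# Illustrative only: why a "run rate" from one strong month can mislead.
monthly = [50, 48, 52, 80]  # $K; last month spiked on a one-time deal

run_rate_single = monthly[-1] * 12              # annualize the spiked month
run_rate_trailing = sum(monthly[-3:]) / 3 * 12  # trailing 3-month average

print(run_rate_single, run_rate_trailing)
```

The single-month number comes out a third higher than the trailing average, which is why I always ask whether a quoted run rate is built on one month or a trailing window.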

### Recent milestones that increase conviction

Beyond the metrics, I look for specific milestone events in the last 90 days that represent inflection points:

| Milestone | Conviction Impact |
|-----------|-----------------|
| First enterprise customer signed | Validates upmarket potential |
| First customer without founder involvement in sales | Repeatable sales motion emerging |
| First revenue from an unexpected segment | Reveals larger TAM than pitched |
| Product shipped that moved a key metric | [Product-market fit](/blog/how-to-achieve-product-market-fit) tightening |
| Key hire closed (senior with domain expertise) | Team building against conviction |
| Inbound introductions from potential acquirers | Strategic value emerging |
| Major media coverage driving organic signups | Brand building; organic growth possible |

### Red flags in the momentum check

| Red Flag | Why It Concerns Me |
|----------|-------------------|
| Metrics quoted in non-standard terms | May be hiding unflattering standard metrics |
| Momentum relies on a single large customer | Concentration risk; not repeatable |
| No clear explanation for what drove recent growth | Founder doesn't understand their own business |
| Growth decelerating and founder attributes it to external factors | Blame externalization; accountability gap |
| No product iterations in last 60 days | Stalled development |
| Customer conversations declining in frequency | May be avoiding bad news from market |
| "We just need capital to grow" without demonstrated demand | Capital won't solve an absent demand signal |

### Green flags in the momentum check

| Green Flag | Why It Increases Conviction |
|------------|----------------------------|
| Revenue growth rate accelerating month-over-month | Something working and improving |
| Can explain the specific driver of recent growth | Knows their business deeply |
| Growth coming from unexpected source (reveals larger market) | Positive surprise signal |
| Customers acquired in last 90 days larger than average | Moving upmarket naturally |
| Founder describes recent failure candidly and specifically | Learning at speed; intellectual honesty |
| New revenue without founder in the sales process | Early repeatable motion |

**Weight in overall framework: 10%**

---

## Time allocation across the 5 checks

The natural question is: how much time should each check take? Here's my working allocation for a standard diligence process:

| Check | Typical Time | Primary Activities |
|-------|-------------|-------------------|
| Founder-Market Fit | 3-4 hours | 2-3 founder conversations; background research; competitive analysis |
| Market Timing | 1-2 hours | Market research; timing thesis validation; competitive mapping |
| Business Model Integrity | 2-3 hours | Unit economics modeling; financial documents review; customer economics calls |
| Reference Check | 2-3 hours | 3-4 reference calls at 20-30 min each; off-list source identification |
| Momentum Check | 1-2 hours | Metrics review; founder interview on recent trajectory; data audit |
| **Total** | **9-14 hours** | Excludes first conversation and final decision work |

The founder-market fit check takes the longest because it's the most judgment-intensive and the hardest to compress. The market timing check is faster because much of it is desk research rather than primary conversation. The momentum check is efficient because the data is either there or it isn't.

For investments above $100K, I add a sixth step: a deep product review session where I use the product myself (for software) or review the technical architecture with a trusted engineer who can evaluate build quality and debt.

---

## When to skip checks

I'll be direct about this: I do skip checks in specific circumstances, and I think pretending I don't would be dishonest.

**Circumstances where I skip or compress checks:**

1. **Small checks into known founders.** If I'm writing a $10K-$25K check into a founder I've worked with closely for 2+ years — either as an advisor, a prior investment, or a professional relationship — I compress the formal process significantly. I still ask the momentum questions and do a quick market timing review, but I skip the extensive founder-market fit conversation and references because I already have that data from lived experience.

2. **Follow-on investments into existing portfolio companies.** If I'm exercising pro-rata rights in a company I've invested in, the due diligence is fundamentally different. I focus entirely on the momentum check and a quick re-evaluation of whether my original thesis has been validated, partially validated, or changed. I don't re-run the full five checks.

3. **Very fast-moving rounds with a trusted lead.** If a lead investor I know and trust well has done thorough diligence and I'm joining a round at the same terms with a $15K-$25K check, I sometimes compress the process and rely partially on the lead's judgment. This is explicitly a shortcut, and I accept the quality tradeoff consciously.

**What I never skip, regardless of circumstances:**

- At least one substantive conversation with the founder that includes hard questions
- A basic momentum check (even if informal)
- A gut check on the timing thesis

Even in relationships where I trust the founder completely, I find that going through even a compressed version of these checks produces better outcomes — both because it catches things I wouldn't otherwise catch and because it forces the founder to articulate their thinking in ways that are valuable for our working relationship going forward.

---

## The post-check decision framework

After running all five checks, I aggregate the scores into a weighted total and apply the decision matrix. Here is the full scoring framework:

### Scoring matrix

| Check | Weight | Score (1-10) | Weighted Score |
|-------|--------|-------------|----------------|
| Founder-Market Fit | 35% | __ | __ |
| Market Timing | 20% | __ | __ |
| Business Model Integrity | 20% | __ | __ |
| Reference Check | 15% | __ | __ |
| Momentum Check | 10% | __ | __ |
| **Total Weighted Score** | **100%** | — | **__ / 10** |

### Decision thresholds

| Weighted Score | Decision | Action |
|----------------|----------|--------|
| 8.5 - 10.0 | Strong invest | Full check size; pursue pro-rata rights actively |
| 7.5 - 8.4 | Invest | Standard check size; request pro-rata |
| 6.5 - 7.4 | Invest small | 50% of standard check; no pro-rata request |
| 5.5 - 6.4 | Pass / Stay close | Decline; add to watch list for future raise |
| Below 5.5 | Pass | Decline; no follow-up scheduled |

### The override rules

The weighted score is a guide, not a formula. I apply two override rules that can change the decision regardless of score:

**Override 1: Single-check hard pass.** If any individual check scores below 3, I pass regardless of the total score. A founder-market fit score of 2 cannot be overcome by a momentum score of 9. These are not compensatory in my framework — each check identifies a different type of fatal flaw.

**Override 2: Reference check veto.** If reference calls surface a specific, credible concern about integrity — not just "she's hard to work with" but something that suggests the founder is not honest with investors, has a history of misrepresenting metrics, or treats team members in ways that would damage culture — I pass regardless of score. Integrity concerns are not overridable by business quality.
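The whole decision procedure, weights, thresholds, and both overrides, can be sketched in a few lines. The check names, weights, and cutoffs follow the tables above; the function name and example scores are my own illustration:

```python
# Sketch of the weighted scoring framework with both override rules.
# Weights and thresholds are from the article; example scores are invented.

WEIGHTS = {
    "founder_market_fit": 0.35,
    "market_timing": 0.20,
    "business_model_integrity": 0.20,
    "reference_check": 0.15,
    "momentum": 0.10,
}

def decide(scores, integrity_concern=False):
    """Map per-check scores (1-10) to a decision, applying both overrides."""
    # Override 2: a credible integrity concern vetoes any score.
    if integrity_concern:
        return "pass (integrity veto)", None
    # Override 1: any single check below 3 is a hard pass.
    if min(scores.values()) < 3:
        return "pass (single-check hard pass)", None
    total = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    if total >= 8.5:
        return "strong invest", total
    if total >= 7.5:
        return "invest", total
    if total >= 6.5:
        return "invest small", total
    if total >= 5.5:
        return "pass / stay close", total
    return "pass", total

example = {
    "founder_market_fit": 8,
    "market_timing": 7,
    "business_model_integrity": 7,
    "reference_check": 9,
    "momentum": 6,
}
decision, total = decide(example)
print(decision, round(total, 2))
```

Note that the overrides run before the weighted sum is ever computed, which is the point: they are not adjustments to the score, they are gates in front of it.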

### The 48-hour rule

After completing all five checks, I give myself a minimum of 48 hours before communicating a final decision. During those 48 hours, I write a one-page investment memo that forces me to articulate:

1. The primary reason I'm investing (or passing)
2. The biggest risk to the investment thesis
3. What would have to be true for this investment to be a 10x return

The act of writing the memo catches reasoning errors that survive informal analysis. Several times I've started writing a memo in favor of an investment and discovered, mid-sentence, that once I stated the biggest risk plainly I could no longer make the case for investing. That discovery has saved me from at least two poor investments.

### How the framework has changed my decisions

Since implementing the formal five-check framework in 2021, my pass rate on opportunities I've formally evaluated has increased from roughly 40% to roughly 65%. I'm investing in fewer companies but with substantially higher average conviction. On paper, the investments made under the formal framework are outperforming the pre-framework investments, which is consistent with the hypothesis that structured diligence improves decision quality. The [portfolio diversification math](/blog/angel-portfolio-diversification-strategy) explains why conviction and breadth both matter in the same portfolio.

The most common reason I pass now, according to my own records, is founder-market fit. Roughly 45% of formal passes are founder-market fit failures. The second most common is business model integrity — mostly companies where the structural unit economics cannot get to a healthy number even with excellent execution. A full breakdown of the wins and losses this framework has produced is in [what my 6 wins and 5 losses teach about investment risk](/blog/exits-and-learnings-angel-investing).

The rarest pass reason is momentum — which makes sense, because a company with weak momentum usually also has other problems that surface earlier in the framework.

---

## Frequently asked questions

### How long does the full 5-check process take from first meeting to decision?

For a standard investment of $25K-$75K, the process runs 2-3 weeks from first conversation to decision. For larger checks ($75K+), I allocate 3-4 weeks to allow time for a deeper reference check and any additional product or technical review. The fastest I've ever moved from intro call to signed commitment is 5 days — and in retrospect, that was too fast, even though the investment turned out well.

### Do you share your scoring with founders after you pass?

Occasionally, and only when I have an ongoing relationship with the founder. If a founder I genuinely respect and want to support is pitching a company that scores low on market timing, I'll share that specific concern and why — not the score itself, but the substantive concern. I've had several founders come back to me 12-18 months later after addressing the concern, and I've invested in two of them at a later stage.

I don't share scoring for passes where the founder-market fit concern is the primary driver. That conversation is rarely productive and often harmful to the relationship.

### How do you handle competitive investments — if you're being asked into a round where you've also seen a competitor?

This is a situation I've encountered three times. My approach: full transparency with both founders immediately. I tell both that I've seen the other company and that I'll be making a decision about which, if either, I'm investing in. I don't share confidential information from either diligence process with the other founder. And I make the decision purely on the five-check framework, not on which founder I have a better personal relationship with.

In practice, this situation usually resolves itself quickly because the frameworks produce meaningfully different scores for each company.

### What's your policy on investing in two competitors in the same space?

I don't. Once I've invested in a company, I consider the sector locked for direct competitors. I've turned down several opportunities that looked attractive because I had an existing investment in a direct competitor. The long-term value of keeping conflicts of interest clean outweighs any single investment opportunity.

### How do you handle founders who don't want to share detailed financial data in diligence?

I treat it as a yellow flag, not necessarily a red flag. Some founders — particularly those who have had investors misuse financial data in prior fundraises — are cautious about sharing detailed metrics before a commitment. I offer two alternatives: I can sign an NDA before data sharing, or I can proceed on the basis of ranges rather than actuals and give a provisional commitment contingent on data verification.

If a founder won't share any financial data and won't sign an NDA, I pass. At that point the reluctance has crossed from caution to opacity, and opacity is a red flag.

### Do you use the same checklist for international investments as for US ones?

The five checks are universal, but some of the benchmarks shift for different markets. Unit economics benchmarks differ by geography — a strong LTV:CAC ratio in India may look different from one in the US given different price points and CAC dynamics. Reference check methodology also differs internationally; in some markets, asking for off-list references is culturally unusual and I adapt my approach accordingly.

The founder-market fit check and market timing check are structurally identical regardless of geography. The business model integrity benchmarks require the most calibration for different markets.

### What's the most common mistake angels make in due diligence?

Anchoring on the first piece of positive information they receive. The presentation is almost always designed to lead with the most compelling data point — a strong growth number, a prestigious customer logo, an impressive advisor. Once that anchor is set, everything else in diligence gets evaluated relative to it.

The way I counter this is to commit my first impressions to paper before diligence starts, then check whether the evidence across the five checks supports or undermines them. When the evidence undermines a first impression, I take that seriously rather than rationalizing my way back to the anchor.

### How do you stay current on benchmarks and market dynamics across sectors?

Primarily through three sources: reading the public filings and investor letters of later-stage companies in the sectors I invest in, regular conversations with operators (not investors) in the domains I'm evaluating, and curated databases of startup metrics like those published by SaaStr, Bessemer, and OpenView. The operator conversations are the most valuable — they give you ground-truth benchmarks from people running businesses, not theoretical benchmarks from investors.

### When should an aspiring angel start using a formal checklist?

Immediately. Even if you're writing $5K checks and have no formal track record, using a structured framework from the beginning builds the muscle before you have larger amounts of capital at stake. The cost of having an amateur framework when you're writing $5K checks is low. The cost of having an amateur framework when you're writing $75K checks is very high.

Start with the minimum viable version: founder-market fit, market timing, and momentum. Add business model integrity and reference checks as your check sizes grow.

### What's your process if you're on the fence after scoring?

I've learned to treat indecision as a signal. In the early years, when I was on the fence, I'd often find a reason to invest — usually social pressure from the deal ecosystem or FOMO about the round closing. Since I've been disciplined about the framework, I treat a genuine fence-sit (score in the 6.5-7.5 range with no strong conviction either way) as a sign to either invest at half my standard check size or pass entirely.

The investments where I had to talk myself into it have underperformed the investments where the decision was clear. Not every time — there are exceptions — but the pattern is consistent enough that I've stopped trying to force decisions that the framework doesn't make obvious.

---

The five checks are not a guarantee. Early-stage investing is probabilistic, and even a perfect diligence process will produce failures. What the framework gives you is not certainty — it gives you a consistent methodology that produces better decisions at the margin, catches avoidable mistakes, and builds a track record you can actually learn from.

The most important thing I've learned across 38+ investments is that the discipline of the framework matters as much as the framework itself. Running four of five checks rigorously and skipping the fifth under time pressure is not 80% of the process. It's a different process entirely — and the check you skip is usually the one that would have changed your decision.