The 5 Angel Checks We Run Before Writing a Check
The exact pre-investment checklist I run before every angel check: founder-market fit, timing, unit economics, references, and momentum — with scoring tables and decision matrix.
TL;DR: After 38+ investments across six years, I've distilled my pre-investment process into 5 non-negotiable checks: founder-market fit, market timing, business model integrity, reference quality, and momentum signal. Each check has a specific set of questions, clear red flags, and a scoring methodology. Skip any one of them and you're not doing due diligence — you're just hoping.
Before I run any of the five formal checks, I apply a pre-diligence filter. This happens in the first conversation with a founder — usually a 30-minute intro call. If the pre-diligence filter doesn't pass, I don't move to the formal checks. I thank the founder for their time and I pass.
The pre-diligence filter is not a scored evaluation. It's a binary gate built on three questions I ask myself after every first conversation:
Question 1: Did this founder tell me something I didn't already know?
The best founders have insight that surprises you. Not necessarily about technology — sometimes about the customer's behavior, the supply chain, the regulatory environment, or a non-obvious competitive dynamic. If I got off the call knowing exactly what I expected to know before it started, the founder has not yet demonstrated the depth that predicts success.
Question 2: Did the founder handle my hardest question well?
In every first call, I ask one question I expect to be hard. Sometimes it's about the competitive landscape. Sometimes it's about unit economics assumptions. Sometimes it's about why a large incumbent hasn't already built this. A founder who handles a hard question well — not by deflecting, but by engaging with the difficulty of it honestly — is a different species from one who gets defensive or vague.
Question 3: Do I want to spend more time with this person?
This sounds unscientific, but it's real. The best founders have a quality of energy and focus that makes you want to keep talking. This isn't about charisma or likability. Some of my best founders are introverted and direct almost to the point of bluntness. What they share is intellectual intensity and a clear sense of what they're building and why. If I don't want to keep talking, that tells me something.
If the first conversation clears all three questions, I schedule a second meeting and begin the formal due diligence process.
| Phase | Time Investment |
|---|---|
| Pre-diligence (first call) | 30-45 minutes |
| Formal 5-check process | 8-15 hours |
| Final decision / term review | 1-2 hours |
| Total | 10-18 hours |
For very small checks (under $25K) into founders I know well, I sometimes compress or skip parts of the formal process. I'll address that in the "when to skip checks" section.
If there is one belief I've arrived at after 38+ investments, it's this: the quality of the founder matters more than the quality of the idea, the size of the market, or the current state of the product. Markets change. Ideas pivot. Products evolve. The founder is the only constant.
But "founder quality" is not a monolithic trait. It's not about intelligence alone, or charisma, or resilience alone. The specific question I'm trying to answer in this check is narrower and more actionable: does this specific founder have an asymmetric advantage in building this specific company?
The answer to that question is almost always rooted in one or more of three things: domain expertise, distribution advantage, or demonstrated obsession. Let me break each down.
Domain expertise means the founder knows something about this market — through professional experience, personal experience, or research intensity — that gives them a meaningful head start over any well-funded competitor who decided to enter tomorrow.
A founder with 7 years of experience in healthcare revenue cycle management building a product for hospital billing departments has domain expertise. A founder who worked in two health system operations roles and personally processed insurance claims has domain expertise. A founder who read a lot about healthcare and thought it seemed like a big market does not.
The distinction sounds obvious. It's often not obvious in practice, because founders who lack domain expertise are frequently very good at simulating it. They know the terminology, they can name the key players, they can describe the customer's pain point articulately. But when you probe — when you ask them to describe a specific, non-obvious workflow that their product improves, or to name the second-tier competitor that incumbents are actually more worried about than the obvious one — the simulation cracks.
Questions I ask to probe domain expertise:
The last question is particularly diagnostic. Founders who have thought deeply about competitive dynamics can usually describe the incumbent's strategic constraints clearly. Founders who haven't tend to either dismiss incumbents ("they're too slow") or overestimate them ("they could build this if they wanted to").
The second form of asymmetric founder advantage is distribution. A founder who has a personal relationship with 200 potential buyers — because they came from that industry, or built a community in it, or have a podcast listened to by practitioners — can acquire customers at a cost that no well-funded competitor can match in the first 12-18 months.
Distribution advantages are temporary but often long enough to matter. If a founder can acquire the first 50 customers at near-zero CAC through personal relationships, they have 18 months of runway to build the product, generate case studies, and develop an inbound and outbound motion before a competitor with more capital but less distribution catches up.
Questions I ask to probe distribution advantage:
The last question separates founders who have a genuine distribution advantage from founders who are planning to buy their way to customers. The former have a concrete answer. The latter have a vague answer that involves content marketing and cold outreach at scale.
The third dimension of founder-market fit is what I call obsession. The founder was interested in this problem before it was a venture-backed opportunity. They can point to the specific moment — often years before founding — when they became aware of the problem. They have notebooks, Slack messages, half-finished prototypes, or a history of blog posts that predate the company.
Obsession is a proxy for resilience. Founders who are obsessed with a problem will work through the hard periods because they can't stop thinking about the problem. Founders who identified a market opportunity rationally will burn out when the opportunity is harder than it looked.
Questions I ask to probe obsession:
| Red Flag | Why It Concerns Me |
|---|---|
| Founder discovered market through a trend report | No personal insight; following consensus thinking |
| Can't describe a non-obvious customer pain point | Hasn't talked to enough customers, or understands the pain only superficially |
| Domain expertise is 1-2 years; market has 20-year incumbents | Not enough time to develop real insight advantage |
| Distribution plan is entirely paid or inbound | No near-term customer acquisition advantage |
| Struggles to describe what the company won't do | No focus; probably building for investors, not customers |
| Can't name specific people who will be their first customers | Distribution plan is theoretical, not real |
| Previous company in completely unrelated domain | No domain transfer; need exceptional compensating factors |
| Green Flag | Why It Increases Conviction |
|---|---|
| Has worked in this exact domain for 5+ years | Built real insight; understands non-obvious dynamics |
| Has personal experience with the problem | Emotional investment in solving it correctly |
| Can name 10 potential customers who will take their call | Concrete distribution advantage for early traction |
| Prior failed attempt in the space | Has the most valuable form of domain knowledge: what doesn't work |
| Has been talking to customers about this problem for 1+ year before founding | Deep insight; not chasing a trend |
| Design partner or customer was already using the product before funding | Demand is real, not hypothetical |
| Score | Interpretation |
|---|---|
| 9-10 | Exceptional asymmetric fit; deep domain + distribution + obsession |
| 7-8 | Strong fit; at least two of three dimensions clearly present |
| 5-6 | Moderate fit; one dimension strong, others unclear |
| 3-4 | Weak fit; founder capable but not specifically suited to this market |
| 1-2 | No fit; founder chose this market opportunistically |
Weight in overall framework: 35%
This is the highest-weighted check in my framework, and it's not close.
Most angel investors think about market sizing (how big is the TAM?) but far fewer think rigorously about market timing (why now, and not three years ago or three years from now?). These are different questions with different answers, and confusing them is one of the more expensive mistakes I made in my early years.
Market size tells you the ceiling. Market timing tells you whether you'll reach it before you run out of runway.
The venture graveyard is full of companies that were right about the destination but wrong about the speed. Webvan understood that grocery delivery was a large market. They were right. They were also eight years too early — before smartphones, before GPS-optimized routing, before behavioral normalization of food delivery as a category. Being right about the market wasn't enough.
For an early-stage company running on 18-24 months of seed runway, the timing question is specific: are there forces active today that make this market penetrable right now, that didn't exist or weren't active two years ago?
Trigger 1: Technology enabler crossing a viability threshold
The most common timing unlock is a technology that has crossed from "theoretically possible" to "practically deployable." Large language models are the obvious example from 2022-2024 — they crossed a capability threshold that made a whole new class of products viable. GPS reached mass mobile adoption around 2009-2010, enabling the entire gig economy. Cloud computing reaching enterprise-grade reliability in 2010-2012 enabled the first wave of modern SaaS.
For any company I evaluate, I ask: what is the underlying technology that makes this product possible today that wasn't possible or practical 24 months ago? If the founder can't point to a specific recent development, the timing thesis is weak.
Trigger 2: Behavioral shift that has already occurred
The most durable timing advantages come from behavioral changes that have already happened at meaningful scale — not changes that are expected to happen. Remote work normalization happened in 2020-2021. It happened. Companies built on remote-work behavioral patterns that locked in during that era are on solid ground because the behavior change is real. Companies built on behavioral changes that are expected to happen over the next five years are making a bet on prediction.
I look for behavioral shifts with evidence: survey data, industry reports, adoption metrics, regulatory changes that reflect a shift already underway. Not projections. Evidence.
Trigger 3: Regulatory or structural change that has opened a new space
Sometimes markets are large but inaccessible because of regulatory constraints. When those constraints change — through new legislation, court decisions, or regulatory guidance shifts — a new space opens. The best founders in these markets have been watching the regulatory trajectory for years and are ready to build the moment the opening appears.
When I evaluate timing, I use a simple grid:
| Timing Factor | Evidence Required | Weak Signal | Strong Signal |
|---|---|---|---|
| Technology enabler | Specific capability that crossed a threshold | "AI is getting better" | "GPT-4 made X task 10x cheaper, enabling our margin profile" |
| Behavioral shift | Adoption data or observable change | "Remote work is growing" | "Our target buyer now has a distributed team 73% of the time per Bureau of Labor Statistics 2024" |
| Structural change | Regulatory or market structure event | "Healthcare is changing" | "The 2024 CMS ruling on X changed who can bill for Y service" |
| Incumbent inertia | Specific constraint on incumbents | "Big companies are slow" | "The incumbent's platform was built before SaaS; migrating existing customers costs $X" |
| Red Flag | Why It Concerns Me |
|---|---|
| Timing thesis is "the market is big and growing" | Size ≠ timing; no specific enabler identified |
| The same product could have been built 5 years ago | No recent unlock; why is the founder starting now? |
| Timing depends on behavior change that hasn't happened yet | High prediction risk; runway may not be enough |
| No awareness of why incumbents are constrained | May be optimizing for a market that incumbents can enter easily |
| "AI is a tailwind" without specifics | Undifferentiated; applies to every company equally |
| Green Flag | Why It Increases Conviction |
|---|---|
| Can name the specific event that opened this space | Deeply researched; not just following narrative |
| Technology enabler is less than 24 months old | Early in the cycle; competitive window is real |
| Behavioral shift already happened; data supports it | Low prediction risk; building on solid ground |
| Regulatory change creates compliance requirement | Externally mandated demand; not reliant on voluntary adoption |
| Timing window is narrow; founder is already building | Urgency signals good judgment about competitive dynamics |
Weight in overall framework: 20%
I want to address a pushback I hear frequently: "You can't evaluate unit economics at pre-seed. There's no data." This is partially true and mostly wrong.
You're correct that you won't have three years of cohort data at the pre-seed stage. But you can — and should — evaluate whether the unit economics model is structurally sound. The question isn't "what are your current LTV:CAC ratios?" The question is: "Given what you know about CAC, retention, and gross margin in this market, does the math work at scale if execution is reasonable?" For benchmarks by stage, the SaaS metrics data gives useful reference points for what healthy looks like at each funding level.
The founders who have thought through this model are demonstrably different from founders who haven't. The ones who have can tell you their assumption for CAC at scale, their gross margin target, their expected payback period, and what needs to be true about retention for the math to work. The ones who haven't give you hand-wavy answers about growing the top line and "figuring out monetization later."
In B2B SaaS — which is my primary investment category — there is enough public benchmarking data to construct a plausible unit economics model for almost any product. That model will be wrong in specifics. It won't be wrong directionally. And direction is what matters at pre-seed.
I build a simple model with six inputs:
| Metric | Below Average | Average | Strong | Best-in-Class |
|---|---|---|---|---|
| Gross Margin | <60% | 60-70% | 70-80% | >80% |
| CAC Payback | >24 months | 18-24 months | 12-18 months | <12 months |
| LTV:CAC | <2x | 2-3x | 3-5x | >5x |
| NRR | <90% | 90-100% | 100-120% | >120% |
| Logo Churn (annual) | >20% | 10-20% | 5-10% | <5% |
At pre-seed, I'm not expecting the company to be at "average" on all these metrics. I'm checking whether the model, if executed well, could reach "strong" or better. If the structural economics of the market make "strong" impossible — because, for example, the buyer segment has chronic budget constraints, the procurement cycle is long, or gross margins are structurally compressed by cost of goods — that's a fundamental problem, not an execution one.
The last question is important. Many early-stage companies have very low early CAC because founders are doing all the selling through personal networks. That doesn't persist. Understanding how CAC changes as the company scales is one of the most important unit economics questions to ask — the customer acquisition cost calculation walks through the full math.
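To make the structural-soundness test concrete, here is a minimal sketch of the kind of back-of-the-envelope model described above, using a subset of the inputs. The function name and all numbers are illustrative assumptions for a hypothetical B2B SaaS company, not my actual spreadsheet:

```python
# Back-of-the-envelope unit economics for a B2B SaaS model.
# All values are hypothetical; the point is the structure, not the numbers.

def unit_economics(acv, gross_margin, cac, annual_logo_churn):
    """Return CAC payback (months), LTV, and LTV:CAC under simple assumptions."""
    monthly_gross_profit = acv * gross_margin / 12
    cac_payback_months = cac / monthly_gross_profit
    # Naive LTV: average customer lifetime = 1 / churn rate.
    avg_lifetime_years = 1 / annual_logo_churn
    ltv = acv * gross_margin * avg_lifetime_years
    return {
        "cac_payback_months": round(cac_payback_months, 1),
        "ltv": round(ltv),
        "ltv_to_cac": round(ltv / cac, 1),
    }

# Hypothetical pre-seed model: $12K ACV, 75% gross margin,
# $9K fully loaded CAC, 12% annual logo churn.
m = unit_economics(acv=12_000, gross_margin=0.75, cac=9_000,
                   annual_logo_churn=0.12)
print(m)  # payback = 12 months, LTV:CAC ~8.3 — "strong" on the benchmark table
```

Even a model this crude surfaces the structural question: if you have to assume best-in-class values on every input just to reach "average" outputs, the market's economics are the problem, not the founder's execution.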
My B2B SaaS benchmarks above don't apply cleanly to marketplaces or consumer products. For marketplaces, the relevant metrics shift to:
| Metric | Importance | Target |
|---|---|---|
| Take rate | High | Market-appropriate; 10-30% for most verticals |
| Supplier retention | High | Year-2 supplier retention >70% |
| Buyer repeat rate | High | 40%+ of transactions from repeat buyers within 12 months |
| GMV per cohort growth | High | Do cohorts grow their spend over time? |
| CAC per side | Medium | Separately track supplier and buyer CAC |
Consumer products are the hardest to evaluate at pre-seed, which is one reason I've reduced my consumer allocation significantly. The key metrics I focus on for consumer are D30 retention (what percentage of users are still active 30 days after first use), payback period on user acquisition, and evidence of organic virality (k-factor above 0.3 is a meaningful signal).
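The two consumer signals above are cheap to compute if the founder shares raw cohort data. This sketch shows the arithmetic; the function names and cohort numbers are my own illustrative assumptions:

```python
# Illustrative calculation of the two consumer signals discussed above:
# D30 retention and k-factor. Numbers are hypothetical.

def d30_retention(active_flags):
    """Fraction of a signup cohort still active 30 days after first use."""
    return sum(1 for still_active in active_flags if still_active) / len(active_flags)

def k_factor(invites_per_user, invite_conversion_rate):
    """Average number of new users each existing user generates."""
    return invites_per_user * invite_conversion_rate

# Hypothetical cohort: 1,000 signups, 320 still active at day 30.
cohort = [True] * 320 + [False] * 680
print(d30_retention(cohort))   # 0.32

# Each user sends ~2 invites; 18% convert -> k = 0.36, above the 0.3 threshold.
print(k_factor(2.0, 0.18))     # 0.36
```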
| Red Flag | Why It Concerns Me |
|---|---|
| "We'll figure out pricing later" | Pricing is a strategic decision, not a sales tactic |
| Gross margin below 50% in software | Cost structure suggests services or infrastructure dependency |
| CAC payback exceeds 24 months | Capital-intensive growth; dependent on continuous fundraising |
| No answer on what drives NRR | Doesn't understand customer expansion or contraction dynamics |
| Revenue model requires volume before margin appears | Structural trap for early-stage companies |
| ACV so low that unit economics only work with enormous volume | Path to efficient unit economics requires massive scale |
| "Our competitors charge X, we'll charge less" | Pricing-as-strategy without margin analysis |
| Green Flag | Why It Increases Conviction |
|---|---|
| Gross margin >70% with a clear path to >80% | Healthy software economics; can fund growth from revenue |
| CAC payback under 12 months | Fast return on customer acquisition investment |
| Early customers expanding without prompting | Strong product-market fit signal; NRR will be excellent |
| Pricing anchored to value delivered, not competitor pricing | Pricing discipline; pricing will improve as company scales |
| Budget owner is clear; purchase doesn't require committee | Low CAC sales motion; fast revenue realization |
| Demonstrated ability to sell without founder involvement | Early evidence of repeatable sales motion |
Weight in overall framework: 20%
Angel investors skip reference checks more often than any other step in the diligence process. The most common reason is time: "I only have a few days before the round closes, and the references will slow me down." The second most common reason is social discomfort: "I know this founder through a mutual friend, and it feels awkward to run references."
Both reasons are wrong.
The round closing pressure is often manufactured — or at minimum, it's a negotiating tactic that will soften if you explain that you're committed but need to complete your process. Any founder who won't give you a week to run references is a red flag in itself.
The social discomfort reflects a misunderstanding of what references are for. A reference check is not an investigation. It's a structured conversation designed to surface the founder's working style, growth areas, and track record — information that will make you a better investor and advisor to the company you're about to join as a stakeholder.
The best founders I've invested in welcomed reference checks enthusiastically. They had a list of contacts ready, including people who would give me honest mixed feedback. The founders who pushed back hardest on references were, in two cases, hiding things that would have significantly affected my decision.
I run a minimum of three reference calls for any check above $50K. My target breakdown:
| Reference Type | Number | Why |
|---|---|---|
| Former colleagues (same level or below) | 1-2 | Reveals working style, treatment of subordinates |
| Former managers | 1 | Reveals execution quality and career progression |
| Early customers or customer references | 1-2 | Reveals how founder sells and supports |
| Former investors (if any) | 0-1 | Reveals investor relationship quality |
Note that I do not let the founder choose all my references. I ask for a starter list and then I go "off list" — I find 1-2 references independently through LinkedIn, mutual connections, or the company's early customer network. The off-list references are almost always the most informative, because the people a founder deliberately omits from their reference list are the ones most likely to give you nuanced or critical feedback.
My calls run 20-30 minutes. I open with:
"I'm considering investing in [founder's name] and [company name]. I'm not looking for a performance review — I just want to understand how [founder] works and what makes them exceptional. Can I ask you a few questions?"
Then I move through six core questions:
The fifth question — the "would you join" question — is the most diagnostic. References who have worked closely with someone and would not join their company are telling you something important, even if they package it diplomatically.
The sixth question almost always produces the most valuable information in the call. References want to be helpful. When you give them an explicit opening to surface concerns, most will take it.
Reference calls are as much about what isn't said as what is said. Watch for:
| Red Flag | Why It Concerns Me |
|---|---|
| Founder pushes back on references or delays providing contacts | What are they trying to prevent me from hearing? |
| All references are personal friends, not professional colleagues | Self-selected for maximum positive bias |
| Reference gives noticeably vague answers to specific questions | May be protecting founder from honest feedback |
| Multiple references independently mention the same concern | Pattern signal; not one person's opinion |
| Reference would not join the company if asked | Strong negative signal from someone with close knowledge |
| Founder provided only senior references, no peer/subordinate contacts | Missing the references most likely to reveal working style |
| No reference has worked with the founder in the last 5 years | Too much time has passed for a reliable character assessment |
| Green Flag | Why It Increases Conviction |
|---|---|
| Founder proactively offers a mixed reference ("call X, she'll give you honest feedback") | High integrity; self-awareness |
| Multiple references independently describe the same strength without being prompted | Consistent character signal across sources |
| Reference would immediately join the company if asked | Extraordinary endorsement from someone with full information |
| Off-list references are as positive as on-list references | No material omissions from reference list |
| References describe specific instances of excellence, not generalizations | Evidence-based endorsement, not platitudes |
| Customer references describe the founder as a partner, not a vendor | Commercial integrity; long-term relationship orientation |
Weight in overall framework: 15%
The momentum check is the most recent-data-focused of the five checks. It answers a specific question: what is the trajectory of this company right now, and what explains that trajectory?
I focus specifically on the last 90 days because that's recent enough to reflect current reality rather than historical narrative, and long enough to represent a real trend rather than a single data point. A company that grew 25% in the last 90 days is in a meaningfully different position from one that grew 5% — and both are in different positions from one that declined.
Momentum is a leading indicator in both directions. Accelerating momentum is hard to fake. It means something is working in the go-to-market, or the product has reached a level of functionality that customers want, or a recent marketing initiative has opened a new channel. Decelerating momentum is also hard to hide — though founders try, by averaging it over a longer period, focusing on lagging metrics, or attributing slowdowns to one-time factors.
For revenue-generating companies, I ask for:
| Metric | What I'm Looking For |
|---|---|
| MoM revenue growth (last 3 months) | Trend line, not just point-in-time |
| MoM revenue growth (preceding 3 months) | Is the trend accelerating or decelerating? |
| New customer additions per month (last 3 months) | Separate from revenue; accounts for ACV changes |
| Gross churn rate (last quarter) | Are customers leaving? |
| Pipeline velocity | How fast are deals moving to close? |
| NPS or equivalent customer satisfaction signal | Leading indicator of retention |
For pre-revenue companies, I ask for:
| Metric | What I'm Looking For |
|---|---|
| Active users or engaged beta users | Real usage, not just signups |
| User session depth or engagement metrics | Are users getting value? |
| Waitlist or inbound demand growth | Organic demand building |
| Customer interviews completed in last 90 days | Founder is learning rapidly |
| Product iterations shipped in last 90 days | Velocity of learning and building |
| Letters of intent or verbal commitments from prospective customers | Real demand signals |
The momentum metrics themselves are only half the check. The more valuable half is asking the founder to explain the "why" behind the numbers. This is where you learn whether the founder has genuine insight into their business or is reporting numbers without understanding what drives them.
Questions I ask in the momentum check:
The fourth question — "what hasn't worked" — is particularly important. Founders who can articulate their recent failures with specificity and describe what they've done about them are learning from their business in real time. Founders who struggle to identify failures in the last 90 days either have an exceptional business or are not being honest with themselves.
I've been shown manufactured momentum. Founders with financial sophistication can present numbers in ways that imply strong performance while hiding fundamental problems. Here's what I watch for:
Signs of manufactured momentum:
When I see any of these, I ask for raw data. Not a formatted presentation — the actual spreadsheet or database export. Founders with genuine momentum are almost always happy to show you the raw numbers because they're proud of them. Founders who are manufacturing momentum resist or delay.
Beyond the metrics, I look for specific milestone events in the last 90 days that represent inflection points:
| Milestone | Conviction Impact |
|---|---|
| First enterprise customer signed | Validates upmarket potential |
| First customer without founder involvement in sales | Repeatable sales motion emerging |
| First revenue from an unexpected segment | Reveals larger TAM than pitched |
| Product shipped that moved a key metric | Product-market fit tightening |
| Key hire closed (senior with domain expertise) | Team building against conviction |
| Inbound introductions from potential acquirers | Strategic value emerging |
| Major media coverage driving organic signups | Brand building; organic growth possible |
| Red Flag | Why It Concerns Me |
|---|---|
| Metrics quoted in non-standard terms | May be hiding unflattering standard metrics |
| Momentum relies on a single large customer | Concentration risk; not repeatable |
| No clear explanation for what drove recent growth | Founder doesn't understand their own business |
| Growth decelerating and founder attributes it to external factors | Blame externalization; accountability gap |
| No product iterations in last 60 days | Stalled development |
| Customer conversations declining in frequency | May be avoiding bad news from market |
| "We just need capital to grow" without demonstrated demand | Capital won't solve an absent demand signal |
| Green Flag | Why It Increases Conviction |
|---|---|
| Revenue growth rate accelerating month-over-month | Something working and improving |
| Can explain the specific driver of recent growth | Knows their business deeply |
| Growth coming from unexpected source (reveals larger market) | Positive surprise signal |
| Customers acquired in last 90 days larger than average | Moving upmarket naturally |
| Founder describes recent failure candidly and specifically | Learning at speed; intellectual honesty |
| New revenue without founder in the sales process | Early repeatable motion |
Weight in overall framework: 10%
The natural question is: how much time should each check take? Here's my working allocation for a standard diligence process:
| Check | Typical Time | Primary Activities |
|---|---|---|
| Founder-Market Fit | 3-4 hours | 2-3 founder conversations; background research; competitive analysis |
| Market Timing | 1-2 hours | Market research; timing thesis validation; competitive mapping |
| Business Model Integrity | 2-3 hours | Unit economics modeling; financial documents review; customer economics calls |
| Reference Check | 2-3 hours | 3-4 reference calls at 20-30 min each; off-list source identification |
| Momentum Check | 1-2 hours | Metrics review; founder interview on recent trajectory; data audit |
| Total | 9-14 hours | Excludes first conversation and final decision work |
The founder-market fit check takes the longest because it's the most judgment-intensive and the hardest to compress. The market timing check is faster because much of it is desk research rather than primary conversation. The momentum check is efficient because the data is either there or it isn't.
For investments above $100K, I add a sixth step: a deep product review session where I use the product myself (for software) or review the technical architecture with a trusted engineer who can evaluate build quality and debt.
I'll be direct about this: I do skip checks in specific circumstances, and I think pretending I don't would be dishonest.
Circumstances where I skip or compress checks:
Small checks into known founders. If I'm writing a $10K-$25K check into a founder I've worked with closely for 2+ years — either as an advisor, a prior investment, or a professional relationship — I compress the formal process significantly. I still ask the momentum questions and do a quick market timing review, but I skip the extensive founder-market fit conversation and references because I already have that data from lived experience.
Follow-on investments into existing portfolio companies. If I'm exercising pro-rata rights in a company I've invested in, the due diligence is fundamentally different. I focus entirely on the momentum check and a quick re-evaluation of whether my original thesis has been validated, partially validated, or changed. I don't re-run the full five checks.
Very fast-moving rounds with a trusted lead. If a lead investor I know and trust well has done thorough diligence and I'm joining a round at the same terms with a $15K-$25K check, I sometimes compress the process and rely partially on the lead's judgment. This is explicitly a shortcut, and I accept the quality tradeoff consciously.
What I never skip, regardless of circumstances:
Even in relationships where I trust the founder completely, I find that going through even a compressed version of these checks produces better outcomes — both because it catches things I wouldn't otherwise catch and because it forces the founder to articulate their thinking in ways that are valuable for our working relationship going forward.
After running all five checks, I aggregate the scores into a weighted total and apply the decision matrix. Here is the full scoring framework:
| Check | Weight | Score (1-10) | Weighted Score |
|---|---|---|---|
| Founder-Market Fit | 35% | __ | __ |
| Market Timing | 20% | __ | __ |
| Business Model Integrity | 20% | __ | __ |
| Reference Check | 15% | __ | __ |
| Momentum Check | 10% | __ | __ |
| Total Weighted Score | 100% | — | __ / 10 |

| Weighted Score | Decision | Action |
|---|---|---|
| 8.5 - 10.0 | Strong invest | Full check size; pursue pro-rata rights actively |
| 7.5 - 8.4 | Invest | Standard check size; request pro-rata |
| 6.5 - 7.4 | Invest small | 50% of standard check; no pro-rata request |
| 5.5 - 6.4 | Pass / Stay close | Decline; add to watch list for future raise |
| Below 5.5 | Pass | Decline; no follow-up scheduled |
The weighted score is a guide, not a formula. I apply two override rules that can change the decision regardless of score:
Override 1: Single-check hard pass. If any individual check scores below 3, I pass regardless of the total score. A founder-market fit score of 2 cannot be overcome by a momentum score of 9. These are not compensatory in my framework — each check identifies a different type of fatal flaw.
Override 2: Reference check veto. If reference calls surface a specific, credible concern about integrity — not just "she's hard to work with" but something that suggests the founder is not honest with investors, has a history of misrepresenting metrics, or treats team members in ways that would damage culture — I pass regardless of score. Integrity concerns are not overridable by business quality.
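The weights, decision bands, and both overrides can be sketched as a single function. This is my own illustration, not the author's actual tooling: the function name, data shape, and label strings are assumptions, while the weights, thresholds, and override rules come directly from the framework above.

```python
# Sketch of the five-check scoring framework with both override rules.
# Weights and thresholds are from the article; everything else is illustrative.

WEIGHTS = {
    "founder_market_fit": 0.35,
    "market_timing": 0.20,
    "business_model": 0.20,
    "references": 0.15,
    "momentum": 0.10,
}

def decide(scores: dict, integrity_concern: bool = False) -> tuple[float, str]:
    """Return (weighted score out of 10, decision) for one opportunity.

    `scores` maps each check name in WEIGHTS to a 1-10 rating.
    """
    total = round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

    # Override 2: a credible integrity concern vetoes everything.
    if integrity_concern:
        return total, "Pass (integrity veto)"

    # Override 1: any single check below 3 is a hard pass,
    # regardless of how strong the other checks are.
    if min(scores.values()) < 3:
        return total, "Pass (single-check hard pass)"

    # Decision matrix bands.
    if total >= 8.5:
        decision = "Strong invest"
    elif total >= 7.5:
        decision = "Invest"
    elif total >= 6.5:
        decision = "Invest small"
    elif total >= 5.5:
        decision = "Pass / stay close"
    else:
        decision = "Pass"
    return total, decision
```

Note that the overrides are implemented as early returns before the decision bands are consulted, which mirrors the point that the checks are not compensatory: a high total cannot rescue a fatal flaw in any single check.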
After completing all five checks, I give myself a minimum of 48 hours before communicating a final decision. During those 48 hours, I write a one-page investment memo that forces me to articulate the core thesis and the biggest risk in plain language.
The act of writing the memo catches reasoning errors that survive informal analysis. Several times, I've started writing a memo in favor of an investment and discovered — mid-sentence — that I couldn't write honestly about the biggest risk without becoming concerned. That discovery has saved me from at least two poor investments.
Since implementing the formal five-check framework in 2021, my pass rate on opportunities I've formally evaluated has increased from roughly 40% to roughly 65%. I'm investing in fewer companies but with substantially higher average conviction. On paper, the investments made under the formal framework are outperforming the pre-framework investments, which is consistent with the hypothesis that structured diligence improves decision quality. The portfolio diversification math explains why conviction and breadth both matter in the same portfolio.
The most common reason I pass now, according to my own records, is founder-market fit. Roughly 45% of formal passes are founder-market fit failures. The second most common is business model integrity — mostly companies where the structural unit economics cannot get to a healthy number even with excellent execution. A full breakdown of the wins and losses this framework has produced is in what our 6 wins and 5 losses teach about investment risk.
The rarest pass reason is momentum — which makes sense, because a company with weak momentum usually also has other problems that surface earlier in the framework.
For a standard investment of $25K-$75K, the process runs 2-3 weeks from first conversation to decision. For larger checks ($75K+), I allocate 3-4 weeks to allow time for a deeper reference check and any additional product or technical review. The fastest I've ever moved from intro call to signed commitment is 5 days — and in retrospect, that was too fast, even though the investment turned out well.
Occasionally, and only when I have an ongoing relationship with the founder. If a founder I genuinely respect and want to support is pitching a company that scores low on market timing, I'll share that specific concern and why — not the score itself, but the substantive concern. I've had several founders come back to me 12-18 months later after addressing the concern, and I've invested in two of them at a later stage.
I don't share scoring for passes where the founder-market fit concern is the primary driver. That conversation is rarely productive and often harmful to the relationship.
This is a situation I've encountered three times. My approach: full transparency with both founders immediately. I tell both that I've seen the other company and that I'll be making a decision about which, if either, I'm investing in. I don't share confidential information from either diligence process with the other founder. And I make the decision purely on the five-check framework, not on which founder I have a better personal relationship with.
In practice, this situation usually resolves itself quickly because the frameworks produce meaningfully different scores for each company.
I don't. Once I've invested in a company, I consider the sector locked for direct competitors. I've turned down several opportunities that looked attractive because I had an existing investment in a direct competitor. The long-term returns from keeping conflicts of interest clean outweigh any single investment opportunity.
I treat it as a yellow flag, not necessarily a red flag. Some founders — particularly those who have had investors misuse financial data in prior fundraises — are cautious about sharing detailed metrics before a commitment. I offer two alternatives: I can sign an NDA before data sharing, or I can proceed on the basis of ranges rather than actuals and give a provisional commitment contingent on data verification.
If a founder won't share any financial data and won't sign an NDA, I pass. At that point the reluctance has crossed from caution to opacity, and opacity is a red flag.
The five checks are universal, but some of the benchmarks shift for different markets. Unit economics benchmarks differ by geography — a strong LTV:CAC ratio in India may look different from one in the US given different price points and CAC dynamics. Reference check methodology also differs internationally; in some markets, asking for off-list references is culturally unusual and I adapt my approach accordingly.
The founder-market fit check and market timing check are structurally identical regardless of geography. The business model integrity benchmarks require the most calibration for different markets.
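As a worked illustration of the unit-economics calibration point, here is a minimal LTV:CAC sketch using the common approximation LTV = ARPU × gross margin ÷ monthly churn. Every number in the example is hypothetical; the article does not specify a formula or any benchmark values, so treat this purely as a shape for the calculation, with thresholds to be calibrated per geography.

```python
# Minimal LTV:CAC sketch. All figures below are hypothetical examples,
# not benchmarks from the article.

def ltv_cac_ratio(arpu_monthly: float, gross_margin: float,
                  monthly_churn: float, cac: float) -> float:
    """LTV:CAC using the simple approximation LTV = ARPU * margin / churn."""
    ltv = arpu_monthly * gross_margin / monthly_churn
    return ltv / cac

# Hypothetical SaaS example: $50/mo ARPU, 80% gross margin,
# 3% monthly churn, $400 blended CAC.
ratio = ltv_cac_ratio(arpu_monthly=50, gross_margin=0.80,
                      monthly_churn=0.03, cac=400)
```

The same ratio formula applies everywhere; what shifts by market, per the point above, is the inputs (price points, CAC dynamics) and therefore what counts as a "healthy" threshold.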
Anchoring on the first piece of positive information they receive. The presentation is almost always designed to lead with the most compelling data point — a strong growth number, a prestigious customer logo, an impressive advisor. Once that anchor is set, everything else in diligence gets evaluated relative to it.
The way I counter this is to commit my first impressions to paper before diligence starts, then see whether the evidence across the five checks supports or undermines those first impressions. When the evidence undermines my first impression, I take that seriously rather than re-rationalizing back to the anchor.
Primarily through three sources: reading the public filings and investor letters of later-stage companies in the sectors I invest in, regular conversations with operators (not investors) in the domains I'm evaluating, and curated databases of startup metrics like those published by SaaStr, Bessemer, and OpenView. The operator conversations are the most valuable — they give you ground-truth benchmarks from people running businesses, not theoretical benchmarks from investors.
Immediately. Even if you're writing $5K checks and have no formal track record, using a structured framework from the beginning builds the muscle before you have larger amounts of capital at stake. The cost of having an amateur framework when you're writing $5K checks is low. The cost of having an amateur framework when you're writing $75K checks is very high.
Start with the minimum viable version: founder-market fit, market timing, and momentum. Add business model integrity and reference checks as your check sizes grow.
I've learned to treat indecision as a signal. In the early years, when I was on the fence, I'd often find a reason to invest — usually social pressure from the deal ecosystem or FOMO about the round closing. Since I've been disciplined about the framework, I treat a genuine fence-sit (score in the 6.5-7.5 range with no strong conviction either way) as a sign to either invest at half my standard check size or pass entirely.
The investments where I had to talk myself into it have underperformed the investments where the decision was clear. Not every time — there are exceptions — but the pattern is consistent enough that I've stopped trying to force decisions that the framework doesn't make obvious.
The five checks are not a guarantee. Early-stage investing is probabilistic, and even a perfect diligence process will produce failures. What the framework gives you is not certainty — it gives you a consistent methodology that produces better decisions at the margin, catches avoidable mistakes, and builds a track record you can actually learn from.
The most important thing I've learned across 38+ investments is that the discipline of the framework matters as much as the framework itself. Running four of five checks rigorously and skipping the fifth under time pressure is not 80% of the process. It's a different process entirely — and the check you skip is usually the one that would have changed your decision.