Welcome to Slow Ventures’ Snailmail, where we slow down and share what’s been on our partners’ and founders’ minds this week.


How to Price Out-Of-The-Money Seed Investment (Call Options)

Sam Lessin

Context:
I just did an interview and the question came up: how do you actually think about pricing seed-stage AI investing in 2026? Because the math really has changed. There is so much narrative gravity, so much capital piling into foundation models, and yet so few real businesses on top. Most of what people are calling "AI investing" right now is really just betting on which big lab wins, with VCs paying first-class prices for what is effectively a lottery ticket.

That framing is wrong, and I think there is a cleaner way to think about it. At seed, you are evaluating two separate things at once: (1) is this a real business with a path to a real outcome, and (2) given how wild the next decade looks, what is the chance this thing becomes something nobody can imagine today.

Most seed investors are anchoring on the first variable and pretending the second doesn't exist. I think you have to do both.

Market Signal:
In normal seed math, you are buying ownership at a price that has to make sense if the company hits its base-case outcome. In AI seed math, the base case has gotten harder to value because the technology underneath is moving faster than any business model on top of it. The realistic outcome distribution is now bimodal in a way it has not been before: either the company gets steamrolled by the next model release, or it captures something nobody on the cap table fully anticipated when they wrote the check.

That looks an awful lot like an out-of-the-money call option. You are not really paying for the present value of expected cash flows. You are paying for the optionality on a future world that does not exist yet.

The implication: at seed in 2026, you are pricing two variables at once. One is the conventional business question (is the team good, is the wedge real, will customers pay). The other is the option-pricing question (how cheap is it, and how convex is the upside if the world reorganizes around AI in a way nobody can predict).

The honest read is that most of the seed checks being written into AI today are being priced as if only the first variable matters, when actually the second variable is doing 80% of the work. That gap is where the money is. And it is also why the same companies that look "expensive" today might look obviously underpriced in three years. Or look like nothing.
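The option framing can be made concrete with a toy calculation. The sketch below prices a deep out-of-the-money call under Black-Scholes; every number in it is illustrative, not anything from the essay. The point it demonstrates: at public-market-ish volatility the option is worth almost nothing, while at venture-level volatility nearly all of its value is pure optionality on the tail.

```python
# Illustrative only: a deep out-of-the-money call, standing in for a
# seed check on an unlikely-but-huge outcome. All parameters hypothetical.
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot: float, strike: float, vol: float, t: float, r: float = 0.0) -> float:
    """Black-Scholes European call price."""
    d1 = (log(spot / strike) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)

# A call struck at 10x today's value, five years out.
price_low  = bs_call(spot=1.0, strike=10.0, vol=0.5, t=5.0)  # public-market-ish vol
price_high = bs_call(spot=1.0, strike=10.0, vol=2.0, t=5.0)  # venture-level vol

print(f"low vol:  {price_low:.3f}")   # pennies on the dollar
print(f"high vol: {price_high:.3f}")  # most of the dollar
```

Same strike, same horizon; the only thing that changed is volatility, and it moves the fair price by well over an order of magnitude. That is the sense in which the second variable is "doing 80% of the work."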

Takeaways:

  • Seed AI is option pricing. Stop pretending it is conventional cash-flow math. The check is the premium, the outcome the round has to clear is the strike, and the convexity is the optionality on a future world.

  • The two variables are independent. A great team with a small option is a fine business but a bad seed bet. A weak team with a massive option is a lottery ticket. The seed sweet spot is where both are real.

  • A single 140x in the biggest winner can't carry the fund on its own. That is the structural problem with seed in this era. If the model companies capture the head of the outcome distribution and the application layer gets the tail, the seed math has to clear an absurdly high bar to make the fund work.

  • For founders: know which variable you are selling. If you are pitching the conventional business, your seed terms should reflect that. If you are pitching the option, your seed terms should reflect THAT. Conflating the two is how rounds break.
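The "absurdly high bar" in the takeaways can be made concrete with a toy version of seed fund math. Every number below is a made-up assumption for illustration (fund size, check size, post-money, dilution), not anything stated in the essay.

```python
# Hypothetical seed-fund arithmetic: what must a single winner exit at
# for that one position to return the entire fund? All numbers assumed.

def exit_needed_to_return_fund(fund_size: float, final_ownership: float) -> float:
    """Exit value at which one position alone pays back the whole fund."""
    return fund_size / final_ownership

fund_size = 100e6                              # assumed $100M fund
check = 2e6                                    # assumed $2M check at a $20M post-money
ownership_at_entry = check / 20e6              # -> 10% at entry
final_ownership = ownership_at_entry * 0.5     # assume ~50% dilution by exit -> 5%

bar = exit_needed_to_return_fund(fund_size, final_ownership)
multiple = final_ownership * bar / check       # multiple on the original check

print(f"exit needed for one winner to return the fund: ${bar/1e9:.1f}B")
print(f"that is a {multiple:.0f}x on the original check")
```

Under these assumptions a lone winner has to exit at $2B just to return the fund once, and the winners have to be far rarer and far bigger than conventional seed math assumes.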

Everyone in Between Is Probably Cooked

Will Quist

Context:
The clearest read on the AI stack right now: there are two durable ends, and a middle that is going to get squeezed out.

If you want outcomes, you will buy them from the foundation-model providers. If you want employees, you will run specific models on your own proprietary data. The companies caught in between are doing neither well.

Market Signal:
The model rat race still matters. But the companies that win are not going to be the ones with the best model on any given Tuesday. They are going to be the ones who can monetize today's model fast enough to fund tomorrow's training run, while simultaneously building products that don't depend on model leadership.

That is a two-variable problem. Almost no one is solving for both.

Pure model companies are in a brutal race where the frontier moves every few months. Pure application companies are model-agnostic, which means any competitor can swap in the same model tomorrow. Neither position is durable.

The durable position combines both: proprietary data or workflows that make your model materially better, together with products that capture value regardless of which model is best on any given Tuesday.

That is the bet. Not the model. Not the wrapper. The integration.

Takeaways:

  • Outcome buyers go to the labs. When you want a thing done, you will go to the provider that owns the model. The labs win the consumer outcomes layer.

  • Employee buyers run their own. When you want a model that knows your company, your data, and your processes, you will run a smaller model on your stack. The proprietary-data layer wins the enterprise.

  • Neither also-ran model companies nor generic wrappers survive the squeeze. The two durable ends move toward each other and crush everyone in between.

  • The question isn't "which model will win." It's "where in the stack does compounding value live." That is not a model question. It is a data and workflow question.

Don’t Wait For The AI Shock

Yoni Rechtman

Context:
Every conversation about AI eventually arrives at the same place: jobs, displacement, and what we owe people when the work disappears. The reflexive answer that has dominated the discourse is universal basic income. Send everyone a check. Decouple income from labor. Done.

I find myself saying more and more that this answer is wrong. Not because I think UBI is impossible. Because I think it is a profound surrender of the things that make a society work.

Market Signal:
The case for UBI rests on a particular reading of where the AI shock lands: jobs vanish in a single wave, capital concentrates in a small set of firms, and the only humane response is to redistribute the surplus directly to people who used to do the work. Each step in that argument is contestable. Jobs do not vanish in a wave. They migrate. Capital does concentrate, but it also gets deployed into new categories of work that didn't exist before. And the historical track record of "just pay people not to work" is genuinely bad for the people receiving the check.

What I want, and what I think the politics will eventually demand, is a society that re-deploys the energy AI frees up into work that humans are uniquely positioned to do. Care work. Creative work. The long tail of things that benefit from a human in the loop. Not because AI cannot do them, but because the value of a human doing them is materially higher.

The "four jobs" frame I wrote about last month was a provocation. The honest version is closer to forty. But the framework still holds. The economy reorganizes. New categories of work emerge. The question is not whether AI takes jobs. It is whether we build the institutions that let people move to the next thing without falling through the floor.

UBI is a coping mechanism for a system that has given up on that move.

Takeaways:

  • The shock isn't a wave. It is a migration. Treating it as a wave produces the wrong policy.

  • Capital concentrates, but it also creates new categories. The relevant question is whether the migration paths are open or closed.

  • UBI is an institutional surrender. It treats the worker as a beneficiary of the system rather than a participant in it. That is bad for the worker and bad for the society.

  • The right policy infrastructure is migration infrastructure. Re-skilling, geographic mobility, healthcare detached from employment, lower frictional costs of starting and joining new ventures. None of this is sexy. All of it is more durable than a check.

  • The harsh truth is: the people most loudly advocating UBI in San Francisco today are the same people who would never accept it for themselves. There is something to that.

More Musings From The Team

Disagree with anything? Hit reply; we read every note.
