Welcome to Slow Ventures’ Snailmail, where we slow down and share what’s been on our partners’ and founders’ minds this week.
Correction from last week: The companies listed under PortCo Activity were incorrect.

Prove You’re a Robot: Why the Next CAPTCHA is for Bots, Not Humans
Sam Lessin

Context:
I had one of those classic pool house chats with my good friend, an ex-Fin co-worker, and computer founder Rob Cheung. We started riffing on how “prove you are a human” on the internet has gone from this theoretical, academic concern to a state-actor-level problem, to now: a deep, universal one. At this point, everyone should just assume all content is fake or machine-made until proven otherwise.
But here’s the twist: we’re all obsessed with “prove you’re a human,” but there are actually some killer use-cases where you want to prove you’re a bot (not a human). And hilariously, it’s pretty easy to implement—at least at first.
The basic idea: just ask questions on the API with a time limit that no human could possibly answer fast enough.
How it Works (and why it’s fun):
Make a request to make a request: before anything else, ask the endpoint for a challenge.
Get back a math or science problem with a 2-second time limit.
Submit your actual request to the endpoint, along with your solution.
Unlike the usual human CAPTCHAs (solved once per login or session), you can do this with literally every API call, at a very low (but not zero) cost.
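The three-step handshake above can be sketched in a few lines. This is a minimal, hypothetical illustration (the names, the multiplication challenge, and the in-memory store are all my assumptions, not a real API): the server hands out a timed problem, and only a machine can answer within the deadline.

```python
import secrets
import time

CHALLENGE_TTL = 2.0  # seconds: trivial for a machine, impossible for a human

_challenges = {}  # challenge_id -> (a, b, issued_at); a real service would use a shared store

def issue_challenge():
    """Step 1: 'make a request to make a request' and receive a timed problem."""
    challenge_id = secrets.token_hex(8)
    a, b = secrets.randbelow(10**9), secrets.randbelow(10**9)
    _challenges[challenge_id] = (a, b, time.monotonic())
    return challenge_id, f"{a} * {b}"

def verify(challenge_id, answer):
    """Step 3: accept the real request only if the solution arrived in time."""
    record = _challenges.pop(challenge_id, None)  # single-use: pop, don't get
    if record is None:
        return False
    a, b, issued_at = record
    in_time = (time.monotonic() - issued_at) <= CHALLENGE_TTL
    return in_time and answer == a * b

# A bot's round trip: fetch the problem, solve it at machine speed, pass.
cid, problem = issue_challenge()
x, y = map(int, problem.split(" * "))
assert verify(cid, x * y)
```

Note the challenge is popped on verification, so each solution buys exactly one API call, which is what makes the per-call (rather than per-session) model work.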
But it gets more interesting: solving problems that are just beyond human speed or ability is cheap for machines, but it's not free. That small, nonzero cost turns out to be a lever. So what can you do with it?
Here’s where it gets spicy:
A) Pay with compute.
Humans have been labeling “where’s the bus?” for image recognition datasets. But machines? They can solve actual compute problems for you as the price of accessing your API. Want in? Solve for me.
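One way to picture "pay with compute": make the challenge a shard of work the provider actually wants done, verified by spot-checking rather than redoing it. A toy sketch under my own assumptions (the record names are made up, and SHA-256 fingerprinting stands in for whatever useful job a real provider would farm out):

```python
import hashlib
import secrets

def pay_with_compute(records):
    """Client side: the price of admission is fingerprinting this batch."""
    return [hashlib.sha256(r.encode()).hexdigest() for r in records]

def settle(records, receipts):
    """Server side: spot-check one random item instead of redoing the batch."""
    i = secrets.randbelow(len(records))
    return receipts[i] == hashlib.sha256(records[i].encode()).hexdigest()

# The server hands over a batch; the bot does the work as payment.
batch = ["order:1842", "order:1843", "order:1844"]
receipts = pay_with_compute(batch)
assert settle(batch, receipts)  # payment accepted, API call proceeds
```

The spot-check is the important design choice: verification has to cost the server far less than the work costs the client, or the "payment" nets out to nothing.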
B) Dynamic rate limiting.
Make the problems progressively harder, and therefore more expensive to solve, as a caller's request volume climbs. Not all problems are created equal, even for bots, so you can make rate limits granular, call by call.
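A hashcash-style puzzle is one natural way to sketch this (my choice of mechanism, not something the post specifies): the required number of leading zero bits, and therefore the expected compute per call, scales with how many requests the caller has made recently. There is no hard cutoff, just a price curve.

```python
import hashlib
import secrets

def difficulty_for(recent_calls, base_bits=8, step=2):
    """More calls in the window -> exponentially more compute per call."""
    return base_bits + step * recent_calls

def solve(challenge, bits):
    """Client: grind nonces until the hash has the required leading zero bits."""
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def check(challenge, nonce, bits):
    """Server: verifying a solution costs one hash, regardless of difficulty."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

challenge = secrets.token_hex(8)
bits = difficulty_for(recent_calls=2)  # 8 + 2*2 = 12 leading zero bits
nonce = solve(challenge, bits)          # ~4,096 hashes expected
assert check(challenge, nonce, bits)
```

Each extra bit doubles the client's expected work while the server's verification stays at a single hash, which is exactly the asymmetry a per-call rate limit needs.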
C) Differentiation, benchmarking, gating.
Could you start to differentiate between hardware, models, or even do benchmarking? Sure, most bots look the same, but with the right API and problem types, you could tell the difference between types of matrix-multiplier hardware at scale. Want to keep open-source models out (or in)? Set the challenge accordingly. Want access? First, pay me with your best interpretive painting of a mouse playing with sexy cheese.
Market Signal
The arms race isn’t just humans vs. bots—it’s bots vs. bots, and we need new primitives for trust and access.
API providers are already thinking about how to extract value from bots, not just humans. “Proof of bot” is a primitive that’s about to get productized.
As compute becomes the new currency, “pay with cycles” could be the next big API monetization model.

OpenAI’s $110B Endgame: Welcome to the First AI GSE
Yoni Rechtman

Context:
OpenAI just pulled off the largest private fundraise in history: $110B. That's not a typo. On the very same day, Trump publicly threatened to kill Anthropic, and OpenAI, with a little equivocation, stepped right over the body to grab the Pentagon contract. This was not a normal week in tech, but it's the kind of week that tells you where the puck is headed.
Market Signal:
Foundation models (GPT-4, Claude, Gemini) are mind-blowing products. But are they good businesses? I’m not so sure. Training these models isn’t R&D anymore, it’s a cost of revenue. If you stop training, your users churn to the next best thing or a knockoff. The labs are burning through capital to build assets that depreciate faster than a WeWork office lease. And each new state-of-the-art model costs more than the last. We’re staring down the barrel of $1B+ training runs. There are only so many places you can raise $100B. For context, the largest IPO in history (Saudi Aramco) was “just” $30B. Capital markets are running out of road. Retail and public markets can’t absorb this kind of capex.
Takeaways:
The obvious next check comes from Uncle Sam. The only things propping up the economy right now are AI capex and the great national nursing home. Without AI infrastructure spend, we’d be in a recession, and no administration (Trump or otherwise) wants that on their watch. The DoD contract fight is just a preview of what’s coming: the government picking winners, Chinese-style.
The endgame? OpenAI becomes a GSE (Government-Sponsored Entity) or SOE (State-Owned Enterprise), at least in practice if not on paper. The specific mechanism (equity, debt, guarantees, some weird new structure) is almost beside the point. What matters is that the taxpayer is almost certainly left holding the bag.
Asks:
If you’re an operator, investor, or just someone who cares about the future of startups:
Rethink what “building on top of AI” means when the underlying platforms are essentially state-subsidized utilities.
Consider where capital will flow if foundation model training becomes a national priority, not a private market play.
Start demanding more transparency on how these deals are structured—because you, me, and everyone else might be footing the bill.

The Trillion-Dollar Bottleneck Nobody's Building For
Christian Catalini (Slow Founder Alum)

Context
For 300,000 years, human brains were the only game in town. Now we're building a second kind of intelligence: one that doesn't inherit our muscles or hormones, just the sum total of everything we've ever written, drawn, coded, and measured.
The real economic constraint of the AI era isn't intelligence (that's becoming a commodity), it's verification. We can generate infinite output. We cannot infinitely check whether that output is correct, safe, or aligned with what we actually wanted.
This formalizes as the "Measurability Gap" (∆m): the widening chasm between what AI agents can execute vs. what humans can afford to verify.
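Read literally, the gap can be written down in one line. The notation here is my own sketch, not the essay's formalism:

```latex
% Assumed notation: E(t) is the volume of output AI agents can execute
% at time t; V(t) is the volume humans can affordably verify.
\Delta m(t) = E(t) - V(t)
% E(t) scales with compute while V(t) scales with scarce expert hours,
% so absent new verification tooling the gap keeps widening:
\frac{d\,\Delta m}{dt} > 0
```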
The uncomfortable punchline: the "human-in-the-loop" model everyone's counting on is structurally unstable. It's being eroded from below (juniors aren't learning because AI took their entry-level work—the "Missing Junior Loop") and from within (experts are training their own replacements by turning their intuition into training data, the "Codifier's Curse"). Left unchecked, we drift into a "Hollow Economy" — explosive nominal output, collapsing actual human oversight.
Market Signal
The bottleneck has shifted. It's no longer about building smarter models. It's about building the trust infrastructure to deploy them safely: verification tooling, observability, cryptographic provenance, liability frameworks. This is where the next wave of defensible companies gets built.
"Software-as-Labor" replaces SaaS. When AI agents do the work (not just provide the tool), the revenue model flips from selling access to selling outcomes. The new moat? Liability-as-a-Service: the ability to insure and underwrite what your AI produces.
Skill-biased → Measurability-biased disruption. Prestigious, highly educated roles get automated if they're measurable (think: junior analysts, radiologists reading scans, code review). The jobs that survive? Long feedback loops, ambiguous objectives, social consensus — venture capital, creative direction, mentorship, status-driven markets.
The "Trojan Horse" externality. Companies will rationally ship unverified AI because the upside is private and the risk is socialized. This is the market failure of the decade.
One person = one startup. The collapse in execution costs means a single person directing AI agents can operate at the scale of a traditional funded team. The barrier shifts from resources to taste, judgment, and verification ability.
Takeaways
Verification is the new moat. Companies that treat oversight as a "compliance tax" will accumulate invisible risk. The winners treat it as a core production technology: verification infrastructure, ground truth data, and adversarial red-teaming become primary competitive advantages.
The junior pipeline is breaking. If AI takes the entry-level work that trains future experts, we're cannibalizing the supply of people qualified to oversee AI. Companies and universities need to build "synthetic apprenticeships": AI-generated practice environments that compress years of experience into months.
Value bifurcates radically. The measurable economy races to the marginal cost of compute. The real rents migrate to (a) verification/liability underwriting, (b) deep tech & long-horizon R&D where ground truth is scarce, and (c) the "status economy" — provenance, human connection, meaning-making. Think: why Hermès will be fine but mid-tier professional services won't.

More Musings From The Team

