
Context:
Google dropped a report that should have everyone in the security and startup ecosystem on high alert: they’re seeing a surge in targeted spear-phishing campaigns, not just from the usual e-crime suspects, but also from nation-state actors like North Korea and Iran. The kicker? These actors are using Gemini—Google’s own LLM—to automate and scale their attacks. This isn’t theoretical. It’s live, in the wild, and it’s making the email security problem a hundred times worse than it was even a year ago.
We’re in the middle of an inflection point. For twenty years, cloud-based security vendors have eaten the lunch of their client-server predecessors. But AI—specifically LLMs and agentic adversaries—breaks the mold. The old guard is facing a fundamental architectural crisis. The status quo is up for grabs. Market cap is in play. And the volume and sophistication of attacks are going vertical.
Market Signal:
AI is not only a tool for defenders; it’s a weapon for attackers. The volume and sophistication of threats are exploding because LLMs let adversaries run highly personalized phishing campaigns at scale, bypassing legacy, one-size-fits-all security models. The old “hot dog, not hot dog” approach—centralized, black-box machine learning classifiers—can’t keep up. It’s too slow, too rigid, and too dependent on the vendor to adapt.
Takeaways:
The traditional, centralized security model is fundamentally broken in the AI era.
Attackers are moving faster, using LLMs to scale and customize attacks in ways that break the old rules.
Defenders have to be just as dynamic, leveraging programmable, transparent, and community-driven approaches.
Open source matters—not just philosophically, but tactically, because it democratizes defense and enables rapid, tailored adaptation.
The future isn’t buying “software” as a static tool; it’s hiring a team of agents that work for you, adapt in real time, and can be managed like high-leverage employees.
