General Analysis raises $10M in seed funding to secure agentic AI against real-world attacks
AI agents are moving fast inside real companies. In some cases, faster than the teams responsible for keeping them in check.
In March, a single adversarial agent persuaded 50 live customer service bots to hand out more than $10 million in fake perks. Million-dollar gift cards. Years of free services. Anything it could extract. Each target took only a few minutes. Out of 55 systems tested, just five held the line.
That experiment didn’t happen in a lab for show. It came from General Analysis, a San Francisco startup that just raised $10 million in seed funding led by Altos Ventures, with backing from 645 Ventures, Menlo Ventures, Y Combinator, and a group of early investors. The company is going after a problem that’s starting to worry enterprise security teams: AI agents that don’t behave like traditional software and can’t be secured the same way.
General Analysis was founded by Rez Havaei, who previously worked at Cohere and NVIDIA, alongside Maximilian Li from Harvard University and Rex Liu from California Institute of Technology. Their pitch is simple: securing AI agents is an entirely different discipline, one that requires new tools and new ways of thinking.
That gap is starting to show up across industries. Companies are pushing agents into customer support, finance, and internal operations, where decisions carry real consequences. The upside is clear, and delaying deployment rarely feels like an option. The risk is less obvious until something breaks.
Traditional security methods rely on predictable systems. Engineers can read code, trace behavior, and reason about outcomes. Agentic systems don’t follow that script. They take in new inputs, generate responses, and act in ways that shift from one moment to the next. That makes failure harder to spot before it happens.
The team at General Analysis has been testing those limits. In earlier work, the researchers showed how a widely used integration in Cursor could be manipulated through a single malicious support ticket. The result: an internal agent could be tricked into exposing a full private database. The finding caught the attention of Simon Willison, who described it as a case of the “lethal trifecta” — a system that holds sensitive data, ingests untrusted input, and can communicate outward.
Security teams are feeling the pressure. Lock agents down too tightly, and they stop being useful. Open them up and the risk becomes hard to measure.
“We hear from security teams that they want agents that are secure by design,” said Havaei. “What that often turns into in practice is a stack of isolation layers and ad hoc context restrictions that makes a system feel more controlled. Those measures either fail to eliminate the underlying vulnerability or constrain the agent enough to limit its usefulness. The problem is that feeling safer and being safer are not the same thing.”
General Analysis takes a different route. The company treats AI security as something that has to be tested in the open, under pressure. Instead of trying to prove a system is safe, it measures how often it fails and how severe those failures are.
“Our position is that security for AI systems is an empirical problem. It has to be grounded in rigorous measurement of how those systems behave under realistic and adversarial conditions. You cannot prove an agent is safe,” said Li. “You can only measure how often it fails, and how badly, and drive both numbers down.”
When AI agents start acting on their own, security breaks in new ways
That philosophy shapes how the company works with customers. It runs adversarial simulations against live systems, identifies where things break, and helps teams decide which defenses actually reduce risk without crippling performance. There’s no single configuration that works everywhere. Each system comes with trade-offs.
“One advantage of agents is that they are much easier to study systematically than the human workflows they are beginning to replace,” said Liu. “Many of those workflows were never especially secure to begin with, and their failures are often hard to observe or improve rigorously. But as those workflows become agentic, they also become more measurable and more improvable — which creates a path for businesses to become more secure in practice than they were before.”
Investors see the timing as critical. “Agentic systems represent a paradigm shift in security. Safety and security in the AI era demand continuous adversarial testing rooted in deep research, not static rule sets,” said Tae Yoon of Altos Ventures. “Rez, Rex, and Max are exactly the kind of team this moment calls for: technically brilliant, deeply scrappy, and moving incredibly fast.”
General Analysis is already working with enterprise customers whose systems reach hundreds of millions of users. As more companies hand off decisions to AI agents, the question is starting to shift. It’s no longer a question of whether these systems will be deployed. It’s how much risk organizations are willing to accept before they know how those systems behave under pressure.

