Lyrie.ai launches real-time zero-day tracking as AI agents emerge as the next cybersecurity battleground
For years, cybersecurity teams worried about phishing emails, ransomware gangs, and cloud breaches. Now a new concern is coming into focus: autonomous AI agents that can read email, write code, execute commands, move money, and interact with other systems without human approval at every step.
That shift is creating a new attack surface across the internet, and startups are racing to build the security infrastructure around it.
Lyrie.ai, the security platform developed by OTT Cybersecurity LLC, said Tuesday it has deployed a real-time zero-day tracking and disclosure system built to identify vulnerabilities before public exploits spiral into large-scale breaches. The company also announced acceptance into Anthropic’s Cyber Verification Program and released a new open cryptographic framework called the Agent Trust Protocol, or ATP, aimed at securing autonomous AI agents operating online.
Why AI agents are creating a new security blind spot
The announcement arrives as companies push AI agents deeper into enterprise workflows. Large language models are no longer limited to answering prompts inside chat windows. They are beginning to interact directly with APIs, internal systems, software repositories, financial tools, and customer data. That transition has created a growing concern within the cybersecurity industry: no one has yet fully solved the problem of verifying whether an AI agent is legitimate, authorized, or compromised.
Lyrie says its platform was built around that problem.
The company’s threat intelligence engine continuously monitors infrastructure, open-source repositories, APIs, and agent communication channels for signs of emerging vulnerabilities. Once a zero-day exploit is confirmed, the platform generates disclosure packages that include technical analysis, remediation guidance, and proof-of-concept details for affected organizations.
The race to stop zero-day attacks before they spread
Lyrie said that in several verified incidents, affected organizations received remediation support within hours of discovery, before public disclosure.
“The difference between a breach and a near-miss is usually measured in hours. We built Lyrie to be the system that finds the threat before it finds you — and tells you exactly what to do about it,” said Guy Sheetrit, CEO and Founder of OTT Cybersecurity LLC, the company behind Lyrie.ai.
The startup is positioning itself less as a traditional cybersecurity vendor and more as a foundational trust layer for autonomous AI systems.
That positioning became clearer with the release of the Agent Trust Protocol.
The protocol attempts to solve a growing problem across agentic AI systems: identity verification. As AI agents begin interacting with each other online, organizations need a way to verify who deployed the agent, what permissions it has, whether its instructions were altered, and whether its access has been revoked.
Lyrie.ai Real-Time Zero-Day Tracking (Credit: Lyrie.Ai)
According to Lyrie, ATP enables systems to validate an AI agent’s identity, scope, delegation authority, attestation status, and revocation history in real time.
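The kind of check ATP describes can be illustrated as a signed-attestation verification. The sketch below is a minimal assumption, not the actual protocol: the token layout, field names, HMAC signing, and the in-memory revocation list are all hypothetical, chosen only to show how identity, scope, delegation, tamper detection, and revocation might be validated in one pass.

```python
import hmac
import hashlib
import json
import time

# Hypothetical revocation list; a real system would query a registry.
REVOKED_AGENTS = {"agent-042"}

def verify_agent(token: dict, secret: bytes) -> bool:
    """Illustrative ATP-style check: attestation (signature), revocation,
    delegation expiry, and permission scope, in that order."""
    claims = token["claims"]
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False  # claims or instructions were altered after signing
    if claims["agent_id"] in REVOKED_AGENTS:
        return False  # the agent's access has been revoked
    if claims["expires"] < time.time():
        return False  # the delegation window has lapsed
    return "read:email" in claims["scope"]  # permission-scope check

# Example: a deployer signs the agent's claims with a shared key.
secret = b"shared-deployment-key"  # hypothetical key material
claims = {
    "agent_id": "agent-007",
    "scope": ["read:email"],
    "delegated_by": "ops@example.com",
    "expires": time.time() + 3600,
}
sig = hmac.new(secret, json.dumps(claims, sort_keys=True).encode(),
               hashlib.sha256).hexdigest()
print(verify_agent({"claims": claims, "signature": sig}, secret))  # True
```

A production protocol would use public-key signatures and an externally queryable revocation service rather than a shared secret and a local set, but the ordering of checks (tamper first, then revocation, then expiry, then scope) is the part the sketch is meant to convey.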
“Every AI agent on the internet today is a stranger. You don’t know who it is, what it’s authorized to do, or whether it’s been tampered with. ATP is the protocol that changes that,” Sheetrit added.
The company said the protocol is open and royalty-free, with plans to submit it to the Internet Engineering Task Force, the standards body responsible for major internet protocols. A reference implementation has already been released under an MIT license on GitHub.
Lyrie’s broader platform combines offensive and defensive security tooling into a single system. Its products include autonomous penetration-testing workflows, GPU-accelerated red-teaming infrastructure, binary-vulnerability research tools, and threat coverage mapped to the OWASP Agentic Security Initiative catalog.
Lyrie joins Anthropic’s Cyber Verification Program
The startup’s acceptance into Anthropic’s Cyber Verification Program also places it in a small group of organizations approved to conduct certain dual-use cybersecurity activities on Claude’s AI infrastructure, in accordance with Anthropic’s safety policies.
“Being among the first companies accepted into Anthropic’s Cyber Verification Program validates what we’ve built. Lyrie isn’t a security tool that sits alongside AI. It’s the security layer that AI runs on top of,” Sheetrit said.
The rise of autonomous AI agents is prompting a broader rethink across cybersecurity. The old model assumed humans sat between software systems and high-risk actions. Agentic AI changes that assumption.
Security firms are now racing to answer a difficult question: what happens when software begins acting independently at internet scale?
