Human.org, a startup building a platform to prove you’re human and not AI, raises $7.3M in pre-seed funding
![](https://techstartups.com/wp-content/uploads/2025/02/Human.org-Founder-Kirill-Avery.jpg)
OpenAI made waves with the launch of ChatGPT, introducing the world to generative AI capable of holding natural, human-like conversations. But we’re now seeing a shift from simple chatbots to more complex AI agents—systems designed to perform tasks, make decisions, and even act on behalf of humans with minimal supervision.
These agents are moving beyond answering questions; they’re managing schedules, handling customer service, and even trading stocks. As AI agents become more autonomous and integrated into our daily lives, a critical question arises: how do we know who’s behind the screen—a person or a machine?
Yet as AI agents inch closer to matching human intelligence, a glaring problem remains: there's no clear way to verify whether an online actor is a real person or an AI acting on someone's behalf, or to confirm who that person actually is.
AI Alignment: The Trust Problem Between Humans and AI
AI Alignment (AIA) is the defining techno-social challenge of our era, and Human.org is stepping in with a solution: infrastructure to keep AI systems in check and aligned with human values. Its protocol operates on a layer-1 blockchain, creating a public, verifiable identity system for both humans and AI agents. This ensures transparency, accountability, and, most importantly, that humans stay in control.
Investors see the urgency, and Human.org has just raised $7.3 million in pre-seed funding from backers like HF0, Soma Capital, Spearhead, Pioneer Fund, Hummingbird VC, and notable angels like Val Vavilov (Bitfury), James Tamplin (HF0), and Sheridan Clayborne (Lendtable).
Founded in 2023 by 23-year-old serial entrepreneur Kirill Avery, Human.org is the first AI safety lab focused on solving the AI alignment problem through decentralized trust infrastructure. The company plans to roll out its protocol by Q2 2025.
“We started the world’s first product-based AI safety lab with one goal: solving AIA to fix the trust crisis and enable AI and humans to coexist together,” Human.org said on its website.
Kirill’s entrepreneurial journey started young. He began building apps at 11, immigrated to the U.S. at 17 to join Y Combinator as one of its youngest solo founders, and later recognized the critical need for AI accountability while at HF0, a top AI startup residency in San Francisco.
For the past two years, Human.org has been developing a blockchain and identity protocol that guarantees transparent and secure interactions between verified humans and AI agents, all without government or corporate oversight. With backing from Pioneer Fund and leading AI and crypto investors, Human.org is laying the groundwork for a future where AI remains under human control. More details can be found at human.org.
The Growing Threat: AI-Powered Misinformation and Distrust
Even before generative AI took off, the internet was awash with misinformation, bots, and anonymous accounts. Now, with AI agents approaching human-like communication, the risks are escalating. Bad actors could deploy AI at scale for market manipulation, financial fraud, and widespread misinformation campaigns that could undermine democracies and destabilize governments.
Right now, there’s no universal way to verify if an AI agent represents a real person, nor a reliable method to hold these systems accountable. As AI-generated content continues to flood the internet, the stakes are rising for economies, democracies, and personal interactions.
“Having grown up in a society where you couldn’t trust what you saw online, I wanted to create Human to ensure human identity and expression remain protected in the era of AI,” said Kirill Avery, Founder and CEO of Human. “If we don’t tackle these problems now, we risk losing control over ourselves and our society.”
Human.org’s Solution: A Trust Layer for the Internet
Human.org is building the internet’s trust infrastructure to make sure AI systems stay accountable. Unlike government or corporate-controlled solutions, Human’s blockchain protocol gives individuals control over their digital identity while maintaining privacy.
“AI agents need identity and authority, and Human is solving that with technology that keeps everything transparent and secure,” said Eric Norman, General Partner at Pioneer Fund. “Kirill has one of the biggest visions of any founder I know and genuinely wants to create something that helps people work better together.”
The protocol is built around five key technologies:

- **Human Network**, a blockchain designed to facilitate secure interactions between verified humans and AI agents.
- **Human ID**, a cryptographically secure system that ensures real human identities can be verified with confidence.
- **Agent ID**, which brings accountability to AI systems by making it possible to trace AI agents back to their human creators.
- **Humancoin**, a digital currency distributed to verified users, incentivizing trustworthy interactions.
- **Human App**, a user-friendly interface that lets individuals manage their identities, conduct transactions, log in securely, and interact with AI agents in a controlled environment.
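To make the Agent ID idea concrete, here is a minimal toy sketch of what "tracing an AI agent back to its human creator" could look like. Everything below is hypothetical: the class and method names (`Registry`, `register_human`, `register_agent`, `trace`) are illustrative inventions, not part of Human.org's actual protocol, and a simple hash stands in for real cryptographic identity verification.

```python
# Hypothetical sketch of a human/agent identity registry.
# None of these names come from Human.org's protocol; they only
# illustrate the concept of agents being traceable to verified humans.
import hashlib
from dataclasses import dataclass, field


def _hash(*parts: str) -> str:
    """Derive a stable identifier from its components (stand-in for real crypto)."""
    return hashlib.sha256("|".join(parts).encode()).hexdigest()


@dataclass
class Registry:
    humans: dict = field(default_factory=dict)  # human_id -> display name
    agents: dict = field(default_factory=dict)  # agent_id -> human_id

    def register_human(self, name: str, proof: str) -> str:
        # A real system would verify a cryptographic credential here;
        # this sketch just hashes the claimed proof.
        human_id = _hash("human", name, proof)
        self.humans[human_id] = name
        return human_id

    def register_agent(self, human_id: str, agent_name: str) -> str:
        # Agents can only be created by a known, verified human.
        if human_id not in self.humans:
            raise ValueError("unknown human_id: agents must be traceable")
        agent_id = _hash("agent", human_id, agent_name)
        self.agents[agent_id] = human_id
        return agent_id

    def trace(self, agent_id: str) -> str:
        """Return the name of the verified human behind an agent."""
        return self.humans[self.agents[agent_id]]
```

The key design point this sketch captures is that an agent identity cannot exist without a verified human identity behind it, so accountability flows back to a person by construction.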