AI bots now dominate the Internet, surpassing human traffic for the first time
The internet was built on a simple premise: there’s a human on the other side of the screen. That assumption is breaking down faster than anyone expected.
Now, a new State of AI Traffic report from Human Security says automated systems have crossed a line. Bots and AI-driven activity now outweigh human traffic across large parts of the web, marking a structural shift in how the internet works.
“The internet as a whole was created with this very basic notion that there’s a human being on the other side of the computer screen, and that notion is very rapidly being replaced,” Stu Solomon, CEO of Human Security, told CNBC.
Key Findings from the Report
- Automated traffic grew 8x faster than human traffic year over year
- AI-driven traffic jumped 187% in 2025, nearly tripling
- Agentic AI traffic surged 7,851%
- Over 95% of AI traffic is concentrated in retail, media, and travel
- Nearly 1 in 5 visits is now a scraping attempt
- Account takeover attempts quadrupled, averaging 402,000 per organization
The numbers behind that shift are hard to ignore. Automated traffic grew nearly eight times faster than human activity in 2025. AI-driven traffic alone surged 187% over the year, the report finds, with monthly volume nearly tripling, driven by the widespread use of systems such as ChatGPT, Claude, and Gemini.
From Humans to Machines: AI Bots Are Now the Internet’s Largest Users
Most of that traffic still comes from training crawlers, which account for about 67.5% of AI activity. That share is slipping as newer forms of automation take over. AI scrapers jumped 597% last year. Agentic AI, software that doesn’t just read the web but acts on it, exploded by 7,851%.
That last category may be the most important. These systems don’t just collect information. They browse products, log into accounts, and complete transactions on behalf of users.
The report also highlights the sharp rise of agentic systems, a newer class of AI that acts on behalf of users rather than just retrieving information. Tools like OpenClaw are part of that shift, completing tasks across websites without direct human input. Activity from these agents was barely measurable in 2024. By 2025, it had surged nearly 8,000%, according to Human Security, signaling how quickly the web is moving from passive consumption to autonomous execution.
In 2025, more than three-quarters of agent-driven activity happened on product and search pages. Smaller portions showed up in account areas, authentication flows, and even checkout pages. The behavior resembles a human moving through a site. The difference is that no person is actually clicking.
That creates a new kind of internet economy. Businesses that allow these agents to interact with their platforms can capture demand that others never see. At the same time, it opens a new attack surface.
The same paths used by AI agents (product discovery, account access, and checkout) are the exact paths targeted by fraud operations. The report found that nearly one in five site visits in 2025 were scraping attempts. Account takeover attempts more than quadrupled, averaging over 400,000 per organization. Carding attacks have climbed 250% over the past few years.
The harder problem isn't volume. It's intent.
An AI agent scanning products and completing a purchase could be a shopper’s assistant. It could just as easily be an automated fraud script. The behavior is nearly identical.
“This notion of machine bad, human good just is not realistic,” Solomon said. “You have to live in a world where machines are acting on our behalf, and we have to establish a level of trust that’s persistent over time.”
That margin is razor thin. Across the data analyzed by Human Security's platform, the behavioral difference between benign and malicious automation was about 0.5%. The old model of labeling traffic as "bot or human" no longer holds.
The shift isn’t happening evenly across the web. AI traffic is concentrated in a few high-value sectors. Retail and e-commerce, streaming and media, and travel and hospitality collectively accounted for more than 95% of AI-driven traffic in 2025. These are the places where fresh, structured data fuels AI products and where users expect fast answers and actions.
Power is concentrated as well. OpenAI generated roughly 69% of observed AI bot traffic. Meta followed with about 16%, and Anthropic accounted for around 11%. The rest of the field barely registers.
Not everyone is convinced the measurements capture the full picture. Filippo Menczer, a professor of Informatics and Computer Science at Indiana University, pointed out that tracking automated traffic across the internet is inherently messy.
“You can try to estimate the amount of bot traffic by looking at the agent strings, but these are very noisy estimates,” Menczer said. “They depend on what sample you get. They are depending on where you’re getting the data, where the measurements are coming from.”
Human Security acknowledges those limitations. Its analysis is based on data from its Human Defense Platform, which processed more than one quadrillion interactions. The report relies in part on user-agent strings, a method that becomes less reliable as actors disguise their identity.
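The user-agent method Menczer critiques can be sketched in a few lines. This is a minimal, hypothetical illustration: the pattern list is a small illustrative sample, not a complete inventory of AI crawlers, and as the report notes, any actor can spoof or omit its User-Agent, which is exactly why such estimates are noisy.

```python
import re

# Illustrative sample of User-Agent tokens used by some known AI crawlers.
# Real-world lists are longer, change often, and miss anything that spoofs
# or blanks its User-Agent, so counts from this method are rough estimates.
AI_BOT_PATTERNS = [
    r"GPTBot",          # OpenAI's training crawler
    r"ClaudeBot",       # Anthropic's crawler
    r"PerplexityBot",   # Perplexity's crawler
    r"CCBot",           # Common Crawl
]

def classify_user_agent(ua: str) -> str:
    """Return 'ai_bot' if the User-Agent matches a known AI crawler
    pattern, else 'other'. Spoofed or empty UAs pass as 'other'."""
    if any(re.search(p, ua, re.IGNORECASE) for p in AI_BOT_PATTERNS):
        return "ai_bot"
    return "other"

# Toy request log (hypothetical UA strings, not real traffic data)
requests = [
    "Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0 Safari/537.36",
    "Mozilla/5.0 (compatible; ClaudeBot/1.0)",
]
ai_share = sum(classify_user_agent(ua) == "ai_bot" for ua in requests) / len(requests)
print(f"Estimated AI-bot share of sample: {ai_share:.0%}")
```

The estimate depends entirely on where the sample comes from and which patterns you match, which is Menczer's point: two observers with different vantage points will report different numbers.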
Still, the direction is clear.
At SXSW last week in Austin, Cloudflare CEO Matthew Prince said bot traffic made up about 20% of the internet before generative AI took off, driven largely by search engine crawlers. He expects AI systems to pass human traffic entirely by 2027, citing the demand for data from generative models.
What’s changing now goes beyond scale. The internet is moving from something humans browse to something machines act on. AI agents are becoming participants, making decisions, executing tasks, and completing transactions in real time.
That shift forces a new question for every company online: not whether a visitor is human, but whether the action can be trusted.
The agentic internet has arrived. The systems that define trust are trying to catch up.