AI toys leak 50,000 children’s conversations. They’re still on the market
Artificial intelligence is quickly moving from offices and smartphones into one of the most sensitive spaces imaginable: children’s bedrooms.
AI-powered toys are now being marketed to kids as young as three, often equipped with microphones, cloud connectivity, and the ability to store conversations to personalize interactions. As adoption grows, so do the risks. In the past year alone, multiple incidents have exposed thousands of children’s private interactions due to security lapses and data-handling failures, raising fresh alarms among researchers and policymakers.
At the same time, many of the AI models inside these consumer products come from companies that also serve enterprise and government clients, a reminder of how widely the same technology, and the data flowing through it, now spans the modern digital ecosystem.
A stuffed dinosaur designed to be a child’s always-listening companion has become the latest warning sign in the rush to bring AI into the playroom.
Security researchers recently uncovered a major privacy lapse involving Bondu, the company behind a $199 AI-powered plush toy that talks with children like an imaginary friend. Their finding was unsettling: tens of thousands of private conversations between kids and the toy were sitting behind a web portal that required little more than a standard Gmail login to access.
The company says the issue is now fixed. But the episode raises broader questions about whether the fast-growing market for AI companions is outpacing the safeguards meant to protect children.
A curious neighbor, a quick check, and a troubling discovery
The chain of events started simply.
Earlier this year, security researcher Joseph Thacker was chatting with a neighbor who had preordered Bondu toys for her children. She knew Thacker had studied AI risks affecting kids and wanted his take on the product.
What began as a casual look turned into something far more serious.
Within minutes, Thacker and fellow researcher Joel Margolis found that Bondu’s web-based console, intended for parents and internal monitoring, was effectively open to anyone with a Google account. No sophisticated intrusion was required. No special access. Just a login.
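Bondu has not published technical details, but the behavior the researchers describe, where any valid Google sign-in unlocked the console, matches a well-known class of bug: authentication without authorization. The sketch below is purely illustrative; every name and helper is invented for the example, and it is not Bondu's code.

```python
# Illustrative sketch of the reported failure class: the console
# checks that a visitor is signed in to Google, but never checks
# whether that account is allowed to see the data. All names here
# are hypothetical; Bondu's actual implementation is not public.

ALLOWED_ACCOUNTS = {"staff@toymaker.example"}

def verify_google_sign_in(id_token: str) -> dict:
    """Stand-in for real ID-token verification; returns the
    signed-in user's claims."""
    return {"email": id_token}  # toy stub, sufficient for the sketch

def list_transcripts_insecure(id_token: str, transcripts: list) -> list:
    verify_google_sign_in(id_token)
    # Bug shape: any authenticated account gets everything.
    return transcripts

def list_transcripts_fixed(id_token: str, transcripts: list) -> list:
    claims = verify_google_sign_in(id_token)
    # The missing step: authorize the identity before releasing data.
    if claims["email"] not in ALLOWED_ACCOUNTS:
        raise PermissionError("authenticated, but not authorized")
    return transcripts
```

In practice, the "stronger authentication controls" Bondu later added would amount to that second check: deciding who may see the records, not merely who is signed in.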
According to reporting from WIRED, analysis from Malwarebytes, and findings from NBC News, the researchers were able to view what appeared to be nearly the complete archive of conversations between children and their AI dinosaur toys.
What they saw went well beyond harmless chatter.
What the toys were quietly collecting

The exposed records painted intimate portraits of young users. The database included children’s names, birth dates, family member details, personal preferences, and detailed summaries of conversations meant to feel private and one-on-one.
“In total, Margolis and Thacker discovered that the data Bondu left unprotected—accessible to anyone who logged in to the company’s public-facing web console with their Google username—included children’s names, birth dates, family member names, ‘objectives’ for the child chosen by a parent, and most disturbingly, detailed summaries and transcripts of every previous chat between the child and their Bondu, a toy practically designed to elicit intimate one-on-one conversation,” WIRED reported.
Bondu later confirmed that more than 50,000 chat transcripts were accessible through the portal, representing nearly all historical conversations except those manually deleted.
“Bondu’s safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from Bondu throughout the entire beta period,” Bondu said in a statement that made no mention of security or privacy.
For Thacker, the moment was jarring.
“It felt pretty intrusive and really weird to know these things,” he told WIRED. “Being able to see all these conversations was a massive violation of children’s privacy.”
The toy itself is explicitly designed to encourage openness. Marketed as an AI friend, Bondu prompts children to share thoughts, feelings, favorite activities, and daily experiences. That design, while engaging for kids, also creates unusually rich behavioral data.
Bondu responds quickly
After the researchers alerted the company, Bondu moved fast. The vulnerable console was taken offline within minutes and later restored with stronger authentication controls.
CEO Fateen Anam Rafid said the security fixes were completed within hours, followed by a broader review and additional protections. The company also said it found no evidence that anyone beyond the researchers accessed the data.
The researchers themselves reported that they did not download or retain sensitive material beyond limited proof shared with journalists.
Even so, the incident quickly drew attention in Washington. US Senator Maggie Hassan sent a letter to Bondu describing the exposure as “devastating” and demanding detailed answers about the company’s data practices.
The deeper risk behind AI companions
While the specific vulnerability appears to be resolved, researchers say the episode exposes a broader structural concern with AI toys.
Unlike traditional connected gadgets, AI companions are built to encourage continuous, emotionally open conversation. That dynamic naturally produces highly sensitive data streams, often involving young children who may not understand the implications of what they share.
Bondu’s system stored written transcripts of every interaction to help personalize future conversations. The company said audio recordings were automatically deleted after a short time, but the text histories remained.
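Bondu's storage internals are not public, but the retention split described above, short-lived audio against long-lived text, can be pictured as a simple lifecycle rule. The sketch below is an assumption-laden illustration; the field names and the 24-hour window are invented for the example.

```python
# Illustrative retention split matching the reported behavior:
# audio expires on a short timer, transcripts persist until a
# user deletes them. The 24-hour TTL is an assumed placeholder.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

AUDIO_TTL = timedelta(hours=24)  # assumed stand-in for "a short time"

@dataclass
class Interaction:
    transcript: str      # kept indefinitely
    audio: bytes | None  # purged once past the TTL
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def purge_expired_audio(history: list[Interaction]) -> None:
    """Drop audio past its TTL. Transcripts are untouched, which is
    why text histories can accumulate for a toy's whole lifetime."""
    cutoff = datetime.now(timezone.utc) - AUDIO_TTL
    for item in history:
        if item.audio is not None and item.recorded_at < cutoff:
            item.audio = None
```

The asymmetry is the point: deleting the recordings reduces one kind of exposure while the more searchable artifact, the transcript, keeps growing.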
Margolis warned that the depth of information could be dangerous if mishandled.
“To be blunt, this is a kidnapper’s dream,” he said, pointing to the level of personal detail available in the logs.
Third-party AI and the expanding data surface
Bondu also acknowledged using enterprise AI services from outside providers to generate responses and run safety checks. That means portions of children's conversations leave the company's own systems for processing.
The company said it minimizes the data shared and operates under enterprise agreements stating prompts and outputs are not used to train external models. Even so, the architecture highlights how many parties may touch sensitive data once AI systems enter the loop.
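The company has not said how it minimizes what it shares, but a common pattern in this kind of pipeline is to strip obvious identifiers from a message before it leaves for an outside provider. The sketch below is hypothetical; the patterns and placeholder tokens are invented for illustration and are far from exhaustive.

```python
# Hypothetical minimization step: redact known names and common
# date formats from a child's message before sending it upstream.
# Real systems need far more than two regexes; this shows the idea.
import re

KNOWN_NAMES = {"Ava", "Noah"}  # e.g. names from the family profile

def redact_for_upstream(message: str) -> str:
    for name in KNOWN_NAMES:
        message = re.sub(rf"\b{re.escape(name)}\b", "[NAME]", message)
    # Numeric dates such as birthdays (04/12/2019, 4/12/19, ...).
    message = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", message)
    return message

print(redact_for_upstream("Ava's birthday is 04/12/2019"))
# -> [NAME]'s birthday is [DATE]
```

Even with redaction, the conversational context itself can be identifying, which is why contractual limits, such as the no-training clauses Bondu cites, matter alongside the technical ones.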
Researchers say the key question is no longer whether companies intend to protect children’s data. It is whether the systems being built are mature enough to do so consistently.
Margolis noted that even with the portal secured, risks remain if internal access controls fail or employee credentials are compromised. In complex AI pipelines, a single weak point can reopen exposure.
A fast-growing market, still catching up on safeguards
Warnings about AI toys have been growing over the past year, though most public concern has focused on chatbot behavior, such as inappropriate responses or unsafe suggestions.
This case shifts attention to infrastructure risk. The problem was not what the toy said. It was how much it remembered and how easily that memory could be reached.
Thacker suspects the vulnerable console may have been created using generative AI coding tools, sometimes called “vibe coding,” which can introduce security gaps if not carefully reviewed. Bondu did not confirm whether AI tools were used in the development process.
For Thacker personally, the experience was enough to change his mind about bringing AI toys into his own home.
Before the discovery, he had considered buying one.
Now, he says, the answer is clear.
“Do I really want this in my house? No, I don’t,” he said. “It’s kind of just a privacy nightmare.”
The bottom line
Bondu appears to have addressed the immediate flaw quickly. But the incident lands at a moment when AI companions are moving from novelty to mainstream consumer products, often aimed at the youngest and most vulnerable users.
As companies race to build more lifelike digital companions, the Bondu episode is a reminder that conversational intelligence brings with it something less visible but equally powerful: massive, deeply personal data trails.
And in the AI toy boom, the question is no longer just what these products can say.
It is what they quietly remember.