FTC targets AI chatbots: Alphabet, Meta, OpenAI, others under investigation over safety risks

FTC Investigates Generative AI Companies: Alphabet, Meta, OpenAI Probed Over Safety and Monetization
The U.S. Federal Trade Commission (FTC) is turning up the heat on the companies behind the most widely used AI chatbots. On Thursday, the agency confirmed it has launched an inquiry into Alphabet, Meta, OpenAI, and several others, demanding details on how they test, monitor, and measure the risks of consumer-facing generative AI systems.
“The Federal Trade Commission is issuing orders to seven companies that provide consumer-facing AI-powered chatbots seeking information on how these firms measure, test, and monitor potentially negative impacts of this technology on children and teens,” the agency said in a press release.
The inquiry names some of the biggest players in consumer AI. Recipients include Alphabet, Meta and its subsidiary Instagram, OpenAI, Character Technologies (the company behind Character.AI), Snap, and Elon Musk’s xAI.
The focus isn’t just on safety. Regulators also want to know how these companies make money from user engagement, how they process user inputs, and how they generate outputs. The agency is particularly interested in whether data collected through chatbot conversations is being leveraged in ways that could harm consumers.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” said FTC Chairman Andrew N. Ferguson. “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry. The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
Scrutiny of the sector has intensified in recent weeks. A Reuters investigation revealed internal Meta policies that allowed chatbots to have romantic conversations with children. Not long after, OpenAI was sued by the family of a teenager who died by suicide, with claims that ChatGPT played a role in the tragedy. Character.AI is facing a similar lawsuit tied to another teen’s death.
In a statement, a Character.AI spokesperson said the company looks forward to “providing insight on the consumer AI industry and the space’s rapidly evolving technology,” noting it has introduced several safety features over the past year. Snap responded in a similar tone, saying, “we share the FTC’s focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community.”
The FTC’s request for information marks one of the most direct challenges yet to the companies building generative AI tools used by millions. The outcome of this probe could shape how the next wave of consumer chatbots is built, tested, and monetized in the United States.