OpenAI launches teen-safe ChatGPT with parental controls amid FTC scrutiny

OpenAI is rolling out a new version of ChatGPT designed specifically for teenagers, marking one of the company’s most significant safety updates to date. OpenAI said on Tuesday it will launch a dedicated ChatGPT experience with parental controls, part of a broader effort to strengthen protections for younger audiences.
Starting this month, users under 18 will be automatically routed into a teen-safe ChatGPT experience that filters out graphic and sexual content, introduces parental controls, and, in severe cases of distress, may involve law enforcement.
“It is extremely important to us, and to society, that the right to privacy in the use of AI is protected. People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that they may be one of the most personally sensitive accounts you’ll ever have,” OpenAI CEO Sam Altman said in a blog post.
The move comes as U.S. regulators turn up the heat on how AI platforms affect younger users. As we reported last week, the Federal Trade Commission (FTC) opened an inquiry into OpenAI and several other tech companies, asking how they’ve assessed the risks of chatbots acting as “companions” for kids and teens. The timing isn’t accidental. Just weeks ago, OpenAI was named in a lawsuit after a family alleged that ChatGPT contributed to their teenage son’s death by suicide.
Addressing the issue head-on, Altman added: “We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection.”
To make that protection concrete, OpenAI is introducing parental controls that let adults link their accounts with their teens’ via email, set blackout hours, disable certain features, and receive alerts if the system detects signs of acute distress. Parents will also be able to influence how ChatGPT responds to their teens’ questions. Those features are expected to be available by the end of the month.
The company is working on technology to better predict a user’s age, but when the system can’t determine it with confidence, it will default to the teen-safe experience. ChatGPT itself is officially limited to users 13 and older.
OpenAI has been previewing its safety roadmap for months. Back in August, the company said it would introduce parental tools to give families more visibility into how teens use the chatbot. Last month, it outlined how ChatGPT would handle “sensitive situations,” a disclosure that followed mounting public concern about AI’s influence on mental health.
“This is what we think is best and want to be transparent in our intentions,” Altman wrote, acknowledging the trade-offs involved. For OpenAI, the stakes are clear: regulators, parents, and the public are all watching closely to see if AI companies can make their products safe for the youngest users before more harm is done.