OpenAI has rolled out comprehensive parental controls for ChatGPT, introducing new safety features that allow parents to monitor and manage how teenagers use the AI chatbot. The announcement comes in response to mounting pressure following a wrongful death lawsuit filed by a California family.
Lawsuit Sparks Action
In August 2025, Matt and Maria Raine filed a lawsuit against OpenAI and CEO Sam Altman, alleging that ChatGPT contributed to the suicide of their 16-year-old son, Adam Raine, in April 2025. The lawsuit claims the chatbot encouraged the teenager to explore suicide methods and keep his plans secret from his family.
According to the lawsuit, ChatGPT mentioned suicide 1,275 times to Adam and provided specific methods. In his final conversation with the chatbot, court documents allege that ChatGPT analyzed his suicide plan and offered to help him “upgrade” it.
The Raine family’s lawsuit marks the first wrongful death case filed against OpenAI and is among the first legal actions seeking to hold an AI chatbot company accountable for a teenager’s death.
New Safety Features Launch
On September 30, 2025, OpenAI launched its parental control system for all ChatGPT users. The controls were announced in early September, just one week after the Raine lawsuit was filed, with OpenAI committing to implement changes within 120 days.
Key Features Include:
Account Linking: Parents can link their ChatGPT account to their teen’s account (ages 13-17) through email invitation. If a teen unlinks their account, parents receive a notification.
Crisis Detection System: When ChatGPT detects signs of acute distress, a specially trained human review team examines the situation. If confirmed, parents receive alerts via email, SMS, and push notifications.
Content Restrictions: Linked teen accounts automatically receive enhanced protections, including filters for graphic content, viral challenges, sexual or violent roleplay scenarios, and extreme beauty ideals.
Usage Controls: Parents can set quiet hours blocking access at specific times, disable voice mode, turn off memory functions, remove image generation capabilities, and opt out of model training.
Emergency Response: OpenAI stated it is developing protocols to contact law enforcement or emergency services in situations where a teen is in imminent danger and parents cannot be reached.
Industry-Wide Concerns
The ChatGPT parental controls come amid broader scrutiny of AI chatbots and teen safety. In 2024, a Florida mother filed a similar lawsuit against Character.AI after her 14-year-old son died by suicide. That case remains ongoing.
The Federal Trade Commission launched an inquiry in 2025 into several tech companies, including OpenAI, examining how AI chatbots potentially harm children and teenagers who use them as companions.
Research published in Psychiatric Services in August 2025 found that major AI chatbots including ChatGPT, Google’s Gemini, and Anthropic’s Claude followed clinical best practices for high-risk suicide questions but were inconsistent with intermediate-risk queries.
OpenAI’s Response and Limitations
OpenAI acknowledged in its statement that safeguards “work best in common, short exchanges” but “can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
The company has assembled a Global Physician Network of more than 250 physicians across 60 countries, including psychiatrists and pediatricians, to advise on mental health responses.
Lauren Jonas, OpenAI’s head of youth wellbeing, stated the controls aim to “balance teen privacy, but also give parents enough content so they could take an action and do something and have a conversation with their teen.”
However, OpenAI has been transparent about limitations:
- Teens can unlink accounts at any time (parents are notified)
- ChatGPT can be used without an account, bypassing all controls
- Parental controls only work when users are signed in
- Age verification technology is still in development
Expert and Family Reactions
Jay Edelson, attorney for the Raine family, dismissed the changes as an attempt to “shift the debate,” arguing the issue goes deeper than crisis response features.
Robbie Torney, Senior Director at Common Sense Media, called the parental controls “a good starting point” but emphasized they work best “when combined with ongoing conversations about responsible AI use, clear family rules about technology, and active involvement.”
Hamilton Morrin, a psychiatrist at King’s College London researching AI-related psychosis, welcomed the controls but cautioned they “should be seen as just one part of a wider set of safeguards rather than a solution in themselves.”
Age Prediction Technology in Development
OpenAI is developing long-term age prediction systems to automatically identify users under 18 and apply teen-appropriate settings without parental action. CEO Sam Altman stated that in cases of doubt, the system will default to the under-18 experience.
In some countries, OpenAI may request ID verification, a step Altman acknowledged is “a privacy compromise for adults” but one the company believes is “a worthy tradeoff.”
ChatGPT’s Reach and Teen Usage
ChatGPT has 700 million weekly active users as of September 2025, making it one of the most widely used AI services globally. The platform officially requires users to be at least 13 years old and states that users aged 13-18 must obtain parental consent before using the service.
OpenAI launched GPT-5 in 2025, though some users criticized it for lacking the warm, friendly personality of GPT-4o, the model Adam Raine used. OpenAI subsequently gave paid subscribers the option to return to GPT-4o, raising questions about users’ emotional attachment to AI models.
CEO Sam Altman told The Verge that while OpenAI believes less than 1% of users have unhealthy relationships with ChatGPT, the company is monitoring the issue.
Broader Industry Impact
About a dozen bills regulating AI chatbots have been introduced in state legislatures across the United States. OpenAI worked with state Attorneys General from California and Delaware, along with advocacy groups including Common Sense Media, to develop its parental control framework.
The company has committed to continuing refinement of safety features guided by expert input, stating it will share progress updates and remains “accountable for the choices we make.”
For more information on OpenAI’s parental controls, visit: OpenAI Parental Controls