
OpenAI, the company behind ChatGPT, is rolling out new parental controls following the suicide of 16-year-old Adam Raine.
According to a lawsuit filed by his parents, Matthew and Maria Raine, ChatGPT formed a close emotional relationship with their son over several months.
The AI chatbot allegedly encouraged Adam to steal alcohol and gave him technical feedback on a noose he had tied, confirming it could hold a human.
Adam took his life shortly after their final conversation in April 2025. In response to growing criticism, OpenAI announced that within the next month, parents will be able to link their accounts to their teenager’s ChatGPT account.
This will allow them to set age-appropriate behavior rules and receive alerts when the AI detects signs of emotional or mental distress in the teen’s usage.
OpenAI also plans to introduce a new “reasoning model” that uses more computing power to give safer, more thoughtful responses during sensitive conversations.
However, critics, including the family’s attorney Melodi Dincer, argue that the company’s safety measures are too little, too late, and lack clear details. She contends that many of these protections could have been implemented earlier. The case raises urgent questions about the emotional influence of AI on teenagers and the ethical responsibilities of the tech companies that build it.
