In case you missed it, AI assistant ChatGPT and its owner, OpenAI, face a lawsuit alleging that the platform helped sixteen-year-old Adam Raine plan and execute his suicide. On the heels of this, the company has revealed that it is “rolling out Parental Controls within the next month.”
This won’t deflect the criticism already levelled at the platform for allegedly inducing psychosis in some users. Still, more safety features from ChatGPT and OpenAI, assuming they work, could reduce the risk of similar harm going forward. Just don’t expect that to happen in the near future. The company adds that “[s]ome of this work will move very quickly, while other parts will take more time.”
Don’t ask ChatGPT that
Improved parental controls, however, should arrive soon: “within the next month,” if OpenAI’s timeline holds. At that point, parents will be able to link their ChatGPT account with their child’s, provided the child is at least thirteen, the minimum age for an AI account and a common yardstick for social media and internet accounts generally.
Once that’s done, via an emailed invitation link, parents can “[c]ontrol how ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default.” What those rules are isn’t explained, and they may still be in the planning phase. Parents will also be able to control chat history and memory, enabling or disabling both remotely. Finally, the new controls will notify parents “when the system detects their teen is in a moment of acute distress.”
Read More: OpenAI launches ChatGPT Go, its cheapest subscription plan – but it’s India-only for now
It all sounds laudable enough, and these technical features should arrive faster than other promised measures, like convening an “Expert Council on Well-Being and AI” or “adding even more clinicians and researchers to our network, including those with deep expertise in areas like eating disorders, substance use, and adolescent health.” They have the advantage of not requiring any retraining of the backend systems, which also means that little about ChatGPT itself will have changed by the time the rollout takes place.
A more cynical take is that shunting responsibility onto parents, for ChatGPT’s tendency to converse in a way that keeps users engaged, up to and including handing out suicide methods for ‘research purposes’, gives OpenAI some breathing room on legal liability. After all, parental controls, whether or not they work, introduce doubt about any failings in the AI itself, shifting the blame to parents when ChatGPT supplies a simple method for creating sarin gas in response to a question about lung infections.