AI company implements family safety features and GPT-5 update following adolescent self-harm incident

AI Developer OpenAI Striving to Make Artificial Intelligence Not Only Smarter, but Also Safer Following Incidents of Misuse Linked to ChatGPT

OpenAI, the renowned artificial intelligence (AI) research organisation, is set to roll out significant changes to its chatbot services within the next month. The updates aim to provide a safer and more supportive environment, particularly for younger users, in response to a series of tragic incidents.

The core of these changes lies in the introduction of a new real-time router that will direct sensitive conversations to OpenAI's more advanced reasoning models, such as GPT-5 and o3. These models are trained to take more time, consider context, and reason before responding, making them well-suited for providing helpful, possibly even life-saving, interventions when users express distress.
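
For readers curious about what such routing might look like in practice, the sketch below is purely illustrative: OpenAI has not published its router, and the model names, keyword list, and helper functions here are assumptions used only to show the general pattern of handing sensitive messages to a slower reasoning model.

    # Illustrative sketch only: a toy router that sends messages flagged as
    # sensitive to a slower "reasoning" model. Model names and the keyword
    # heuristic are placeholders, not OpenAI's actual implementation.

    DEFAULT_MODEL = "fast-chat-model"         # placeholder for a general-purpose model
    REASONING_MODEL = "deep-reasoning-model"  # placeholder for a slower reasoning model

    SENSITIVE_MARKERS = ("suicide", "self-harm", "hurt myself", "overdose")

    def is_sensitive(message: str) -> bool:
        # A real system would use a trained classifier, not keyword matching.
        text = message.lower()
        return any(marker in text for marker in SENSITIVE_MARKERS)

    def route(message: str) -> str:
        # Pick which model should handle this turn of the conversation.
        return REASONING_MODEL if is_sensitive(message) else DEFAULT_MODEL

    if __name__ == "__main__":
        for msg in ("What's the capital of France?", "I keep thinking about hurting myself"):
            print(f"{msg!r} -> {route(msg)}")

In a production system, the classification step would itself be a model trained to recognise signs of acute distress rather than a keyword list, but the routing decision it feeds is the same shape.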

One of the key goals is to prevent situations like the one involving 16-year-old Adam Raine, who discussed suicide plans with ChatGPT before taking his own life in April. To address such concerns, OpenAI is expanding its network of specialists to include experts in eating disorders, substance use, and adolescent health, so the models are better equipped to handle a wider range of sensitive topics.

To further enhance user safety, OpenAI is introducing parental controls. Parents will be able to link their accounts with their teens' accounts and apply age-appropriate behaviour rules, and they will receive alerts if the system detects that their child is in acute distress.

However, these changes have not been without criticism. Some question the reliability of the system in detecting distress in real time, while others express concerns about the amount of control parents will have and whether the company's safeguards are more about optics than substantive protection.

To address these concerns, OpenAI has also assembled an Expert Council on Well-Being and AI, whose guidance has shaped model training and evaluation. OpenAI's Global Physician Network, meanwhile, comprises more than 250 doctors across 60 countries.

It is worth noting that no publicly available information from recent months names the members appointed to OpenAI's Expert Council on Well-Being and AI. Separately, Adam Raine's California-based parents have filed a wrongful-death lawsuit against OpenAI.

OpenAI's new real-time router can shift conversations to models designed for deeper thinking when sensitive topics arise. This proactive approach is part of OpenAI's 120-day initiative to preview and roll out improvements this year. The company is also expanding its collaborations with outside experts as it deploys these more capable models, aiming to provide a more responsive and supportive experience for its users.
