Anthropic, the technology company known for its chat-based AI model Claude, has announced a significant change in its data retention policy. The company plans to use chat transcripts and code sessions from the consumer versions of Claude for training new AI models.
This decision marks a shift for Anthropic, which had previously avoided using data from real interactions to improve its models. The change affects Claude Free, Pro, and Max, as well as Claude Code sessions.
For users who consent to data use, the retention period is extended to five years. The extension applies only to new or resumed chat and coding sessions; existing conversations that are not continued remain unaffected.
Anthropic emphasizes that it does not sell user data to third parties and that it uses a combination of tools and automated processes to filter or anonymize sensitive data. Users retain control over whether their data is used to train new AI models: new users are asked to decide during registration, and existing users can change their preference at any time in the privacy settings. The setting applies to new chats.
The decision to train on data from real interactions is driven by the insight such data provides into which answers are helpful and accurate, especially in AI programming assistance. The long development cycles of AI models account for the sharply extended retention period, and intense competition in the AI market is another factor.
Commercial offerings such as Claude for Work, Claude Gov, Claude for Education, and API usage via Amazon Bedrock and Google Cloud's Vertex AI are excluded from the new data use policy.
Users have until September 28, 2025, to decide whether their data may be used for training, and those who do not want to participate can opt out. Individually deleted chats will not be included in future training cycles.
This move by Anthropic is part of a broader trend in the AI industry, as companies seek to leverage real-world data to improve the performance and usefulness of their AI models. As always, users are encouraged to review and understand the privacy settings of the services they use.