
Anthropic plans to utilize user conversations to fuel Claude's AI capabilities, but users have the option to opt-out if they choose.

The changes are intended to improve model safety and strengthen Claude's capabilities in areas such as programming, data analysis, and problem-solving.


Anthropic, the company behind the AI model Claude, has announced a significant change to its data collection policy. The new policy, set to take effect on September 28, 2025, aims to improve the safety and performance of Claude in tasks such as coding, analysis, and reasoning.

Under the new policy, conversations and coding sessions from Claude Free, Pro, and Max users can be used to train the model. The move follows a broader industry trend in which AI companies prioritize acquiring real-world usage data to improve their models.

By default, the training permission is switched on in the new policy. However, users have the option to opt out of their data being used for training if they so choose. Enterprise customers are exempt from these changes, similar to OpenAI's treatment of corporate clients.

Previously, Anthropic deleted prompts and responses after 30 days, with exceptions for policy violations, in which case data could be retained for up to two years. Under the new policy, chats from users who allow training will be stored for up to five years. Users who do not make a choice by the deadline will have data collection enabled by default.

The rollout has drawn criticism for how the choice is presented: the "Accept" button for the new terms is prominent, while the toggle controlling training permission is smaller and easy to overlook.

Anthropic had previously set itself apart from competitors by not using consumer chats for training. The reversal underscores how heavily AI companies now value human conversations as raw material for building smarter, safer models.

Like other companies building large language models, Anthropic needs fresh, real-world data to make its AI more capable, accurate, and competitive against giants such as OpenAI and Google.

Anthropic's blog post frames the change as a way to improve Claude's safety and performance. Retention for opted-in chats rises to five years, up from the previous maximum of two years, which applied only to policy violations. Users must make their choice by September 28, 2025.
