Anthropic will retain Claude chat records for five years by default unless users explicitly opt out
Anthropic, a leading AI company, has launched a new Chrome extension for its AI model, Claude, designed specifically for search-based investigations.
The launch comes as Anthropic competes for a potentially lucrative deal with the US General Services Administration, which aims to integrate AI into government systems and offload citizen-facing workloads from human staff.
The rollout of the new Chrome extension is limited to 1,000 users. Notably, the extension does not exempt Pro or Max subscribers, who pay $20 and $100 a month respectively for access to Claude, from Anthropic's data collection policy.
Anthropic's updated data retention policy has drawn attention. Conversations a user deletes will not be retained, but chats flagged for objectionable content, such as discussions of nuclear weapons, may be kept for up to seven years.
The new retention length applies only to new or resumed chats and coding sessions. Customers who opt in will have their data retention window extended from 30 days to five years (1,826 days). Even customers who opt out will still have their conversations stored for 30 days.
Anthropic is aware of criminals using Claude for computer intrusions and remote-worker fraud. The company is giving customers on its Free, Pro, and Max plans one month to opt out of having their chats stored for five years by default and used for training.
The new data collection policy will not affect commercial, educational, or government customers, nor will it affect API use through Amazon Bedrock, Google Cloud's Vertex AI, and other commercial partners.
Technical issues are being addressed before a larger rollout of the program. Existing users will receive a popup asking whether they want to opt out of a new "Help improve Claude" function, and new users will be asked a similar question during app setup.
Anthropic has already taken steps to prevent misuse of its technology, including blocking one North Korean attempt to abuse its AI engine, and says it remains on guard against further abuse.
Interestingly, Google is among the Anthropic partners unaffected by the new data retention procedure, since API use through Vertex AI is exempt. Anthropic has not disclosed every detail of the updated retention policy, but says the retained data will support model development and safety improvements.