Anthropic to pay authors $1.5 billion in copyright settlement
In a significant development, AI firm Anthropic has proposed a $1.5 billion settlement to compensate authors whose works were downloaded without authorization and used to train the company's AI chatbot, Claude.
The settlement, currently under review in a San Francisco court, is intended to spare Anthropic from being ordered to pay significantly more at trial. The proposal responds to allegations that around 500,000 books and other texts, obtained from two pirated online databases, were used to train Claude.
The proposed compensation amounts to roughly $3,000 (around 2,500 euros) per affected work. However, the settlement does not specify how this money will be distributed among the affected authors, nor does it include any provisions to prevent future unauthorized use of authors' works by Anthropic or other AI firms.
The company currently faces copyright lawsuits from rights holders over training its AI model on illegally copied books. The ongoing lawsuit does not address any potential legal consequences for the operators of the pirated online databases.
Anthropic's use of copyrighted texts has been a contentious issue. Initially, the judge in the San Francisco case considered Anthropic's use of copyrighted texts potentially covered under "fair use." However, it subsequently emerged that Anthropic was aware the databases had been illegally created, potentially exposing the company to penalties of up to $150,000 per book at trial. This exposure led Anthropic to propose the settlement rather than risk higher penalties in court. Notably, the proposal does not include any compensation for potential damage to the authors' reputations or careers.
Claude is one of the most successful competitors to OpenAI's popular chatbot ChatGPT. AI models are trained on large amounts of text in order to generate meaningful responses, which makes the use of unauthorized data a significant concern for authors and rights holders.
Multiple lawsuits from copyright holders target AI companies for using protected works in AI training, but the case against Anthropic is the most high-profile to date.
If approved by the judge, the settlement will mark a significant step in addressing the unauthorized use of authors' works by AI firms. It remains to be seen, however, whether it will deter other AI companies from similar practices and whether it provides adequate compensation for the affected authors.