Anthropic is making some major changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company pointed us to its blog post about the policy changes when asked what prompted the move, we've formed some theories of our own.
But first, what's changing: previously, Anthropic did not use consumer chat data for model training. Now, the company wants to train its AI systems on user conversations and coding sessions, and it says it's extending data retention to five years for those who don't opt out.
That is a massive update. Previously, users of Anthropic's consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic's back end within 30 days "unless legally or policy-required to keep them longer" or unless their input was flagged as violating its policies, in which case a user's inputs and outputs might be retained for up to two years.
By consumer, we mean the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, which is how OpenAI similarly shields enterprise customers from data training policies.
So why is this happening? In its post about the update, Anthropic frames the changes around user choice, saying that by not opting out, users will "help us improve model safety, making our systems for detecting harmful content more accurate," and will help future Claude models improve "in skills like coding, analysis, and reasoning, ultimately leading to better models for all users."
In other words: help us help you. But the full truth is probably a little less selfless.
Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic's competitive position against rivals like OpenAI and Google.
Beyond the competitive pressures of AI development, the changes also seem to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face growing scrutiny over their data retention practices. OpenAI, for example, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.
In June, OpenAI COO Brad Lightcap called this "a sweeping and unnecessary demand" that "fundamentally conflicts with the privacy commitments we have made to our users." The court order affects ChatGPT Free, Plus, Pro, and Team users, although enterprise customers and those with zero data retention agreements are still protected.
What's alarming is how much confusion all of these changing usage policies are creating for users, many of whom remain oblivious to them.
In fairness, everything is moving quickly right now, so as the technology changes, privacy policies are bound to change too. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies' other news. (You wouldn't think Tuesday's policy changes for Anthropic users were very big news based on where the company placed this update on its press page.)
But many users don't realize the guidelines they agreed to have changed, because the design practically guarantees it. Most ChatGPT users keep clicking on "delete" toggles that aren't technically deleting anything. Meanwhile, Anthropic's rollout of its new policy follows a familiar pattern.
How so? New users will choose their preference during sign-up, but existing users face a pop-up with "Updates to Consumer Terms and Policies" in large text and a prominent black "Accept" button, with a much smaller toggle for training permissions below it in finer print.
As The Verge noted earlier today, the design raises concerns that users might quickly click "Accept" without noticing they're agreeing to data sharing.
Meanwhile, the stakes for user awareness couldn't be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in "surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print."
Whether the commission, now operating with just three of its five commissioners, is still keeping an eye on these practices today is an open question, and one we've put directly to the FTC.
