Ever wanted to train an LLM for someone else, free of charge? If so, great. Users of Anthropic’s Claude large language model are being asked to consent to their chat data, including coding work, being used to train its models, to “help us deliver even more capable, useful AI models.”
Claude Free, Pro, and Max users, as well as anyone using Claude Code under those accounts, are being asked to decide whether or not their chat data can be used to train the AI. In return for a supposedly bolstered artificial intelligence, users will have their data retained for five years. If they agree, of course.
A Claude around the head
Anthropic says that existing users have until 28 September to accept or decline the new terms of service. Those who agree will have their chats and other data retained for five years from that point, with the same data pumped back into training the models they use. Anthropic frames the deal like this:
“By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations. You’ll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.”
Those who choose to keep their chats private will a) not have their conversations used as training data, and b) continue to have their information retained for only 30 days at a time. Whether the company honours that hinges on a great deal of trust, since how is anyone going to check? Regardless, Anthropic says users can change their choice at any time by heading to the Privacy Settings page.
Those who opt not to make a choice at all will find their hands forced come 28 September: they will have to pick one way or the other before regaining access to their AI assistant. Claude for Work, Claude Gov, and Claude for Education users, along with anyone accessing the service via the API, are exempt from the changes.



