Gaming: Anthropic says it has identified thousands of 'fraudulent accounts'...
'Industrial-scale distillation' is underway, apparently.
The question of what data AI models are trained on, and the legitimacy of that data, is a thorny one. Anthropic found itself defending its use of copyrighted material to train its Claude AI in the US last year, a case that eventually resulted in a ruling that its scraping of copyrighted works fell under fair use.
However, the company eventually agreed to pay a $1.5 billion settlement over claims that it pirated copies of several authors' works. I mention this because Anthropic has recently taken to X to complain about "industrial-scale distillation attacks" on Claude, perpetrated by what it says are more than "24,000 fraudulent accounts" that have generated over 16 million exchanges with the AI chatbot, thereby "extracting its capabilities to train and improve their own model."
Source: PC Gamer