AI heist: Anthropic complaining about industrial-scale distillation attacks. Chinese models using Anthropic APIs to train their own. Anthropic and Google blocking OpenClaw users.



Anthropic complaining about distillation attacks. Modern era of AI wars.

image.png

https://x.com/AnthropicAI/status/2025997928242811253

https://x.com/bindureddy/status/2026196124516884946

https://x.com/GergelyOrosz/status/2026015104085496263

https://x.com/elonmusk/status/2026052687423562228

https://x.com/TheAhmadOsman/status/2026006905299079578

Google and Anthropic banning OpenClaw users

https://x.com/steipete/status/2025743825126273066

https://x.com/wildmindai/status/2025962010232438871

https://x.com/iamlukethedev/status/2025782621066899873

https://x.com/gothburz/status/2025979945126605061


Anthropic is complaining about one of the largest industrial-scale distillation attacks to date: Chinese AI labs are said to have used Anthropic's APIs to train their own models.
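Conceptually, this kind of API-based distillation is simple: query the teacher model through its paid API, collect the completions, and use the (prompt, completion) pairs as supervised fine-tuning data for your own student model. A minimal sketch, where the `teacher_api` stub is hypothetical and stands in for a real, paid model endpoint:

```python
import json

def teacher_api(prompt: str) -> str:
    # Hypothetical stand-in for a proprietary model API.
    # In a real distillation run this would be a paid network call.
    canned = {
        "Explain recursion.": "A function that calls itself on a smaller input.",
        "What is an API?": "An interface that lets programs talk to each other.",
    }
    return canned.get(prompt, "I don't know.")

def build_distillation_set(prompts):
    # Each (prompt, completion) pair becomes one supervised fine-tuning
    # example for the student model -- that pairing is the "distillation".
    return [{"prompt": p, "completion": teacher_api(p)} for p in prompts]

dataset = build_distillation_set(["Explain recursion.", "What is an API?"])
print(json.dumps(dataset[0]))
```

At scale, the only real difference is volume, millions of prompts instead of two, which is exactly the traffic pattern that providers try to catch with active monitoring.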

Sounds plausible, but it's also a bit hypocritical, since Anthropic itself uses copyrighted content from the web, social media, and books to train its AIs without asking. That is also a kind of knowledge distillation, just on a different level.

On the other hand, training AI models costs a lot of money, so the closed-source AI labs don't want their models used to train competing models, and they enforce this through restrictive terms of use and active monitoring.

Most recently, large AI token providers such as Anthropic and Google have also been blocking OpenClaw users, even those on a $200 subscription, and in some cases deleting entire Google accounts without warning. That's harsh.

What do you think? Should people be allowed to use AI models to train other AI models if they pay for the tokens?



9 comments

Well, that's their problem for letting people use their API for this, not the fault of those who use it to train AI


It's funny to see AI companies complaining about intellectual property rights


I think there should be clear guidelines and regulations on the limits of AI model training

!BBH


In my opinion, training AI models on other AI models is very beneficial, since both models could discover new things, but as competition in AI services gets tougher, this will become less and less possible


Anthropic versus OpenClaw, who is actually training whom here?


If someone is paying for a $200+ subscription, they should have the right to use that compute however they see fit, including training other models.


I think it's fair game for them to do so. Most AI models were trained on data that was possibly illegally obtained. You can build a model using someone else's model, but I think those models will have disadvantages the original doesn't
