Good discussion of this on HN: https://news.ycombinator.com/item?id=47188473
I haven’t seen anyone say it was about retaining the IP. Anthropic says, and others agree, that it would be totally irresponsible to use current frontier AI systems for lethal autonomous weapon systems, even if you think LAWS themselves are okay: current AI systems are far too error-prone.
See https://www.anthropic.com/news/statement-department-of-war and https://www.anthropic.com/news/statement-comments-secretary-war
OpenAI’s Sam Altman says today that they’re gonna take Anthropic’s place on DoW classified networks (https://xcancel.com/sama/status/2027578652477821175#m).
Still, Anthropic being designated a “supply chain risk” is good news in one respect: it means Claude can no longer officially be used by the Pentagon or by any of its suppliers. That’s massive.