Pentagon Labels Anthropic a Supply Chain Risk as AI Ethics Clash with U.S. Military Demands
A decision made at the heart of the U.S. capital—inside the Pentagon—has been described as an “administrative declaration of war” against one of the world’s most prominent artificial intelligence companies. U.S. Secretary of Defense Pete Hegseth formally designated Anthropic as a “supply chain risk,” a classification traditionally reserved for companies linked to adversarial powers, such as the Chinese tech giant Huawei.
Behind the shocking designation lies a deeper battle over principles. Anthropic and its CEO, Dario Amodei, refused Pentagon demands to remove what officials described as “ethical safeguards” from the company’s AI model, Claude.
The company insisted on maintaining restrictions that prevent its artificial intelligence from being used for mass surveillance of Americans or for fully autonomous weapons targeting. Officials in the Trump administration, however, reportedly viewed these safeguards as constraints that could undermine U.S. military superiority, leading to the company being classified as a national security risk. The designation effectively bars defense contractors from using Anthropic’s technologies in military projects.
While Anthropic was being pushed out, another AI giant was stepping in. The U.S. Department of State announced the adoption of OpenAI’s models as a primary sovereign AI tool for government use, in what reports described as a “smart and opportunistic” deal.
Under the agreement, OpenAI will integrate its systems into classified networks used by the Pentagon and the State Department, offering significant flexibility to meet operational needs. The move effectively positions the company as the new digital brain of American diplomacy and defense.
