WASHINGTON — In a high-stakes standoff between Silicon Valley ethics and Pentagon demands, Anthropic CEO Dario Amodei has formally rejected an ultimatum from the U.S. Department of Defense to strip ethical safeguards from its flagship artificial intelligence model, Claude[1][2].

Defense Secretary Pete Hegseth had given the AI company a strict deadline of 5:01 PM on Friday, February 27, to allow the military unrestricted "lawful use" of its technology[1][3]. However, in a defiant 800-word statement released Thursday evening, Amodei said the company "cannot in good conscience accede to their request," warning that unchecked military AI use could undermine democratic values[1][4][5].

The Two 'Red Lines'
While Anthropic currently holds a $200 million contract with the Pentagon and permits Claude to be used for intelligence analysis and operational planning, Amodei drew two non-negotiable red lines[1][4][6]:

Mass Domestic Surveillance: Amodei warned that using AI to automatically monitor citizens at scale is fundamentally incompatible with a free society[4][6].

Fully Autonomous Weapons: Anthropic refuses to allow its AI to make final targeting or kill decisions in warfare without human oversight[1][4][6].

Although Pentagon officials publicly claim they do not intend to use Claude for these specific purposes, they insist that contractors cannot dictate how the military operates and must provide their tools without built-in restrictions[7][8].

Severe Repercussions Looming
By refusing to bend, Anthropic now faces severe retaliation from the Trump administration. The Pentagon has threatened to cancel the company's lucrative contracts and officially designate the San Francisco-based firm a "supply chain risk"[1][7]. Furthermore, defense officials have reportedly floated invoking the Defense Production Act, a wartime law that could compel Anthropic to hand over control of its technology[1][7].

Despite these threats, Amodei remains resolute, sparking a defining debate over the future of artificial intelligence, corporate responsibility, and modern warfare[2][4].