Gossip Herald


Government's AI standoff: Who controls US military tech?

Trump administration effectively blacklisted Anthropic, while OpenAI swiftly acquired defense contract

By GH Web Desk

This week marked a pivotal shift in how artificial intelligence (AI) is integrated into national security efforts, and in who holds the reins of power.


By Friday evening, the Pentagon had classified Anthropic as a supply-chain risk, preventing its technology from being utilised by defense contractors. 

This move followed an order from President Donald Trump for federal entities to discontinue using Anthropic's AI tools, citing the firm's objection to the military’s proposed application of its Claude model.

Anthropic CEO Dario Amodei asserted that he could not "in good faith" approve the use of the technology for comprehensive domestic surveillance or for autonomously controlling weaponry, scenarios he said contradict the ethical standards upheld by the company.

Amidst this stalemate, Anthropic’s rival, OpenAI, announced an arrangement with the Department of Defense to utilise its own AI models within secure environments.

The disagreement has evolved into a broader debate between the Pentagon and the private sector, not only over contracts but over how these powerful systems are governed and used.

Anthropic contended that the government’s draft contract failed to reflect or enforce restrictions on the system’s use for surveillance and autonomous weaponry.

Defense authorities countered that they require the ability to deploy Claude for any "lawful use" — granting the military significant latitude, even though extensive domestic surveillance is prohibited by various laws.

Dean Ball, a senior fellow at the Foundation for American Innovation, referred to the impasse as "uncharted territory" stemming from conflicting values: Anthropic’s insistence on contractual boundaries for its technology and the Pentagon’s belief that defense policy should outweigh corporate interests.

"For both parties, this is a matter of principle," Ball told Business Insider.

Soon after Anthropic was designated a supply-chain risk, OpenAI published a blog post stating that its accord with the Pentagon included a framework with safeguards akin to those proposed by Anthropic.

OpenAI argues that its pact is designed to uphold these boundaries through multifaceted protections, and that any use involving autonomous weaponry or surveillance must align with existing regulations and Department of Defense directives.

In a series of Ask-Me-Anything-style discussions, OpenAI CEO Sam Altman said he was prepared for potential disputes over the legitimacy of specific government demands.

Nonetheless, he emphasised that OpenAI would not consent to government use of its technology for mass domestic surveillance.