Google and Pentagon allegedly reach agreement on 'any lawful' AI use

Google has entered into a confidential agreement allowing the US Department of Defense to employ its AI models for “any lawful government activity,” The Information reports.

The report comes shortly after Google staff asked CEO Sundar Pichai to prevent the Pentagon from utilising its AI, citing worries about its potential use for “inhumane or extremely harmful purposes.”

If verified, this agreement would align Google with OpenAI and xAI, which have also made confidential AI arrangements with the US government.

Anthropic previously had a similar arrangement until it was banned by the Pentagon for declining to remove weapons- and surveillance-related safeguards from its AI models, as the Department of Defense had demanded.

Citing a single unnamed source “familiar with the details,” The Information reports that under the contract both parties have agreed that Google’s AI systems will not be used for large-scale domestic surveillance or autonomous weaponry “without proper human supervision and control.”

Nonetheless, the contract also states that it does not grant Google “any authority to control or veto legitimate government operational decisions,” suggesting that the agreed limitations may be recommendations rather than legally enforceable commitments.

In a statement to Reuters, a Google representative said the company maintains that AI should not be used for large-scale domestic surveillance or autonomous weaponry without adequate human control.

“We are convinced that offering API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, signifies a responsible way to assist national security,” Google stated to the publication.