Gossip Herald

Pentagon moves to Grok as Anthropic refuses to drop AI safety rules

The Pentagon is integrating Grok into secretive missions despite major safety fears

By GH Web Desk

Government agencies have sounded the alarm over Elon Musk’s AI chatbot, Grok, as officials warn it is neither reliable nor safe enough for sensitive military applications.

Despite these grave concerns, the Wall Street Journal reports that the Pentagon is moving forward with integrating Grok into classified operations.

This shift appears to be a direct result of rival firm Anthropic refusing to lower its stringent safety guardrails for unrestricted military use.

The concerns surrounding the xAI model are significant. A General Services Administration (GSA) report recently concluded that “Grok-4 does not meet the safety and alignment expectations required for general federal use.”

The findings were particularly damning, labelling the system as “sycophantic and susceptible to corruption by biased data,” which creates safety risks that are exceptionally difficult to manage.

Furthermore, a classified review by the National Security Agency (NSA) in late 2024 identified specific security vulnerabilities within Grok that were notably absent in competitors like Claude.

Beyond these technical shortcomings, Grok has faced intense criticism for generating sexualised imagery, while security analysts fear bad actors could exploit the model through "data poisoning."

Experts remain sceptical about its readiness for the front line. Gregory Allen, a senior adviser at the CSIS think tank, stated: “I do not believe they are peers in performance right now across all of the capabilities that matter to a customer like the Department of War.”

Consequently, while the xAI logo appears on federal experimental platforms, Grok remains strictly excluded from general use due to these persistent red flags.