AI 'extinction' warning prompts push for US safety rules
The warning comes amid growing fears over powerful AI
An advocacy group is demanding Donald Trump's administration screen advanced AI models for security risks to win lucrative government contracts.
According to a Reuters report, the group, Americans for Responsible Innovation, sent a letter to US officials calling for methods to vet models for cyberattack and weapons capabilities.
Anthropic's 'Mythos' model could reportedly make complex cyberattacks easier, posing major national security risks. The group said companies must pass such a review to be eligible for government contracts, a requirement that would apply to firms spending $100 million or more a year on training models.
The push follows a chilling report commissioned by the US State Department, which highlighted the possibility of AI leading to "human extinction" through weaponisation or mass cyberattacks. Some employees at top AI labs have reportedly expressed similar concerns, saying safety is often an afterthought in the race to build more powerful systems.
The White House is already taking some action. The US Center for AI Standards and Innovation (CAISI) has voluntary review agreements with top AI labs.
These labs include OpenAI, Google DeepMind, Microsoft, and xAI. The deals allow the government to evaluate new models for risks related to biosecurity and chemical weapons.
However, Americans for Responsible Innovation urged CAISI to develop mandatory requirements, and wants Congress to create a permanent enforcement office within the Department of Commerce. These demands build on President Biden's 2023 executive order on AI, which also emphasised safety testing and risk-management standards for government use.
More recently, the General Services Administration has issued draft contract clauses. These would impose new, detailed requirements on government contractors who use or provide AI systems.
The proposals mandate "reasonable technical, administrative, physical, and organisational safeguards". They include a 72-hour deadline for reporting security incidents.
The proposed rules also demand that AI systems be "truthful", prioritise "historical accuracy, scientific inquiry, and objectivity", and function as a "neutral, nonpartisan tool".
This reflects a wider, ongoing debate about so-called "woke AI". There are growing concerns about the potential for ideological bias in these powerful systems. The draft also includes a strict prohibition on the use of "foreign AI systems". This covers any components developed or controlled by non-US entities.
California applies a similar spending threshold to its own safety reporting requirements, which the state enacted for large-scale AI developers last year.
