Anthropic's Claude Code launches safer 'auto mode'
Anthropic has introduced a new "auto mode" for Claude Code that lets the AI make permission decisions on a user's behalf.
The company says the feature gives developers a safer middle ground between constantly supervising the AI and granting it unchecked freedom.
Claude Code can operate autonomously for users, which is useful but also carries risks, such as inadvertently deleting files, leaking confidential information, or executing harmful code or hidden commands.
Auto mode is designed to mitigate these risks by detecting and blocking hazardous actions before they occur, at which point the software can either retry the task or ask the user to intervene.
Auto mode is currently available as a research preview to users on Team plans. Anthropic says availability will soon extend to Enterprise and API users.
Anthropic cautions that the tool is still experimental and does not eliminate risk entirely, and advises developers to run it in "isolated environments."
