AI provider Anthropic mandates photo ID and selfies for user access
Anthropic says identity data will be used strictly for safety and legal compliance
Artificial intelligence developer Anthropic has implemented a new security measure requiring certain Claude users to submit government-issued identification before accessing specific platform features.
This move, confirmed via a Claude Support post on 18 April 2026, makes Anthropic one of the first major AI providers to utilise formal identity verification as a gatekeeping mechanism.
The process is managed by Persona Identities, a third-party infrastructure provider, and typically requires a valid passport, driving licence, or national identity card.
In many instances, users must also provide a live selfie captured via a smartphone or webcam to complete the check.
Anthropic has established strict criteria for this process, explicitly rejecting photocopies, screenshots, or digital IDs.
The company insists on physical, photo-bearing government documents, asserting that the entire procedure generally takes under five minutes.
Regarding data privacy, the organisation clarified that "verification data is used solely to confirm who you are and to meet our legal and safety obligations."
Crucially, Anthropic maintains that this sensitive information will not be used for AI model training or shared with third parties, except for legal compliance.
The company frames these requirements as essential for platform integrity and abuse prevention.
A spokesperson noted, "Being responsible with powerful technology starts with knowing who is using it," though the company has not disclosed which specific features trigger the checks.
Founded by former OpenAI executives, Anthropic has consistently marketed itself as a safety-focused AI research laboratory.
The latest policy reinforces that positioning, signalling a commitment to stringent usage enforcement and regulatory alignment as scrutiny of AI platforms intensifies.