Anthropic has quietly rolled out government-document identity verification for Claude, a first among major AI chatbots, stunning the privacy-conscious user base that had been leaving OpenAI en masse.
This week, Anthropic quietly announced it was requiring identity verification from some Claude users, including government-issued photo IDs and real-time selfies. It is the first large AI chatbot to adopt such a mechanism, and the silence surrounding the rollout is sparking a backlash from the very community the company attracted with its firm stance on privacy.
The context cannot be ignored: millions of users left OpenAI for Anthropic in February after OpenAI signed a contract to deploy AI on the Pentagon's classified networks, a deal Anthropic rejected over concerns about large-scale surveillance and autonomous weapons.
Daily registrations have hit record levels, with free users up 60% since January. Privacy-conscious users believed they had found a safe haven. Now, that same group may be the first asked to hand over their passports.
When will you be asked to verify, and who processes your data?
According to a help center page published on April 14, Anthropic has chosen Persona Identities, a popular KYC infrastructure in the financial sector, as its verification partner. Original, intact passports, driver's licenses, or national identity cards are accepted; photocopies, mobile identification, and student IDs are not. A live selfie may also be required in some cases.
The policy has not been implemented uniformly. Anthropic stated that verification will be triggered upon accessing “certain features,” during “periodic platform integrity checks,” or as part of safety and compliance measures, but did not specify which features are restricted or which behaviors trigger the request. The company has also not responded to media requests for clarification.
On data handling, Anthropic draws a clear line: documents and selfies are sent directly to Persona's servers, bypassing Anthropic's internal systems. The company says it remains the data controller and sets the terms, while Persona may use the information to verify identity and improve fraud detection. Data is encrypted in transit and at rest, is not used for model training, and is not shared with third parties for marketing purposes.
However, even careful commitments run up against infrastructural reality. The October 2025 Discord data leak, which exposed roughly 70,000 user identification documents submitted for age verification, is the most recent reminder that no system is fully immune to risk, no matter how large the provider.
This move is consistent with a direction Anthropic has been building gradually. Last December, the company deployed a classifier to detect users who identified themselves as minors, but many adult accounts were locked anyway, with entire project histories deleted and complaints left unresolved.
Accounts from territories where Anthropic does not officially operate also risk being banned. Chinese users accessing Claude through intermediaries are hit especially hard: matching a live selfie against a physical document is a barrier that fabricated information can almost never clear.




