Worldcoin claims to achieve financial inclusion through identity verification, but critics warn it may sacrifice decentralization, privacy, and autonomy.
The crypto industry has never lacked controversy, but few projects have been scrutinized as intensely as Sam Altman's Worldcoin (since rebranded as World). The project promises to verify human uniqueness through iris scanning and to distribute WLD tokens globally, billing itself as a financial inclusion tool. Critics counter that its biometric methods are invasive and overly centralized, running contrary to the spirit of decentralization and digital privacy.
At the core of the controversy is the claim that a biometric system relying on proprietary hardware, closed verification methods, and centralized data pipelines is fundamentally incapable of true decentralization. "Decentralization is not just a technical architecture," Holonym Foundation co-founder Shady El Damaty told Cointelegraph, "but a philosophy that upholds user control, privacy, and autonomy. World's biometric model fundamentally violates this concept."
El Damaty noted that despite using tools such as multi-party computation (MPC) and zero-knowledge proofs (ZKPs), World's dependence on custom Orb hardware and centralized code deployment undermines its decentralization claims. "This design is essentially aimed at achieving its 'unique human identification' goal, but centralized power creates single points of failure and control risks, ultimately undermining the core promise of decentralization."
A World spokesperson disputed this, saying, "We do not use centralized biometric infrastructure," and emphasized that the World App operates in a non-custodial mode in which users always control their digital assets and World ID. The project stated that after the Orb generates an iris code, "iris photos are sent to the user's phone in an end-to-end encrypted data packet and immediately deleted from the Orb," and that iris codes are processed through anonymized multi-party computation, "never storing personal data".
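To make the multi-party computation claim concrete for readers unfamiliar with the technique, here is a minimal, purely illustrative sketch of additive secret sharing, the basic building block behind MPC schemes. This is not World's actual protocol or code; the names, parameters, and the stand-in "iris template" value are all hypothetical. The idea is that a sensitive value is split into random shares so that no single holder learns anything, and only the combination of all shares recovers the original:

```python
# Toy additive secret sharing (illustrative only, NOT World's implementation).
# A secret is split into n random shares; each share alone is uniformly
# random, and only the sum of all shares modulo MODULUS recovers the secret.
import secrets

MODULUS = 2**64  # all arithmetic is done modulo this fixed value

def split_into_shares(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n shares that sum to it modulo MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    # The final share is chosen so the total wraps around to the secret.
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Combining every share (and only every share) recovers the secret."""
    return sum(shares) % MODULUS

iris_template = 0xDEADBEEF  # hypothetical stand-in for a biometric code
shares = split_into_shares(iris_template, 3)
print(reconstruct(shares) == iris_template)  # True
```

In a real MPC deployment the shares would be held by independent parties who jointly compute on them (for example, checking uniqueness) without ever reassembling the raw value in one place; this sketch shows only the splitting step.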
Evin McMullen, co-founder of Privado ID and Billions.Network, stated that World's biometric model is not "inherently contrary" to decentralization, but still faces challenges in specific implementation aspects such as data centralization, trust assumptions, and governance mechanisms.
A Familiar Pattern of Tech Overreach?
El Damaty drew parallels between OpenAI's large-scale collection of "unauthorized user data" and World's collection of biometric information. He believes this reflects a common pattern of tech companies pursuing data harvesting under the guise of innovation, warning that such practices may erode privacy rights and normalize surveillance.
"The irony is obvious," El Damaty pointed out, "OpenAI built its model by scraping massive amounts of unauthorized user data, and now Worldcoin is extending this aggressive data collection approach to the biometric realm." A 2023 California class-action lawsuit accused OpenAI and Microsoft of scraping 300 billion words without consent, including personal data from millions of users (including children); in 2024, the Canadian Media Alliance also sued OpenAI for unauthorized use of its content in training ChatGPT.
World strongly rejects this comparison, emphasizing its independence from OpenAI and stating that it neither sells nor stores personal data, relying instead on privacy-preserving technologies such as multi-party computation and zero-knowledge proofs. Critics nonetheless question its user onboarding process. Although the project says it ensures informed consent through multilingual guides, in-app learning modules, brochures, and help centers, skepticism remains. "World currently targets populations in developing countries who are more easily enticed and often unaware of the risks of 'selling' such personal data," El Damaty warned.
Since its launch in July 2023, World has encountered regulatory resistance in multiple countries. Governments in Germany, Kenya, Brazil, and others have expressed concerns about user biometric data security. The latest setback occurred in Indonesia, where local regulators suspended the company's registration certificate on May 5th.
Risks of Digital Exclusion
As biometric systems like World become more prevalent, their long-term impacts are being questioned. Although the company claims its model is inclusive, critics point out that relying on iris scans to access services may exacerbate global inequality.
"When biometric data becomes a prerequisite for accessing basic services, it essentially creates social stratification," El Damaty stated. "Those willing (or forced) to surrender the most sensitive information gain access, while those who refuse are excluded."
World insists its protocol does not mandate biometric verification for basic services: even without a verified World ID, users can still access some functions. It added that the system uses zero-knowledge proofs (ZKPs) to ensure actions cannot be traced back to a specific ID or to biometric data.
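The "prove without disclosing" idea behind zero-knowledge proofs can be illustrated with a classic Schnorr proof of knowledge. The sketch below is purely didactic and uses deliberately tiny, insecure parameters; it is not World's ZKP construction, and every name and number here is an assumption for illustration. The prover convinces a verifier that they know a secret exponent x (think: a credential) satisfying y = g^x mod p, without ever revealing x:

```python
# Toy Schnorr proof of knowledge with Fiat-Shamir (illustrative only;
# tiny insecure parameters, NOT World's actual ZKP scheme).
import hashlib
import secrets

# g = 2 generates a subgroup of prime order q = 11 in the integers mod p = 23.
p, q, g = 23, 11, 2

def challenge(t: int) -> int:
    """Fiat-Shamir: derive the challenge by hashing the commitment."""
    return int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % q

def prove(x: int) -> tuple[int, int]:
    """Prover: show knowledge of x with y = g^x mod p, without revealing x."""
    r = secrets.randbelow(q)      # random nonce
    t = pow(g, r, p)              # commitment
    c = challenge(t)
    s = (r + c * x) % q           # response blends the nonce and the secret
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier checks g^s == t * y^c (mod p) using only public values."""
    c = challenge(t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = 7               # secret "credential", never sent to the verifier
y = pow(g, x, p)    # public value derived from the secret
t, s = prove(x)
print(verify(y, t, s))  # True
```

The verifier sees only (y, t, s), none of which reveals x, yet the algebra g^s = g^(r+cx) = t * y^c only works out if the prover really knew x. Production ZKP systems replace this toy group with cryptographically large parameters and far richer statements, but the privacy principle is the same.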
There are also concerns that World could evolve into a surveillance tool, especially in authoritarian countries, because centralized storage of biometric data could be misused by authorities. World rejects this, emphasizing that its ID protocol is "open-source and permissionless" and that even if governments adopt it, they cannot link user actions to biometric data.
The controversy has also spread to governance. Although World claims its protocol is moving towards decentralization (such as open-source contributions and governance chapters in its whitepaper), critics believe substantial user ownership is still lacking. "We need to build systems that can verify humanity without centrally storing biological or personal data," El Damaty noted. "This means using zero-knowledge proofs, decentralized governance, and open standards, empowering individuals rather than corporations."
The urgency of developing secure identity authentication systems is not without reason. As AI technology advances, the boundaries between human and non-human actors in cyberspace are becoming increasingly blurred.
"The risks at the intersection of AI and identity verification are not limited to any specific government system or region," said McMullen of Privado ID. She pointed out that without reliable verification mechanisms for humans and AI agents, the digital ecosystem faces increasingly serious threats—from misinformation and fraud to national security vulnerabilities.
McMullen added: "This is a national security nightmare—unaccountable, unverifiable non-human actors can now infiltrate global systems and networks, while traditional systems were not designed to address such verification and contextual logic."

