Author: Vitalik Buterin
Translator: fire fire
Today, Worldcoin, the Web3 crypto project co-founded by OpenAI founder Sam Altman, officially announced its launch. The Worldcoin team has designed a biometric identity verification system called World ID, which uses an eye-scanning device called the Orb to verify identity by scanning the user's iris.
On the same day, Ethereum founder Vitalik Buterin published an article, "What do I think about biometric proof of personhood?", laying out his views on biometric proof of personhood. The following is a full translation:
People in the Ethereum community have long been trying to build a decentralized solution to proof of personhood, one of those perennially thorny but valuable puzzles. Proof of personhood is a limited form of real-world identity that asserts that a given registered account is controlled by a real person (and a different real person than every other registered account), ideally without revealing which real person it is.
There have been many previous attempts at this problem: BrightID, Idena, and Circles are representative examples. Some of these come with their own application (often a UBI token), and some have found use in Gitcoin Passport to verify which accounts are valid for quadratic voting. Zero-knowledge technology like Sismo adds privacy to many of these solutions.
Only recently have we seen the rise of a much larger and more ambitious proof-of-personhood project: Worldcoin.
Worldcoin was founded by Sam Altman, better known as the CEO of OpenAI. The idea behind the project is simple: AI will create a lot of wealth for humanity, but it may also kill a great many people's jobs, while making it nearly impossible to tell who is a human and who is a bot. So we need to plug both holes by:
(1) Creating a really good proof-of-personhood system, so that humans can prove they actually are human;
(2) Providing UBI to everyone.
Worldcoin is unique in that it relies on highly sophisticated biometrics: it uses a specialized piece of hardware called "the Orb" to scan each user's iris.
Worldcoin's goal is to produce large numbers of Orbs, distribute them widely around the world, and place them in public locations so that anyone can easily get an ID.
To its credit, Worldcoin is also committed to decentralizing over time. This means technical decentralization: running as an L2 on Ethereum using the Optimism stack, and protecting user privacy with ZK-SNARKs and other cryptographic techniques. It also means decentralized governance of the system itself.
Worldcoin has been criticized over the privacy and security of the Orb, over its token's design, and over the ethics of some choices the company has made. The Worldcoin project itself is, in fact, still under development. Others, however, have raised a more fundamental concern: whether biometrics — not just Worldcoin's eye-scanning kind, but also the simpler face-video uploads and verification games used in Proof of Humanity and Idena — can gain mass acceptance at all.
Critical voices are certainly not in short supply. The risks include unavoidable privacy leaks, further erosion of people's ability to browse the internet anonymously, coercion by authoritarian governments, and the difficulty of being secure and decentralized at the same time.
The rest of this post discusses these issues and walks through some arguments that may help you decide whether scanning your eyes in front of one of these new spheres is a good idea, whether we should give up on developing proof of personhood, and what the alternatives are.
01
What is proof of personhood and why is it important?
Proof of personhood is valuable because it tackles the concentration of power on today's internet without depending on centralized institutions, while leaking as little personal information as possible. If proof of personhood remains unsolved, decentralized governance (including "micro-governance" such as votes on social-media posts) remains a castle in the air.
Many major applications in the world today handle this by using government-backed identity systems such as ID cards and passports. This does solve the problem, but it makes a huge and unacceptable sacrifice in terms of privacy.

The two-sided risk faced by our current proof-of-humanity system
In many proof-of-personhood projects — not only Worldcoin but also "flagship applications" such as Circles — there is a built-in "token for every person" mechanism (also known as a "UBI token"): every user registered in the system receives some fixed quantity of tokens each day (or hour, or week). But there are many other applications, including:
- Airdrops for token distribution
- Token or NFT sales offering more favorable terms to less wealthy users
- Voting in DAOs
- Quadratic voting (and quadratic funding and attention payments)
- Protection against bot/sybil attacks in social media
- An alternative to CAPTCHAs for preventing DoS attacks
In each of these cases, the common thread is a desire to create mechanisms that are open and democratic, avoiding both centralized control by a project's operators and domination by its wealthiest users. The latter is especially important in decentralized governance.
In such cases, today's existing solutions rely on:
(1) highly opaque AI algorithms; or (2) centralized IDs, aka "KYC".
An effective proof-of-personhood solution would therefore achieve the security properties these applications need without the pitfalls of the existing centralized approaches.
02
What were the early attempts at Internet identity?
There are two main families of proof of personhood: social-graph-based and biometric.
Social-graph-based proof of personhood relies on some form of vouching: if Alice, Bob, Charlie, and David are all verified humans, and they all say that Emily is a verified human, then Emily is probably also a verified human.
Security is often reinforced with incentives: if Alice says Emily is human but it turns out she is not, both Alice and Emily may be penalized. Biometric proof of personhood involves verifying some physical or behavioral trait of Emily's that distinguishes humans from bots (and individual humans from each other). Most projects use a combination of the two techniques.
The four systems I mentioned at the beginning of the post are roughly as follows:
(1) Proof of Humanity: you upload a video of yourself and provide a deposit. To be approved, an existing user must vouch for you, and a period of time passes during which anyone can challenge you. If there is a challenge, the Kleros decentralized court determines whether your video was genuine; if it was not, your deposit is forfeited and the challenger receives a reward.
(2) BrightID: you join a video-call "verification party" with other users, where everyone verifies each other. There is also a Bitu system for higher levels of verification, in which you can get verified if enough other Bitu-verified users vouch for you.
(3) Idena: You play the captcha game at a specific point in time (to prevent people from playing multiple times); part of the captcha game involves creating and verifying captchas, which are then used to verify other people.
(4) Circles: existing Circles users vouch for you. Circles is unique in that it does not try to create a "globally verifiable ID"; instead, it creates a graph of trust relationships, where someone's trustworthiness can only be judged from the vantage point of your own position in that graph.
03
How does Worldcoin work?
Every Worldcoin user installs an app on their phone that generates a private and public key, much like an Ethereum wallet. They then visit an "Orb" in person. The user stares into the Orb's camera while showing it a QR code generated by their Worldcoin app, which contains their public key. The Orb scans the user's eyes and uses sophisticated hardware scanning and machine-learned classifiers to verify:
(1) That the user is a real human;
(2) That the user's iris does not match the iris of any other user who has previously used the system.
If both checks pass, the Orb signs a message approving a specialized hash of the user's iris scan. The hash gets uploaded to a database — currently a centralized server, intended to be replaced with a decentralized on-chain system once they are confident the hashing mechanism works. The system does not store full iris scans; it stores only the hashes, which are used to check uniqueness. From that point on, the user has a "World ID".
World ID holders are able to prove that they are a unique human being by generating a ZK-SNARK to prove that they hold a private key that corresponds to a public key in the database, without revealing which key they hold. So even if someone rescans your iris, they won't be able to see any actions you take.
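The membership-proof idea can be illustrated with a toy sketch. The snippet below (a simplification with hypothetical names, not Worldcoin's actual protocol) keeps registered public keys as leaves of a Merkle tree, and a holder proves their key is among the leaves by presenting a Merkle path. Note that a plain Merkle proof reveals which leaf is being proven; the real system wraps an equivalent check inside a ZK-SNARK precisely so that the verifier learns nothing about which key is involved.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """SHA-256 over the concatenation of the inputs."""
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(leaves):
    """Build a Merkle tree; returns the list of levels, leaves first."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:  # duplicate the last node on odd-length levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def merkle_proof(levels, index):
    """Collect sibling hashes from leaf to root for the leaf at `index`."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])  # the sibling at this level
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    """Recompute the root from a leaf and its sibling path."""
    node = leaf
    for sibling in proof:
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == root

# Registry of public-key commitments (illustrative keys only).
keys = [h(f"pubkey-{i}".encode()) for i in range(8)]
levels = build_tree(keys)
root = levels[-1][0]

# The holder of key 3 convinces a verifier their key is in the registry.
proof = merkle_proof(levels, 3)
assert verify(root, keys[3], 3, proof)
```

In the real construction, the statement "I know a private key whose public key is a leaf under this root" is proven inside a SNARK, so the index and the key itself never leave the user's device.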
04
What are the main problems in the construction of Worldcoin ?
There are four main risks people worry about:
(1) Privacy
A registry of iris scans may reveal information. At the very least, if someone else scans your iris, they can check it against the database to determine whether you have a World ID. And iris scans have the potential to reveal more information than that.
(2) Accessibility
Unless there are enough orbs readily available to anyone in the world, world IDs will not be reliably accessible.
(3) Centralization
The Orb is a hardware device and we cannot verify that it is constructed correctly and has no backdoors. So even if the software layer were perfect and fully decentralized, the Worldcoin Foundation would still have the ability to insert backdoors into the system, allowing it to create any number of false human identities.
(4) Security
Users' phones could be hacked; users could be coerced into scanning their irises while presenting a public key that belongs to someone else; and it may be possible to 3D-print "fake people" that can pass the iris scan and obtain World IDs.
It is important to distinguish between:
(1) Issues specific to choices made by Worldcoin ;
(2) Problems that inevitably exist in any human biometric proof;
(3) Problems that will exist in any proof of personhood in general. For example, signing up for Proof of Humanity means publishing your face on the internet.
Joining a BrightID verification party doesn't quite do that, but it still exposes who you are to a lot of people. And joining Circles exposes your social graph publicly. Worldcoin is far better at protecting privacy than any of these.
On the other hand, Worldcoin depends on specialized hardware, which introduces the challenge of trusting the Orb's manufacturer to have constructed it correctly — a challenge with no parallel in Proof of Humanity, BrightID, or Circles. Perhaps in the future someone other than Worldcoin will create a different specialized-hardware solution with a different set of tradeoffs.
05
How can a biometric proof-of-personhood scheme address privacy concerns?
The most obvious and largest potential privacy leak of any proof-of-personhood system is linking every action a person takes to their real-world identity. This data leak is enormous — arguably unacceptably so. Fortunately, it is easy to fix with zero-knowledge proof technology.
Instead of signing directly with a private key whose corresponding public key sits in a database, a user can construct a ZK-SNARK proving that they own a private key whose corresponding public key is somewhere in the database, without revealing which specific key it is. This can be done generically with tools like Sismo, and Worldcoin has its own built-in implementation. It illustrates an important virtue of "crypto-native" proof of personhood: this basic step of providing anonymity is something that essentially no centralized identity solution offers.
A more subtle privacy leak is the mere existence of a public registry of biometric scans. In the case of Proof of Humanity, that registry contains a huge amount of data: a video of every participant, making it perfectly clear who all the participants are to anyone in the world who cares to look.
In Worldcoin's case, the leak is far more limited: the Orb computes locally and publishes only a "hash" of each person's iris scan. This hash is not a regular hash like SHA-256; rather, it is produced by a specialized algorithm based on machine-learned Gabor filters, which tolerates the imprecision inherent in any biometric scan and ensures that successive hashes of the same person's iris produce similar outputs.

Figure: percentage of bits that differ between two scans of the same person's iris (blue) vs. between scans of two different people's irises (orange)
These iris hashes leak only a small amount of data. If an adversary can scan your iris (by force or covertly), they can compute your iris hash themselves and check it against the database to see whether you are participating in the system.
This ability to check whether someone is already registered is necessary for the system itself, to prevent people from registering multiple times, but it also has the potential to be abused. Beyond that, iris hashes may leak some amount of medical data (sex, ethnicity, perhaps certain medical conditions), but this leak is far smaller than what almost any other mass data-collection system in use today captures (street cameras, for example). On the whole, the privacy of storing iris hashes seems adequate to me.
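The "fuzzy hash" matching property described above — the same iris yielding similar codes across scans, different irises yielding very different ones — can be modeled with a toy bit-vector sketch. This is not the Gabor-filter algorithm Worldcoin actually uses; it just illustrates how a fractional-Hamming-distance threshold separates re-scans of one person from scans of different people, and how a uniqueness check against the registry would work:

```python
import random

HASH_BITS = 256          # length of the toy iris code
MATCH_THRESHOLD = 0.32   # fractional Hamming distance below which codes "match"

def hamming_fraction(a: int, b: int) -> float:
    """Fraction of differing bits between two fixed-width bit strings."""
    return bin(a ^ b).count("1") / HASH_BITS

def rescan(code: int, flip_prob: float, rng: random.Random) -> int:
    """Simulate a noisy re-scan by flipping each bit with probability flip_prob."""
    noise = 0
    for bit in range(HASH_BITS):
        if rng.random() < flip_prob:
            noise |= 1 << bit
    return code ^ noise

def is_duplicate(candidate: int, registry: list) -> bool:
    """Uniqueness check: does the candidate match any registered code?"""
    return any(hamming_fraction(candidate, c) < MATCH_THRESHOLD for c in registry)

rng = random.Random(0)
alice = rng.getrandbits(HASH_BITS)
bob = rng.getrandbits(HASH_BITS)

registry = [alice]
# A noisy re-scan of Alice (~10% flipped bits) still matches her registered code,
# while Bob's unrelated code (expected ~50% differing bits) does not.
assert is_duplicate(rescan(alice, 0.10, rng), registry)
assert not is_duplicate(bob, registry)
```

The threshold sits between the two distributions in the figure above: well above the noise between scans of the same iris, well below the distance between different irises.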
06
What are the accessibility issues in biometric proof-of-personhood systems?
Specialized hardware introduces accessibility problems precisely because specialized hardware is not very accessible. Between 51% and 64% of sub-Saharan Africans own a smartphone today, a share expected to rise to 87% by 2030.
But while there are billions of smartphones, there are only a few hundred Orbs. Even with distributed manufacturing on a larger scale, it's hard to reach a world where there's an Orb within five kilometers of everyone.

But to its credit, Worldcoin has been working hard!
It is also worth noting that many other forms of proof of personhood have even worse accessibility problems. It is very hard to join a social-graph-based system unless you already know someone in the graph, which easily confines such systems to a single community in a single country.
Even centralized identity systems have learned this lesson: India's Aadhaar ID system is based on biometrics, because that was the only way to quickly onboard its huge population while avoiding massive fraud from duplicate and fake accounts (thereby saving enormous costs). Of course, Aadhaar as a whole is much weaker on privacy than anything being proposed at scale within the crypto community.
From an accessibility standpoint, the systems that perform best are actually ones like Proof of Humanity, where you can sign up using just a smartphone. But as we have seen, and as we will see, such systems come with all kinds of other tradeoffs.
07
What are the centralization issues in biometric proof-of-personhood systems?
There are three main issues:
(1) Centralization risk in the top-level governance of the system;
(2) Centralization risks specific to systems using dedicated hardware;
(3) If a proprietary algorithm is used to determine who the real participants are, there is a risk of centralization.
Any proof-of-personhood system must contend with (1): if the system uses incentives denominated in outside assets (e.g. ETH, USDC, DAI), it cannot be fully subjective, so governance risk is unavoidable.
Risk (2) is much more serious for Worldcoin than for Proof of Humanity or BrightID, because Worldcoin depends on specialized hardware and the other systems do not.
Risk (3) applies especially to "logically centralized" systems with a single verification authority, unless all the algorithms are open source and we have assurance that they are actually running the code they claim to be. For systems that rely purely on users verifying other users, this risk does not exist.
08
How does Worldcoin solve the problem of hardware centralization?
Currently, the Worldcoin-affiliated entity Tools for Humanity is the only organization making Orbs. However, the Orb's source code is mostly public: the hardware specs are in this GitHub repository, and the remaining parts of the source code are expected to be released soon.
The license is one of those "shared source, but not technically open source" licenses similar to the Uniswap BSL: in addition to prohibiting forks, it also prohibits uses the team deems unethical — they specifically list mass surveillance and cite three international civil-rights declarations.
The team's stated goal is to allow and encourage other organizations to create Orbs, and transition over time from Orbs created by Tools for Humanity to having some sort of DAO to approve and manage which organizations can make Orbs sanctioned by the system.
There are two ways this design could fail:
First, it could fail to actually decentralize. This is a common failure mode of federated protocols: one manufacturer ends up dominating in practice, and the system re-centralizes.
Second, the distributed manufacturing mechanism could prove impossible to secure. Here I see two risks:
(1) Bad Orb makers: even one Orb maker that is malicious or hacked can generate an unlimited number of fake iris-scan hashes and give them World IDs.
(2) Government restrictions on Orbs: a government that does not want its citizens participating in the Worldcoin ecosystem can ban Orbs from its country. Worse, it could even force citizens to scan their irises, giving the government access to their accounts, with no way for the citizens to respond.
To make the system robust against bad Orb makers, the Worldcoin team proposes regular audits of Orbs, verifying that they were built correctly, that key hardware components were built to spec, and that they have not been tampered with after the fact. This is a challenging task — essentially something like the IAEA's nuclear-inspections bureaucracy, but for Orbs. The hope is that even a very imperfect audit regime could greatly reduce the number of fake Orbs.
To limit the damage any single bad Orb maker can do, a second mitigation makes sense: World IDs registered via different Orb manufacturers — and ideally via different Orbs — should be distinguishable from each other. This information can be kept private and stored only on the World ID holder's device, but it should be provable on demand. That lets the ecosystem respond to (inevitable) attacks by removing individual Orb makers, or even individual Orbs, from the whitelist as needed. If we saw the North Korean government going around forcing people to get scanned, those Orbs, and any accounts they produced, could be immediately and retroactively disabled.
09
What are the security issues with proof of personhood in general?
In addition to issues specific to Worldcoin, there are issues that affect proof-of-personhood designs in general. The main ones I can think of are:
(1) 3D-printed fake people: people could use AI to generate photographs, or even 3D prints, of fake people convincing enough to be accepted by the Orb's software. Even if just one group does this, they can generate an unlimited number of identities.
(2) Sale of IDs: someone could present someone else's public key instead of their own when registering, giving that person control of the registered ID in exchange for money. This appears to be happening already. Besides outright sale, IDs can also be rented out for short-term use within an application.
(3) Phone Hacking: If someone's phone is hacked, the hacker can steal the keys that control their World ID.
(4) Government coercion to steal IDs: a government could force its citizens to get verified while presenting a QR code belonging to the government. In this way, a malicious government could acquire millions of IDs. In biometric systems this could even be done covertly: governments could use disguised Orbs to extract a World ID from everyone entering the country at passport control.
(1) is specific to biometric proof-of-personhood systems. (2) and (3) are common to both biometric and non-biometric designs. (4) is also common to both, although the techniques required differ considerably between the two cases; in this section I will focus on the biometric case.
These are fairly serious weaknesses. Some are already addressed in existing protocols, some can be addressed with future improvements, and some appear to be fundamental limitations.
How do we deal with fake people?
For Worldcoin, the risk here is much lower than for a system like Proof of Humanity: an in-person scan can examine many characteristics of a person and is much harder to fool than a mere deepfaked video. Specialized hardware is inherently harder to fool than commodity hardware, which in turn is harder to fool than digital algorithms verifying pictures and videos submitted remotely.
Could someone 3D-print something that eventually fools even specialized hardware? Probably. I expect that at some point we will see growing tension between the goals of keeping these mechanisms open and keeping them secure: open-source AI algorithms are inherently more vulnerable to adversarial machine learning. At some point in the more distant future, even the best AI algorithms might be fooled by the best 3D-printed fake people.
However, from my conversations with the Worldcoin and Proof of Humanity teams, it appears that neither protocol is currently seeing significant deepfake attacks, for the simple reason that hiring real low-wage workers to sign up on your behalf is very cheap and easy.
Can we prevent the sale of IDs?
In the short term, preventing this kind of outsourcing is hard: most of the world has not even heard of proof-of-personhood protocols, and if you tell someone they can earn $30 by holding up a QR code and scanning their eyes, they will do it.
Once more people understand what proof-of-personhood protocols are, a fairly simple mitigation becomes possible: allow people who have a registered ID to re-register, canceling the previous ID. This makes "selling an ID" much less credible, because the person who sold you the ID can simply re-register, canceling the ID they just sold. Getting to this point, however, requires the protocol to be very widely known and Orbs to be very widely accessible, so that registration on demand is practical.
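The re-registration mechanic can be sketched in a few lines. This is a toy in-memory model with hypothetical names — a real system would key on iris hashes and keep this state on-chain — but it shows why a sold ID is unreliable for the buyer: the seller can always come back and deactivate it.

```python
class IdentityRegistry:
    """Toy registry: one active World-ID-like key per iris hash.

    Re-registering with the same iris hash deactivates the previously
    issued key, which is what makes a sold ID untrustworthy to its buyer.
    """

    def __init__(self):
        self.active_key = {}   # iris_hash -> currently active public key
        self.revoked = set()   # keys canceled by a later re-registration

    def register(self, iris_hash, pubkey):
        old = self.active_key.get(iris_hash)
        if old is not None:
            self.revoked.add(old)  # cancel the previously issued ID
        self.active_key[iris_hash] = pubkey

    def is_valid(self, pubkey):
        return pubkey in self.active_key.values() and pubkey not in self.revoked

registry = IdentityRegistry()
registry.register("iris-A", "buyer-key")        # seller registers with buyer's key
registry.register("iris-A", "sellers-own-key")  # seller later re-registers
assert not registry.is_valid("buyer-key")       # the sold ID is now dead
assert registry.is_valid("sellers-own-key")
```

The whole deterrent rests on the seller being able to reach an Orb again — which is why wide Orb accessibility is a precondition for this mitigation.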
This is one reason why it is valuable to couple a UBI token to a proof-of-personhood system: the UBI token gives people an easy-to-understand incentive to learn about the protocol and sign up, and to re-register if they had previously registered on someone else's behalf, canceling that account.
Can we prevent coercion in biometric proof-of-personhood systems?
It depends on what kind of coercion we're talking about. Possible forms of coercion include:
- Governments scanning people's eyes (or faces) at border control and other routine government checkpoints, and using the scans to register their citizens;
- Governments banning Orbs within the country to prevent people from independently re-registering;
- Applications (possibly government-run) requiring people to "log in" by signing directly with their public key, letting the application see the corresponding biometric scan hash — and thus see the link between the user's current ID and any future ID they obtain by re-registering.
A common concern is that it makes it too easy to create "permanent records" that stay with a person for a lifetime.

It seems hard to prevent these situations entirely, especially when the users involved are unsophisticated. Users could leave their country and (re-)register with an Orb in a safer country, but that is a difficult and expensive process. And in a genuinely hostile legal environment, seeking out an independent Orb seems too difficult and risky.
One feasible mitigation is to require a person to speak a specific phrase while registering. That alone is enough to prevent covert scanning, and against overt coercion the registration phrase could even include a statement confirming that the person knows they have the right to re-register independently and may receive UBI tokens or other rewards by doing so.
If coercion is detected, the devices used to perform coerced registrations could have their access rights revoked. To prevent applications from linking people's current and previous IDs and building a "permanent record", the default proof-of-personhood app could lock the user's key inside trusted hardware, preventing any application from using the key directly without the intermediate anonymizing ZK-SNARK layer. If a government or application developer wanted to get around this, they would need to mandate the use of their own custom app.
With a combination of these techniques and active vigilance, it seems possible to harden the system against regimes that are truly hostile, and keep regimes that are merely mediocre (as much of the world is) honest. This could be done either by a project like Worldcoin or Proof of Humanity maintaining its own bureaucracy for the task, or by revealing more information about how an ID was registered (e.g. in Worldcoin, which Orb it came from) and leaving the classification work to the community.
Can we prevent ID renting (such as selling votes)?
Re-registration does not prevent the renting-out of an ID. This is fine in some applications: the cost of renting out your day's share of the UBI token will simply be the value of that day's share. But in applications such as voting, vote selling is a huge problem.
Systems like MACI prevent you from credibly selling your vote: you can always cast a later vote that invalidates your earlier one, and no one can tell whether you actually did so. However, if the briber controls the key you received at registration time, this does not help.
I see two solutions here:
(1) Run the entire application inside an MPC (multi-party computation). This would also cover the re-registration process: when a person registers, the MPC assigns them an ID that is separate from, and unlinkable to, their proof-of-personhood ID; when a person re-registers, only the MPC knows which account to deactivate. This prevents users from proving their actions to a briber, because every important step uses private information known only to the MPC.
(2) Decentralized registration ceremonies. Essentially, implement an in-person key-registration protocol that requires, say, four randomly selected local participants to jointly register someone. This ensures that registration is a "trusted" procedure that an attacker cannot observe.
Social graph based systems might actually perform better here, as they can automatically create native decentralized registration processes as a by-product of how they work.
10
Biometric proof of personhood vs. social-graph-based verification
Besides biometrics, the main contender for proof of personhood so far is social-graph-based verification. Social-graph-based verification systems all operate on the same principle: if a large set of existing verified identities all attest to the validity of your identity, then you are probably valid and should receive verified status too.

Figure: if only a few real users (accidentally or maliciously) vouch for fake users, basic graph-theory techniques can put an upper bound on the number of fake users the system verifies. Source: https://www.sciencedirect.com/science/article/abs/pii/S0045790622000611
Proponents of social graph-based verification often describe it as a better alternative to biometrics for the following reasons:
- It does not depend on dedicated hardware, making it easier to deploy;
- It avoids a perpetual arms race between attackers crafting ever-more-convincing fake people and Orbs that must be updated to reject them;
- No need to collect biometric data, more privacy protection;
- It's potentially more pseudonym friendly, because if someone chooses to split their internet life into multiple identities separate from each other, both identities can potentially be verified (but maintaining multiple real and separate identities sacrifices network effects and is costly, so it's not something an attacker can easily do).
The biometric approach gives a binary "is a human / is not a human" score, which is fragile: people who are accidentally rejected end up with no UBI at all, and potentially no ability to participate in online life. A social-graph-based approach could yield a more nuanced numerical score, which might of course be mildly unfair to some participants, but is unlikely to "un-person" anyone completely.
My take on these arguments is that I basically agree with them! These are the real strengths of social graph based approaches and should be taken seriously. However, it is also worth considering the weaknesses of social graph-based approaches: regional limitations, privacy leaks, risks of unequal centralization, etc.
11
Are human IDs compatible with pseudonyms in the real world?
In principle, proof of personhood is compatible with all kinds of pseudonyms. There is no single ideal form of proof of personhood; instead, we have at least three paradigms, each with its own unique strengths and weaknesses. A comparison chart might look like this:

Ideally, we would treat these three techniques as complementary and combine them all. Specialized-hardware biometrics have the advantage of security at scale, as Aadhaar in India demonstrates. They are very weak on decentralization, though this can be addressed by holding individual Orbs and their makers accountable.
General-purpose biometrics are easy to adopt today, but their security is degrading rapidly and may only hold up for another one to two years. Social-graph-based systems bootstrapped from a few hundred people socially close to the founding team are likely to face a constant tradeoff between missing large parts of the world entirely and being vulnerable to attacks within communities they cannot see. A social-graph-based system bootstrapped from tens of millions of biometric ID holders, however, could actually work. Biometric bootstrapping may work better in the short term, while social-graph-based techniques may be more robust in the long run, taking on a larger share of the responsibility over time as their algorithms improve.

Possible future hybrid development paths
The problem of building an effective and reliable proof-of-personhood system, especially one that reaches people far removed from the existing crypto community, seems quite challenging. I do not envy those attempting the task, and it may take years to find a formula that works.
In principle, the concept of proof of personhood seems very valuable. And while the various implementations carry risks, so does not having any proof of personhood at all: a world without it seems more likely to be one dominated by centralized identity solutions, money, small closed communities, or some combination of the three. I look forward to seeing more progress on all types of proof of personhood, and hope to see the different approaches eventually come together into a coherent whole.



