Vitalik's new article: Envisioning the endgame of Web3, "open source and verifiable" will become the technical standard

Perhaps the most significant trend of this century so far can be summed up in the phrase "the internet has become real life." It began with email and instant messaging: private conversations that for millennia relied on word of mouth, pen, and paper migrated to digital infrastructure. Then came the rise of digital finance, encompassing both cryptocurrency and the digital transformation of traditional finance itself. Then digital technology permeated health: through smartphones, personal health trackers, and data inferred from consumer behavior, all kinds of information about our bodies is now processed by computers and computer networks. Over the next twenty years, I expect this trend to spread further still: to government affairs (eventually including elections), to monitoring physical and biological indicators and potential threats in public spaces, and ultimately, through brain-computer interfaces, to our own minds.

I believe these trends are inevitable: the benefits are too great, and in a highly competitive global environment, civilizations that reject these technologies will first lose competitiveness and, in turn, cede sovereignty to civilizations that embrace them. However, in addition to their powerful benefits, these technologies also profoundly affect power dynamics within and between nations.

The civilizations that stand to benefit most from this new wave of technology are not those that merely consume it, but those that produce it. Centrally planned "equal access" programs built around closed platforms and interfaces capture only a fraction of this value at best, and often fail outside pre-defined "normal" scenarios. Furthermore, in the coming technological landscape, the amount of trust we place in technology will increase significantly. A breach of that trust (for example, through backdoors or security vulnerabilities) would cause serious problems, and even the mere possibility of a breach forces people to fall back on an inherently exclusionary model of social trust, built on the question: "Was this made by someone I trust?" This situation cascades down the technology stack: the real holders of power become those who get to define the "exceptions."

To circumvent these problems, all technologies in the technology stack—including software, hardware, and biological technologies—need to possess two core, interrelated characteristics: true openness (that is, open source, including free licensing) and verifiability (ideally, end users should be able to verify it directly).

The internet is real life. We want it to be a utopia, not a dystopia.

The importance of openness and verifiability in health

The consequences of unequal access to technological production methods were starkly exposed during the COVID-19 pandemic. Vaccines were produced in only a few countries, leading to significant disparities in the timing of their arrival: wealthy nations received high-quality vaccines in 2021, while others received lower-quality vaccines in 2022 or 2023. While several initiatives sought to ensure equal access to vaccines, their effectiveness was limited by the fact that vaccine production relied on capital-intensive, proprietary manufacturing processes that could only be implemented in a few regions.

The second major problem with vaccines was the lack of transparency in both the underlying science and the communication strategy around it. Attempts to portray vaccines as "completely risk-free, with no side effects at all" were at odds with the facts and ultimately fueled public distrust of vaccines. That distrust has since escalated into the questioning of half a century of settled science.

In reality, both problems have solutions. For example, the vaccines developed by PopVax (funded by Balvi) not only have lower R&D costs but also a more open R&D process, which both reduces inequality in vaccine access and makes their safety and effectiveness easier to analyze and verify. In the future, we could go further and make verifiability an explicit design goal of vaccines from the outset.

Similar issues exist in the digital realm of biotechnology. When you talk to longevity researchers, they almost always mention that the future of anti-aging medicine lies in personalization and data-driven approaches. To provide precise medication recommendations and nutritional adjustments to individuals today, it's essential to understand their real-time physical condition. To achieve this, large-scale, real-time digital data collection and processing are crucial.

The same logic applies to defensive biotechnology aimed at preventing risks, such as epidemic prevention and control. The earlier an outbreak is detected, the more likely it is to be contained at the source; even if it cannot be contained, every week of earlier detection buys more time to prepare countermeasures and develop responses. During an ongoing outbreak, real-time knowledge of where it is spreading is also valuable for deploying countermeasures promptly: if infected people can self-isolate within one hour of learning they are ill, they spend roughly 72 times less time exposing others than if they "move around with the disease for three days" (72 hours); and if we can determine that 20% of locations cause 80% of the spread, targeted improvements to air quality in those places can further reduce transmission risk. Achieving this requires two things: (1) deploying a large number of sensors, and (2) sensors that can communicate in real time and feed information back to other systems.
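As a back-of-the-envelope check of the figures above (a sketch of the stated arithmetic only, not an epidemiological model):

```python
# Back-of-the-envelope sketch of the "72x" figure above: it compares the time an
# infected person spends circulating among others before isolating. This is an
# illustration of the stated arithmetic, not an epidemiological model.

hours_before_isolation_with_alert = 1          # self-isolates within 1 hour of being notified
hours_before_isolation_without_alert = 3 * 24  # "moves around with the disease for 3 days"

reduction_factor = hours_before_isolation_without_alert / hours_before_isolation_with_alert
print(f"Reduction in time spent exposing others: {reduction_factor:.0f}x")  # 72x
```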

Looking further ahead to more science-fiction-level technology, brain-computer interfaces could not only greatly improve productivity and help people understand each other better through "telepathic" communication, but also pave the way toward safer and more capable artificial intelligence.

If the infrastructure for biometric and health tracking (both at the personal and spatial levels) is proprietary, data will automatically flow into the hands of large corporations. These companies will have the power to develop applications based on this infrastructure, while others will be excluded. While they may grant limited access through APIs (application programming interfaces), such permissions are often restricted, could be exploited for "monopolistic rent-seeking," and could even be revoked at any time. This means that a small number of individuals and companies control the core resources of a crucial 21st-century technology field, limiting the potential for other entities to profit from it.

On the other hand, if this personal health data is not secure, hackers could exploit it for extortion or to manipulate insurance and healthcare pricing. If the data includes location information, it could even be used to target individuals for kidnapping; conversely, location data (a frequent target of hackers) can be used to infer your health status. And if a brain-computer interface is hacked, a malicious attacker could directly "read" (or, worse, "tamper with") your thoughts. This is no longer science fiction: studies have shown that a hacked brain-computer interface can cause users to lose motor control (see related attack examples here).

In summary, while these technologies can bring huge benefits, they also come with significant risks - and a strong emphasis on "openness" and "verifiability" is an effective way to mitigate these risks.

The importance of openness and verifiability in personal and commercial digital technologies

Earlier this month, I needed to fill out and sign a legally binding document while traveling abroad. Although my country has a national electronic signature system, I hadn't registered in advance. I ended up having to print the document, sign it by hand, then go to a nearby DHL location, spend a considerable amount of time filling out paper shipping forms, and finally pay for expedited shipping across the border. The whole process took half an hour and cost $119. That same day, I also needed to sign a digital transaction on the Ethereum blockchain—a process that took just 5 seconds and cost a mere $0.10 (to be fair, digital signatures can be completely free even without relying on blockchain).
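For readers unfamiliar with how lightweight digital signing is, here is a minimal sketch using Ed25519 keys via Python's widely used cryptography package (an illustration of ordinary digital signatures in general, not of Ethereum's specific signing scheme):

```python
# Minimal digital-signature sketch using Ed25519 (via the "cryptography" package).
# This illustrates ordinary digital signing in general, not Ethereum's ECDSA/keccak
# scheme; key management in real wallets is considerably more involved.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept on the signer's device
public_key = private_key.public_key()        # shared with whoever must verify

document = b"I, the undersigned, agree to the terms of this contract."
signature = private_key.sign(document)       # takes milliseconds, costs essentially nothing

try:
    public_key.verify(signature, document)   # raises if the document was altered
    print("Signature valid: document is authentic and unmodified.")
except InvalidSignature:
    print("Signature invalid: reject the document.")
```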

Cases like this are common in corporate and nonprofit governance, intellectual property management, and similar settings; "efficiency comparisons" of this kind appear in the business plans of most blockchain startups of the past decade. And the most prominent arena for exercising individual rights digitally is payments and finance.

Of course, all of this comes with a significant risk: what if the software or hardware is hacked? The cryptocurrency community recognized this risk early on. The permissionless and decentralized nature of blockchains means that once you lose access to your funds, there is no one to turn to for help: not your keys, not your coins. For this reason, the community has long explored solutions such as multi-signature wallets, social recovery wallets, and hardware wallets. In reality, though, the absence of a trusted third party in many scenarios is not an ideological choice but an inherent feature of the scenario itself. Even in traditional finance, trusted third parties fail to protect most people: only 4% of fraud victims, for example, recover their losses. And in scenarios involving the custody of personal data, a breach is in principle irreversible. We therefore need genuine verifiability and security, in software and ultimately in hardware.
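To make the social recovery idea concrete, here is a minimal k-of-n guardian sketch (an illustration under simplified assumptions, not any particular wallet's implementation): recovery is accepted only if at least k of n pre-registered guardians sign the request to rotate the account key.

```python
# Minimal k-of-n "social recovery" sketch: guardians pre-register public keys, and a
# recovery request (naming the replacement account key) is accepted only if at least
# `threshold` distinct guardians have signed it. Illustrative only; real social-recovery
# wallets (e.g. on-chain smart-contract wallets) involve many more safeguards.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Setup: three guardians (friends, family members, institutions) and a 2-of-3 policy.
guardian_keys = [Ed25519PrivateKey.generate() for _ in range(3)]
guardian_pubkeys = [k.public_key() for k in guardian_keys]
threshold = 2

# The owner lost their device; the recovery request names the replacement public key.
recovery_request = b"rotate-account-key-to:NEW_PUBLIC_KEY_FINGERPRINT"

# Two of the three guardians approve by signing the request.
approvals = [(0, guardian_keys[0].sign(recovery_request)),
             (2, guardian_keys[2].sign(recovery_request))]

def recovery_approved(request: bytes, approvals, pubkeys, threshold: int) -> bool:
    """Count distinct guardians whose signatures over `request` verify."""
    valid_guardians = set()
    for index, signature in approvals:
        try:
            pubkeys[index].verify(signature, request)
            valid_guardians.add(index)
        except InvalidSignature:
            pass  # ignore bad or forged approvals
    return len(valid_guardians) >= threshold

print(recovery_approved(recovery_request, approvals, guardian_pubkeys, threshold))  # True
```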

Importantly, in the hardware space, the risks we seek to protect against go far beyond simply determining whether the manufacturer is malicious. The core issue is that hardware development relies on a large number of external components, most of which are closed-source. A single oversight in any one of these components could lead to unacceptable security consequences. A paper has shown that even if software is proven to be "safe" in a standalone model, the choice of microarchitecture can undermine its side-channel resistance. Security vulnerabilities like EUCLEAK (an attack method) are more difficult to detect precisely because they rely on proprietary components. Furthermore, if AI models are trained on compromised hardware, backdoors could be implanted during the training process.

Another problem is that even if closed, centralized systems were perfectly secure, they would still introduce other drawbacks. Centralization creates persistent levers of power between individuals, companies, or nations: if your core infrastructure is built and maintained by a potentially untrustworthy company in a potentially untrustworthy nation, you are vulnerable to external pressure (see, for example, Henry Farrell's work on "weaponized interdependence"). This is precisely the problem cryptocurrencies aim to solve, but it extends far beyond finance.

The importance of openness and verifiability in digital civic technologies

I frequently interact with people from many walks of life who are exploring governance models better suited to the diverse situations of the 21st century. For example, Audrey Tang is working to upgrade existing, functional political systems by empowering local open source communities and adopting mechanisms such as citizens' assemblies, sortition, and quadratic voting. Others are starting from the ground up: some Russian-born political scientists have drafted a new constitution for Russia that explicitly guarantees individual freedom and local autonomy, emphasizes a peace-oriented, anti-aggression institutional design, and gives direct democracy unprecedented prominence. Still others, such as economists studying land value taxes and congestion pricing, are working to improve their own economies.

People may differ in how far they accept these ideas, but the ideas share one key feature: they all require high-bandwidth participation, which makes any viable implementation necessarily digital. Pen-and-paper records may suffice for simple property registration or elections held every four years, but they are wholly inadequate for anything demanding higher levels of participation and information throughput.

However, historically, security researchers have held varying attitudes toward digital citizen technologies like e-voting, ranging from skepticism to opposition. One study beautifully summarizes the core arguments against e-voting, stating:

"First, electronic voting technology relies on 'black box software'—the public cannot access the software code that controls voting machines. While companies claim to protect software to prevent fraud and competition, this also means the public is completely unable to understand the operating logic of voting software. It's not difficult for companies to manipulate software and fabricate false election results. Furthermore, voting machine vendors compete with each other, and there's no guarantee that they will produce equipment with the interests of voters and the accuracy of ballots in mind."

A large body of real-world cases has shown that this suspicion is not unfounded.

These objections apply equally to other similar scenarios. However, I predict that as technology advances, a "complete rejection of digitalization" approach will become unrealistic in more and more areas. Technology is driving the world toward greater efficiency (for both good and bad), and if a system refuses to adapt, people will gradually circumvent it, and its influence on individual and collective affairs will gradually diminish. Therefore, we need another approach: confront the challenge head-on and explore how to make complex technological solutions "secure" and "verifiable."

In theory, "verifiable security" and "open source" are two different concepts. Proprietary technology can certainly be secure—for example, aircraft technology is highly proprietary, yet commercial aviation remains an extremely safe mode of travel. However, what the proprietary model cannot achieve is "security consensus"—the ability for mutually distrustful entities to agree on its security.

Civic systems such as elections are the classic case where security consensus is needed. Another is forensic evidence in court. Recently, a Massachusetts court ruled a large amount of breathalyzer evidence invalid because the state crime lab was found to have concealed information about a widespread malfunction in the devices. The ruling stated:

"Were all the test results faulty? No. In fact, the breathalyzers in most of the cases did not have calibration issues. But investigators subsequently discovered that the state crime lab withheld evidence that the malfunction was more extensive than alleged, and Judge Frank Gaziano found that the due process rights of all the defendants involved were violated."

"Due process" in court essentially requires not only "fairness" and "accuracy," but also "consensus on fairness and accuracy" - if the public cannot confirm that the court is "acting in accordance with the law," society is likely to fall into a chaotic situation of "private relief."

Furthermore, openness itself has inherent value. It allows local communities to design governance, identity authentication, and other systems tailored to their own goals. If voting systems were proprietary, a country (or province, or city) seeking to experiment with new voting models would face significant obstacles: either convincing companies to develop their preferred rules as a "new feature," or developing and verifying them from scratch—undoubtedly significantly increasing the cost of political innovation.

In these areas, adopting the "open source hacker ethic"—a philosophy that encourages sharing, collaboration, and innovation—can empower local implementers, whether working individually, as part of a government, or as part of a corporation. This requires two conditions: widespread availability of open source tools that facilitate building, and the adoption of free licensing for infrastructure and codebases, allowing others to build upon them. If the goal is to narrow the power gap, then copyleft is crucial.

Another key area of civic technology in the coming years will be physical security. Over the past two decades, the ubiquity of surveillance cameras has raised numerous civil liberties concerns. Unfortunately, the rise of drone warfare means that forgoing high-tech security measures is no longer an option. Even if a country's own laws do not infringe on civil liberties, there is no real freedom if it cannot protect its citizens from unlawful interference by other countries (or malicious companies or individuals), and drones make such attacks far easier. Appropriate defenses are therefore needed, potentially including large numbers of counter-drone systems, sensors, and cameras.

If these tools are proprietary, data collection will be both opaque and highly centralized. If these tools are open and verifiable, we can explore a better solution: security devices will only output limited data in limited scenarios and automatically delete the rest. In this way, the future of digital physical security will be more like a "digital watchdog" than a "digital panopticon." We can imagine a world where public surveillance equipment must be open source and verifiable, and any citizen has the legal right to "randomly select public surveillance equipment, disassemble it, and verify its compliance." University computer clubs can even use this type of verification as a teaching practice.

The path to implementing openness and verifiability

We cannot avoid the deep integration of digital computing into every aspect of our individual and collective lives. If left unchecked, the future of digital technology will likely be one in which it is developed and operated by centralized corporations, serving the profit-making interests of a few, and backdoored by governments. The majority of the world's population will be unable to participate in its creation or assess its security. However, we can strive to find a better path.

Imagine a world like this:

You have a secure personal electronic device that combines the computing power of a phone with the security of a cryptographic hardware wallet. It’s not quite as auditable as a mechanical watch, but it’s pretty close.

All your instant messaging apps are encrypted, message-routing metadata is hidden through mix networks (mixnets), and all the code is formally verified.

You can be confident that private conversations remain private.

Your financial assets are standardized ERC-20 tokens held on-chain (or held by a server that publishes hashes and validity proofs to the blockchain so their correctness can be checked; a minimal sketch of this idea appears after this list), and are managed by a wallet controlled by your personal electronic device.

If your device is lost, you can restore access to your assets through a method of your choice (e.g., combining your other devices, the devices of family members, friends, or institutions—not necessarily government agencies: organizations like churches might offer this service if it’s easy enough).

Open-source Starlink-class infrastructure is operational, providing stable and reliable global communications without reliance on a handful of operators.

Your device locally runs an open-source, open-weights large language model (LLM) that scans your actions in real time, offers suggestions, automates tasks, and warns you when you are likely being misled or about to make a mistake.

The device’s operating system is also open source and formally verified.

You wear a personal health tracking device that works 24 hours a day. This device is also open source and auditable - you can access your health data at any time, and you can ensure that no one can access this information without your permission.

We have more advanced governance models: goals are set through a careful combination of democratic mechanisms such as sortition, citizens' assemblies, and quadratic voting, while dedicated methods are used to filter expert proposals for how to achieve those goals.

As a participant, you can be confident that the system is operating according to the rules you understand. Public spaces are equipped with monitoring equipment to track biological variables (such as carbon dioxide concentration, air quality index, the presence of airborne diseases, wastewater indicators, etc.).

But these devices (and all surveillance cameras and defensive drones) are open source and verifiable, and there is a legal framework in place to allow for random public inspections.
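To make the item above about a server that "publishes hashes to the blockchain" concrete, here is a minimal sketch of the underlying hash-commitment idea (a simplified illustration; production systems commit to Merkle roots and attach validity proofs rather than hashing the whole state):

```python
# Minimal hash-commitment sketch: a server holding your account data periodically
# publishes a hash of it to a public ledger; you (or anyone you authorize) can later
# check that the data the server shows you matches the published commitment.
# Illustrative only: real systems commit to Merkle roots and attach validity proofs.
import hashlib
import json

def commitment(account_state: dict) -> str:
    """Deterministically serialize the state and hash it."""
    canonical = json.dumps(account_state, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# The server publishes this digest on-chain (here, just stored in a variable).
state = {"owner": "alice", "balance_tokens": 1250}
published_digest = commitment(state)

# Later, the server hands you what it claims is your current state; you re-hash and compare.
claimed_state = {"owner": "alice", "balance_tokens": 1250}
assert commitment(claimed_state) == published_digest, "server data does not match its commitment"
print("Server-held data matches the on-chain commitment.")
```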

In such a world, we will have greater security, greater freedom, and more equal access to the global economy than we have today. But achieving this vision will require increased investment in a variety of technologies, including:

More advanced cryptography: I call zero-knowledge proofs (ZK-SNARKs), fully homomorphic encryption, and obfuscation the "Egyptian God Cards" of cryptography: they can perform arbitrary computations over data held by multiple parties, guaranteeing the correctness of the output while keeping the data and the computation private. This lays the foundation for far more robust privacy-preserving applications. Related tools will also play a key role: blockchains, which guarantee that data cannot be silently altered and that users cannot be excluded, and differential privacy, which further protects privacy by adding noise to data.
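As a small, concrete illustration of one of the tools named above, here is a sketch of the textbook Laplace mechanism from differential privacy (a standard construction, shown purely as an illustration rather than a production implementation): an aggregate count is released with noise calibrated so that no single individual's data meaningfully changes the output.

```python
# Sketch of the Laplace mechanism, the textbook differential-privacy construction:
# release an aggregate statistic with noise calibrated to the query's sensitivity,
# so that no single person's record meaningfully changes the published output.
import numpy as np

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count of records satisfying `predicate`.

    A counting query has sensitivity 1 (adding or removing one person changes the
    count by at most 1), so Laplace noise with scale 1/epsilon gives epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many people in a neighborhood reported flu-like symptoms this week?
reports = [{"symptoms": True}, {"symptoms": False}, {"symptoms": True}, {"symptoms": True}]
print(dp_count(reports, lambda r: r["symptoms"], epsilon=0.5))  # noisy value near 3
```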

Application and user-level security: An application is only truly secure if its security promises can be understood and verified by its users. This requires software frameworks that make high-security applications easier to build. More importantly, browsers, operating systems, and other middleware (such as locally running monitoring LLMs) must work together to verify applications, assess their risk level, and present that information clearly to users.

Formal verification: We can use automated proof methods to algorithmically verify that programs satisfy key properties (such as not leaking data, or not being modifiable by unauthorized third parties). The Lean programming language has recently become a popular tool in this area. These techniques are already being used to verify zero-knowledge proofs of the Ethereum Virtual Machine (EVM) and other high-value, high-risk cryptographic use cases, and they are beginning to spread more broadly. Beyond this, further breakthroughs are needed in other, more fundamental security practices.
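For a taste of what machine-checked properties look like, here is a tiny Lean 4 example (a toy example for illustration, not drawn from the EVM verification work mentioned above): we define a small function and prove that it satisfies a stated safety property for every possible input.

```lean
-- Toy Lean 4 example: define a function and machine-check a property it must satisfy.
-- (Illustrative only; real verification targets such as the EVM or ZK circuits are far larger.)

/-- Clamp a value so it never exceeds a given limit. -/
def clampToLimit (x limit : Nat) : Nat := min x limit

/-- The key property: the output never exceeds the limit, for every possible input. -/
theorem clamp_never_exceeds_limit (x limit : Nat) :
    clampToLimit x limit ≤ limit := by
  unfold clampToLimit
  exact Nat.min_le_right x limit
```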

Open-source, security-focused operating systems: These are emerging, including security-focused Android derivatives like GrapheneOS, minimalist and secure kernels like Asterinas, and Huawei's HarmonyOS, which has an open-source version and utilizes formal verification techniques. Many readers may question, "Since it's a Huawei system, it must have backdoors, right?" This view misses the core logic: regardless of who develops a product, as long as it's open and verifiable, the developer's identity shouldn't be a concern. This case clearly demonstrates that openness and verifiability can effectively combat the trend toward global technological fragmentation.

Secure open source hardware: If you can't ensure that the hardware actually runs the specified software and doesn't leak data in the background, then even the most secure software is useless. In this area, I focus on two short-term goals:

a. Personal secure electronic devices: the blockchain world calls these "hardware wallets", while open source enthusiasts call them "secure phones". But once you take seriously the dual needs of security and general-purpose functionality, the core of these two kinds of devices ends up converging.

b. Physical infrastructure in public spaces: This includes smart locks, the aforementioned biometric monitoring equipment, and various IoT technologies. To build public trust in these facilities, open source and verifiable security are essential prerequisites.

Secure open-source toolchains for building open-source hardware: Today, hardware design relies heavily on closed-source components. This significantly raises hardware R&D costs, adds licensing barriers to entry, and makes hardware difficult to verify: if the tools that generate a chip design are closed-source, developers cannot even establish what to verify against. Even well-established techniques such as scan chains often go unimplemented because key supporting tools are closed-source. This situation is not immutable.

Hardware verification techniques (e.g., IRIS, X-ray scanning): We need to be able to scan a chip and confirm that its logic matches the design exactly and that it contains no extra components that could tamper with it or exfiltrate data. Verification can be done in two ways:

a. Destructive verification: Auditors randomly purchase products containing chips as ordinary end users, disassemble the chips and verify whether their logic matches the design.

b. Non-destructive verification: with IRIS or X-ray scanning, in principle every chip can be inspected.

To achieve a "security consensus," the ideal state is to make hardware verification technology accessible to the general public. Currently, X-ray equipment is not widely available. This can be improved in two ways: first, by optimizing verification equipment (and chip verifiability design) to lower the barrier to entry; second, by supplementing "full verification" with simpler verification methods—for example, ID tag verification, which can be performed on a smartphone, and signature verification based on keys generated by physically unclonable functions. These methods can effectively verify key information such as whether the device comes from a known manufacturer batch that has undergone detailed third-party random sampling verification.

Open-source, low-cost, local environmental and biological monitoring devices: Communities and individuals should be able to independently monitor their environment and their own health and identify biological risks. These devices could take many forms, including personal medical devices like OpenWater, air quality sensors, general airborne disease sensors like Varro, and larger-scale environmental monitoring devices.

From vision to implementation: paths and challenges

Compared with conventional visions of technological development, the "full-stack open source and verifiable" vision differs in a key respect: it puts the protection of local sovereignty, the empowerment of individuals, and the realization of freedom first. Its security logic shifts from trying to completely eliminate every global threat to making systems more robust at every layer of the technology stack. Its notion of openness goes beyond centrally planned open access to APIs, to every layer of the stack being open to improvement, optimization, and re-use. And verification is no longer the exclusive privilege of proprietary audit firms (which may even be colluding with vendors and governments), but a basic public right and even a socially encouraged practice: verification is open to everyone, rather than something the public must accept passively in the form of "security promises."

This vision better fits the fragmented reality of the 21st-century global landscape, but the window for realizing it is tight. Centralized security solutions are advancing at an alarming pace; their core logic is to add centralized data-collection points, build in backdoors, and reduce verification to a single question: "does it come from a trusted developer or manufacturer?" Attempts to substitute centralized schemes for genuine open access have been going on for decades, from Facebook's early internet.org project to today's more sophisticated technology monopolies, each attempt more subtle than the last. We therefore face a dual task: accelerate the development and deployment of open source, verifiable technologies so they can compete with centralized solutions, and make clear to the public and to institutions that safer and fairer technology is not a fantasy but a real possibility.

If this vision is realized, we will arrive at a world that might be called "retro-futurist." On the one hand, we enjoy the dividends of cutting-edge technology: better health through more powerful tools, a more efficient and resilient way of organizing society, and defenses against threats old and new (such as epidemics and drone attacks). On the other hand, we regain the core character of the technology ecosystem of the 1900s: infrastructure is no longer a black box that ordinary people cannot touch, but something that can be taken apart, verified, and modified to fit one's own needs; anyone can step beyond the role of "consumer" or "app developer" and participate in innovation at any layer of the stack, whether optimizing chip designs or improving an operating system's security logic; and, most importantly, people can genuinely trust technology, confident that a device does what it claims to do and does not quietly exfiltrate data or act without authorization in the background.

Achieving full-stack open source and verifiable security isn't free—performance optimizations for hardware and software often come at the expense of reduced understandability and increased system vulnerability. The open source model also conflicts with most traditional business models. While the impact of these issues has been exaggerated, shifting public and market perceptions of open source and verifiable security will take time and can't be achieved overnight. Therefore, we need to establish a pragmatic short-term goal: prioritize building a full-stack open source and verifiable security technology system for high-security, non-performance-critical applications, encompassing both consumer and institutional scenarios, remote and local, and in hardware, software, and biomonitoring.

The rationale is that most scenarios with extremely high security requirements (such as storing health data, election voting systems, and financial key management) do not actually have demanding performance requirements; and where some performance is needed, it can be obtained through a combination of high-performance untrusted components and low-performance trusted components: for example, high-performance chips handle ordinary data while open-source, verified security chips handle sensitive information, meeting efficiency requirements while preserving security.
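To make the combination strategy concrete, here is a toy sketch (an architectural illustration only, with hypothetical component names): the untrusted, high-performance side handles the bulk of the data, while the trusted, verifiable side touches only the small security-critical step.

```python
# Toy sketch of the "fast untrusted component + slow trusted component" split described
# above. The untrusted side does the heavy lifting (hashing a large payload); the trusted,
# verifiable side only handles the small security-critical step (approving and signing the
# digest). Component names are hypothetical; this is an architectural illustration only.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class UntrustedFastProcessor:
    """Stands in for a high-performance, closed or unverified chip: bulk work only."""
    def digest(self, payload: bytes) -> bytes:
        return hashlib.sha256(payload).digest()

class TrustedSecureElement:
    """Stands in for a slow, open-source, verified chip: holds the key, signs small inputs."""
    def __init__(self):
        self._key = Ed25519PrivateKey.generate()   # never leaves the trusted component
    def approve_and_sign(self, digest: bytes) -> bytes:
        # In a real device this step would also display/confirm what is being approved.
        return self._key.sign(digest)

payload = b"x" * 10_000_000                  # large blob: handled by the fast, untrusted side
digest = UntrustedFastProcessor().digest(payload)
signature = TrustedSecureElement().approve_and_sign(digest)   # tiny input: trusted side only
print(f"signed {len(payload)} bytes via a {len(digest)}-byte digest; signature is {len(signature)} bytes")
```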

We don't need to pursue "ultimate security and openness in all areas"—that's neither realistic nor necessary. But we must ensure that in core areas directly related to individual rights, social equity, and public safety (such as healthcare, democratic participation, and financial security), "open source and verifiable" technology becomes the standard, allowing everyone to enjoy secure and trustworthy digital services.

Special thanks to Ahmed Ghappour, bunnie, Daniel Genkin, Graham Liu, Michael Gao, mlsudo, Tim Ansell, Quintus Kilbourn, Tina Zhen, Balvi volunteers, and GrapheneOS developers for their feedback and discussions.
