The duality of AI: accelerating innovation and a crisis of trust
Generative AI technology is revolutionizing productivity across industries, but it also produces harmful side effects that erode social trust, an intangible asset. Deepfakes and AI-generated misinformation spread with a sophistication that exceeds the limits of human cognition, dramatically raising the cost for individuals of discerning the truth. This paper categorizes the negative effects of AI into three types: "exacerbating the digital divide," "excessive reliance on digital technology," and "abuse for crime and illegal activities," aiming to diagnose these risks and explore effective countermeasures.
Three main types of negative effects of AI: divide, over-reliance, and crime
First, the digital divide (AI Divide) has moved beyond simple questions of access and evolved into a gap in "quality of use," with the compounding effect of productivity gains exacerbating inequality. Small and medium-sized enterprises (SMEs) in particular, lacking capital and data, face a significant risk of being pushed out of the market in competition with large corporations. Second, over-reliance on AI leads to a "de-skilling" phenomenon, eroding inherent human problem-solving abilities. At the same time, the work of verifying AI output, and the responsibility for it, can itself become an excessive burden on individuals. Third, crime and illegal acts manifest as financial fraud and public opinion manipulation using deepfakes (such as Sora 2.0) and voice synthesis. This even gives rise to a "liar's dividend," in which genuine evidence itself comes under suspicion, undermining the foundation of social truth.
Solutions for Restoring Trust: Combining Web 3.0 Technologies with Institutions
To address the negative effects of AI, a comprehensive approach encompassing technology, policy, and education is crucial. Technologically, there is an urgent need to introduce trust infrastructure based on Web 3.0. This includes using Decentralized Identifiers (DID) to cryptographically prove a sender's identity and prevent impersonation, leveraging Zero-Knowledge Proofs (ZKP) to verify credentials without exposing personal information, and linking the C2PA standard with blockchain to transparently track content provenance. Policy-wise, it is necessary to strengthen penalties for AI-related crimes and impose proactive moderation obligations on platforms, while simultaneously bridging the divide by providing AI infrastructure support to SMEs. Educationally, programs covering all age groups should cultivate the habit of verifying information sources and build the critical thinking needed to understand algorithmic bias.
Building human-centered governance for the AI ecosystem
AI is a double-edged sword, bringing both limitless possibilities and immense risks to humanity. More important than the speed of technological development is how we use technology safely and responsibly. Governments, businesses, and civil society need to collaborate to build a flexible governance framework that does not hinder technological innovation while upholding human-centered values. Only when trust is guaranteed by Web 3.0 technologies and a safety net is built through law and education can AI truly become a tool for human prosperity.
※ Please refer to the full text of the submission for details.
