The U.S. Senate recently held a contentious hearing on Meta (formerly Facebook), where former senior executive Sarah Wynn-Williams testified before Congress that the company not only tacitly cooperated with the Chinese Communist Party but also, through its open-source "Llama" AI model, indirectly helped China develop AI technology comparable to ChatGPT. The testimony has drawn widespread attention from national security circles, industry, and the AI open-source community.
Wynn-Williams noted that the rise of Chinese AI startup DeepSeek is closely tied to Meta's open-source Llama model. DeepSeek launched a generative AI model able to compete with OpenAI's for a reported cost of just $6 million, which she argued shows how much support Llama has provided for China's AI development.
This new AI player, backed by the Chinese government, has raised concerns that U.S. open-source technology could be "reverse weaponized," enabling China to make breakthrough advances in military and surveillance applications.
According to Wynn-Williams, Meta had been secretly briefing Chinese officials on key technologies, including AI, since 2015, with the aim of "helping China defeat U.S. competitors."
She further accused Meta of pursuing "Project Aldrin," a plan to build a physical data pipeline between the U.S. and China; senior management allegedly ignored warnings from cybersecurity experts that the link could become a backdoor for the Chinese Communist Party.
She emphasized: "The only thing preventing China from accessing U.S. user data through this channel is congressional intervention."
The disclosure came amid an intensifying U.S.-China tech war, with the U.S. government steadily tightening export controls on advanced AI chips to slow China's progress in generative AI.
"The current challenge is how to balance national security and innovation encouragement," noted Prabhu Ram, Vice President of CyberMedia Research consulting firm.
He believes that if the allegations are true, they will cause a major blow to global AI technology confidentiality and transfer prevention mechanisms, potentially forcing the U.S. to re-examine public-private sector cooperation and even establish new international AI regulations.
Wynn-Williams also revealed that Meta internally built a "virality counter": any post exceeding 10,000 views triggered a review process with manual filtering by a "chief editor." She said the mechanism applied not only to content in China but also to Hong Kong and Taiwan, raising free-speech concerns among democratic allies.
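As described, the mechanism amounts to a simple view-count threshold that routes content to human reviewers. The sketch below is purely illustrative, based only on the 10,000-view figure cited in the testimony; every name in it is hypothetical, and it is not Meta's actual system.

```python
# Hypothetical illustration of a "virality counter": the testimony describes it
# only at a high level (a view-count threshold that routes posts to human review).
# All identifiers here are invented for this sketch.

from dataclasses import dataclass

VIRALITY_THRESHOLD = 10_000  # threshold cited in the testimony


@dataclass
class Post:
    post_id: str
    views: int
    flagged_for_review: bool = False


def check_virality(post: Post, review_queue: list) -> None:
    """Route any post that crosses the view threshold to a manual review queue."""
    if post.views >= VIRALITY_THRESHOLD and not post.flagged_for_review:
        post.flagged_for_review = True
        review_queue.append(post.post_id)  # a "chief editor" would then review it


# Example: a post crossing 10,000 views gets queued for review
queue: list = []
check_virality(Post("p123", views=10_500), queue)
print(queue)  # ['p123']
```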
Senator Richard Blumenthal described the system as an "Orwellian censorship tool," a reference to the totalitarian state in George Orwell's novel "1984."
From Language Classes to Business Ambitions: Did Zuckerberg Personally Lead the China Strategy?
According to the testimony, Meta CEO Mark Zuckerberg personally led the push into China, even taking weekly Mandarin lessons to deepen engagement and cooperation with Chinese officials.
Wynn-Williams put it bluntly: "He draped himself in the American flag and claimed to be a patriot, yet spent a decade building an $18 billion business in China."
Meta Refutes Allegations: Testimony "Disconnected from Reality"
Facing the string of allegations, a Meta spokesperson pushed back, calling the claims "disconnected from reality and full of errors," and stressed that Meta does not operate its services in China and that Zuckerberg's business interest in China has been public knowledge for years.
Even so, the episode is widely expected to push Congress toward stricter oversight of large tech companies.
The Double-Edged Sword of Open Source: Is Llama an Engine of Innovation or a National Security Risk?
Llama, the open-source AI model released by Meta, has long been viewed as a key driver of global AI democratization. It lets developers freely train and deploy AI on their own infrastructure without relying on closed commercial models, dramatically lowering the barrier to entry.
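To make the "lowered barrier to entry" point concrete, here is a minimal sketch of self-hosting an open-weight Llama checkpoint with the Hugging Face transformers library. The model ID, hardware assumptions, and library choice are illustrative assumptions, not details from the testimony; access to the gated weights requires accepting Meta's license.

```python
# A minimal sketch of "deploying Llama on your own infrastructure" using the
# Hugging Face transformers library. Assumes you have accepted Meta's license
# for the gated "meta-llama/Llama-2-7b-hf" weights and are logged in with a
# Hugging Face token; the exact model ID and hardware are assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed example; any open-weight Llama checkpoint works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a single GPU
    device_map="auto",          # requires the `accelerate` package
)

# Once the weights are downloaded, inference runs entirely on local hardware,
# with no dependency on a closed commercial API.
inputs = tokenizer("Open-weight models let anyone", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same openness that makes this workflow possible is what critics say makes downstream uses, including those by foreign actors, effectively impossible to restrict.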
That very openness, however, makes national-security risks hard to contain. Greyhound Research CEO Sanchit Vir Gogia pointed out: "We need regulatory tools aimed at the AI models themselves, not just at hardware. The old framework no longer works."
Critical Moment for AI Regulation: What's the Next Step?
Wynn-Williams' revelations and the ongoing congressional investigation have pushed international AI cooperation and technology exports into a new phase. The U.S. still collaborates frequently with China on AI research, but amid concerns that Beijing could militarize the technology, that cooperation may change dramatically.
"If regulations are too strict, it might actually harm the US's own innovation and leading position," Prabhu Ram warned, "We should develop precise, targeted regulation and strengthen law enforcement."