Introduction: Unaudited AI is a hacker's ATM.
In Part 1, we exposed the systemic flaws in the OpenClaw architecture. Today, we walk through three real-world "crime scenes." These cases demonstrate a harsh reality: in the realm of AI agents, the most basic oversights lead to the most catastrophic asset losses.

Case 1: 1.5 Million Credentials Leaked – The Fatal Cost of "Vibe Coding"
Background: A large-scale data breach occurred on Moltbook, the social platform built around OpenClaw AI agents.
• The core of the incident: More than 1.5 million agent credentials were leaked.
• The fatal cause: not a sophisticated zero-day vulnerability, but the complete failure of the most basic access control mechanisms.
• In-depth review: Moltbook's founder admitted that the entire system's code had been generated by AI ("Vibe Coding") and that he himself had not written a single line of it.
WEEX Labs observes: "Vibe coding" has lowered the development barrier to zero, but it has also pushed the security barrier below zero. AI can generate functionality, but it cannot yet generate "defensive intuition." AI code that has never passed a human security audit is a skyscraper built on sand.
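To make the failure concrete, here is a minimal sketch in Python (using FastAPI) of the ownership check whose absence lets any logged-in user read every record behind an endpoint. All names here (AGENT_DB, SESSIONS, the /agents/... route) are hypothetical illustrations, not Moltbook's actual code.

```python
# Minimal sketch of the access-control check Case 1 lacked.
# AGENT_DB, SESSIONS, and the route are hypothetical, not Moltbook's code.
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()

AGENT_DB = {"agent-42": {"credential": "not-a-real-secret"}}
SESSIONS = {"token-abc": "agent-42"}  # bearer token -> agent that owns it

def agent_for_token(authorization: str | None) -> str:
    """Resolve the caller's identity from a bearer token, or refuse."""
    if not authorization or not authorization.startswith("Bearer "):
        raise HTTPException(status_code=401, detail="missing credentials")
    agent_id = SESSIONS.get(authorization.removeprefix("Bearer "))
    if agent_id is None:
        raise HTTPException(status_code=401, detail="invalid token")
    return agent_id

@app.get("/agents/{agent_id}/credentials")
def read_credentials(agent_id: str,
                     authorization: str | None = Header(default=None)):
    caller = agent_for_token(authorization)
    # The ownership check: a valid login must not grant access to
    # *other* agents' records. Drop this comparison and any authenticated
    # caller can enumerate every credential in the database.
    if caller != agent_id:
        raise HTTPException(status_code=403, detail="not your agent")
    return AGENT_DB[agent_id]
```

Functional code like this is exactly what AI generates fluently; the two `raise` paths and the ownership comparison are the "defensive intuition" it tends to omit.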
Case 2: A "Treasure Map" Stored in Plaintext – The Infostealer Trojan's Precision Hunt
Background: In February 2026, a security company detected a new variant of the data-stealing Trojan Vidar that was specifically targeting OpenClaw users.
• Attack path: Hackers did not need to hunt for code vulnerabilities; they simply updated the Trojan's file-grabbing rules to scan the default directory ~/.openclaw.
• Scale of loss: the AI identities on over one million devices were at risk of takeover.
• Technical flaw: OpenClaw stores core secrets such as tokens and private keys in plaintext in a local configuration file.
WEEX Labs observes: "Open source" does not mean confidential data can be left out in the open. In a Web3 environment, local configuration files deserve the same protection as private keys. This design flaw turned the AI assistant into a "guide" leading hackers straight to the loot.
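As a sketch of the alternative, the snippet below keeps the agent token in the operating system's keychain via the third-party keyring package instead of a plaintext file, so a Trojan scanning ~/.openclaw finds nothing to grab. The service and entry names are hypothetical, not OpenClaw's real configuration schema.

```python
# Minimal sketch: store secrets in the OS keychain (macOS Keychain,
# Windows Credential Manager, or the Linux Secret Service) instead of a
# plaintext file. Assumes the third-party `keyring` package; SERVICE and
# ACCOUNT are hypothetical names, not OpenClaw's real schema.
import keyring

SERVICE = "openclaw"       # hypothetical keychain service name
ACCOUNT = "agent-token"    # hypothetical entry name

def store_token(token: str) -> None:
    # Encrypted at rest and gated by the user's login session, rather
    # than sitting in ~/.openclaw where any file-grabbing rule can read it.
    keyring.set_password(SERVICE, ACCOUNT, token)

def load_token() -> str:
    token = keyring.get_password(SERVICE, ACCOUNT)
    if token is None:
        raise RuntimeError("no stored token; run the login flow first")
    return token

if __name__ == "__main__":
    store_token("example-token-not-a-real-secret")
    print(load_token())
```

This does not make theft impossible, since malware running as the logged-in user may still query the keychain, but it removes the "treasure map" pattern of a single well-known plaintext file.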
Case 3: CVE-2026-25253 – A High-Risk RCE Vulnerability Where One Click Means Compromise
Background: In January 2026, OpenClaw urgently patched a high-risk vulnerability with a CVSS score as high as 8.8.
• Attack method: Cross-site WebSocket hijacking (CSWSH).
• The terrifying part: an unauthenticated remote attacker only needs to trick the victim into clicking a malicious link, then use the browser as a springboard to bypass the sandbox and execute arbitrary system commands on the host machine.
• Lesson learned: "local access" does not guarantee security.
WEEX Labs observes: This vulnerability once again demonstrates the importance of a "zero-trust architecture." In an era when AI agents routinely invoke system privileges, any click on an external link can compromise the entire system.
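For illustration, here is a minimal sketch of the standard CSWSH defense: validating the browser's Origin header during the WebSocket handshake. It uses the third-party websockets package; the port and allowed origin are hypothetical and unrelated to OpenClaw's actual patch.

```python
# Minimal CSWSH defense sketch for a local agent's WebSocket server,
# assuming the third-party `websockets` package. The port and origins
# are hypothetical, not OpenClaw's real configuration.
import asyncio
import websockets

# Only pages served by the local UI may connect. A malicious page at
# https://evil.example sends "Origin: https://evil.example" and is
# rejected during the handshake, before any command reaches the agent.
ALLOWED_ORIGINS = ["http://127.0.0.1:8080"]

async def handler(ws):
    async for message in ws:
        await ws.send(message)  # real agent logic would go here

async def main():
    # `origins=` tells the library to refuse handshakes (HTTP 403) whose
    # Origin header is not on the allow list; add None to the list if
    # non-browser clients that send no Origin header must also connect.
    async with websockets.serve(handler, "127.0.0.1", 9000,
                                origins=ALLOWED_ORIGINS):
        await asyncio.Future()  # run forever

asyncio.run(main())
```

Origin checks should be paired with per-connection authentication tokens; in a zero-trust posture, the fact that a request arrives on 127.0.0.1 proves nothing about who sent it.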
WEEX Labs Summary
These three cases expose the triple threat to AI security: uncontrolled code (Case 1), unencrypted data (Case 2), and unguarded boundaries (Case 3). If we chase the explosive productivity of AI without holding the security bottom line, every AI assistant we build may become a time bomb buried inside our own systems.