Can AI bots steal your cryptocurrency? Learn about the rise of digital thieves

ODAILY
03-18

Author: Callum Reid

Translated by: 0x deepseek, ChainCather

In an era when cryptocurrencies and AI are booming in tandem, digital asset security faces unprecedented challenges. This article examines how AI bots, with their capabilities for automated attack, deep learning, and large-scale penetration, are turning the crypto field into a new battlefield for crime: from precision phishing to the harvesting of smart contract vulnerabilities, from deepfake fraud to adaptive malware, attack methods have outgrown traditional human defenses. In this game of algorithms, users must stay wary of AI-empowered "digital thieves" while also making good use of AI-driven defense tools. Only by combining technical vigilance with sound security practices can we safeguard our wealth amid the turbulence of the crypto world.

TL;DR

  1. AI bots can self-evolve, automating crypto attacks at massive scale and operating far more efficiently than human hackers.

  2. In 2024, a single AI phishing campaign caused losses of nearly $65 million, and fake airdrop websites automatically drained the wallets of users who connected to them.

  3. GPT-3-level AI can analyze smart contract code for exploitable flaws; in one demonstration it detected a vulnerability similar to the one behind the $80 million Fei Protocol attack.

  4. By training on leaked password data, AI can build predictive cracking models, cutting the time needed to break weak-password wallets by as much as 90%.

  5. Deepfake videos and audio impersonating CEOs and influencers are becoming a new social engineering weapon for coaxing victims into transfers.

  6. AI-as-a-service tools such as WormGPT have emerged on the black market, letting even non-technical users generate customized phishing attacks.

  7. The BlackMamba proof-of-concept malware uses AI to rewrite its own code on every run, evading leading endpoint security tools in testing.

  8. Hardware wallets that keep private keys offline defend against virtually all remote AI attacks; users who self-custodied with them avoided losses in the 2022 FTX collapse.

  9. AI-driven social botnets can control large numbers of accounts at once; deepfaked Musk videos have promoted fake giveaways, and one AI-assisted romance scam ring stole $46 million.

I. What are AI bots?

AI bots are self-learning software programs that automate and continuously refine cyberattacks, making them far more dangerous than traditional hacking techniques.

The core of AI-driven cybercrime today lies in AI bots - these self-learning software programs are designed to process massive data, make independent decisions, and execute complex tasks without human intervention. Although these bots have become a disruptive force in industries like finance, healthcare, and customer service, they have also become weapons for cybercriminals, especially in the cryptocurrency field.

Unlike traditional hacking methods that rely on manual operation and technical expertise, AI bots can fully automate attacks, adapt to new crypto security measures, and even optimize strategies over time. This makes them far superior to human hackers who are limited by time, resources, and error-prone processes.

II. Why are AI bots so dangerous?

The biggest threat of AI-driven cybercrime lies in its scale. A single hacker's ability to infiltrate an exchange or trick users into revealing their private keys is limited, but AI bots can launch thousands of attacks simultaneously and optimize their tactics in real-time.

  • Speed: AI bots can scan millions of blockchain transactions, smart contracts, and websites within minutes, identifying vulnerable wallets, DeFi protocols, and exchanges.

  • Scalability: While human scammers may send hundreds of phishing emails, AI bots can send personalized, well-designed phishing emails to millions of people in the same timeframe.

  • Adaptability: Machine learning allows these bots to evolve from each failure, making them harder to detect and intercept.

This automation, adaptability, and large-scale attack capability have led to a surge in AI-driven crypto fraud, making the prevention of crypto scams more critical than ever.

In October 2024, the X account of Andy Ayrey, the developer of the AI bot Truth Terminal, was hacked. The attackers used the account to promote a fraudulent meme coin called Infinite Backrooms (IB), causing IB's market cap to skyrocket to $25 million. Within 45 minutes, the criminals sold their holdings, making a profit of over $600,000.

III. How do AI bots steal crypto assets?

AI bots not only automate fraud; they are also becoming more intelligent, precise, and stealthy. These are the most dangerous AI scams currently used to steal crypto assets:

AI-driven phishing bots

Traditional phishing attacks are not new in the crypto space, but AI has multiplied their threat. Today's AI bots can create messages that closely resemble official communications from platforms like Coinbase or MetaMask, using data harvested from database leaks, social media, and even blockchain records to make the scams highly convincing.

For example, in early 2024, an AI phishing attack targeting Coinbase users tricked them into handing over nearly $65 million through fake security alert emails. And after the release of GPT-4, scammers built fake OpenAI token airdrop websites that automatically drained the wallets of users who connected to them.

These AI-enhanced phishing attacks often contain no spelling errors or clumsy wording, and some even deploy AI chatbot "support assistants" that, under the guise of verifying accounts, steal private keys or 2FA codes. In 2022, the Mars Stealer malware could steal private keys from over 40 wallet browser extensions and 2FA apps, often spreading through phishing links or pirated software.
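
Defenders can automate checks of their own. The sketch below (in Python; the allowlist and similarity threshold are illustrative assumptions, not a vetted product) flags links whose domains merely resemble official ones, the kind of lookalike these phishing campaigns rely on:

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allowlist for illustration; a real checker would rely on a
# maintained domain list plus protections against internationalized lookalikes.
OFFICIAL_DOMAINS = {"coinbase.com", "metamask.io"}

def check_link(url: str) -> str:
    """Classify a link as official, a probable lookalike, or unknown."""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    if host in OFFICIAL_DOMAINS:
        return "official"
    for real in OFFICIAL_DOMAINS:
        brand = real.rsplit(".", 1)[0]           # e.g. "coinbase"
        similar = SequenceMatcher(None, host, real).ratio() > 0.8
        if brand in host or similar:             # "c0inbase.com", "coinbase-help.io"
            return f"lookalike of {real}: likely phishing"
    return "unknown domain: verify manually"

print(check_link("https://www.coinbase.com/login"))       # official
print(check_link("https://c0inbase.com/security-alert"))  # lookalike of coinbase.com
```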

AI vulnerability scanning bots

Smart contract vulnerabilities are a goldmine for hackers, and AI bots are exploiting them at an unprecedented pace. These bots constantly scan platforms like Ethereum or the BNB Smart Chain, looking for vulnerabilities in newly deployed DeFi projects. Once a problem is detected, they automatically exploit it, often within minutes.

Researchers have already demonstrated that AI chatbots (such as those powered by GPT-3) can analyze smart contract code to identify exploitable weaknesses. For example, Zellic co-founder Stephen Tong showed an AI chatbot that detected a vulnerability in a smart contract's "withdrawal" function, similar to the one exploited in the $80 million Fei Protocol attack.
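
To make the scanning idea concrete, the toy Python heuristic below flags the classic ordering mistake behind many reentrancy exploits: an external call made before the contract updates its own balances. It is a deliberately simplistic text-pattern sketch; real analyzers (and the AI models described above) reason about far more than source text.

```python
import re

# A vulnerable withdrawal pattern: the external call happens before the
# contract's bookkeeping is updated, letting a malicious recipient re-enter.
SOLIDITY_SNIPPET = """
function withdraw(uint amount) public {
    require(balances[msg.sender] >= amount);
    (bool ok, ) = msg.sender.call{value: amount}("");   // external call first
    require(ok);
    balances[msg.sender] -= amount;                     // state updated last
}
"""

def flag_reentrancy(source: str) -> bool:
    call = re.search(r"\.call\{value:", source)
    update = re.search(r"balances\[[^\]]+\]\s*-=", source)
    # Suspicious if the external call appears before the balance update.
    return bool(call and update and call.start() < update.start())

if flag_reentrancy(SOLIDITY_SNIPPET):
    print("possible reentrancy: external call before state update")
```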

AI-enhanced brute-force attacks

Brute-force attacks once took a long time, but AI bots have made them frighteningly efficient. By analyzing past password leaks, these bots quickly learn the patterns people actually use, cracking passwords and seed phrases far faster than before. A 2024 study of desktop cryptocurrency wallets (including Sparrow, Etherwall, and Bither) found that weak passwords sharply lower resistance to brute-force attacks, underscoring the importance of strong, complex passwords for protecting digital assets.
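
The arithmetic behind this is simple, as the sketch below shows. The guess rate is an assumed figure for a well-equipped cracking rig, and AI pattern models do even better than raw enumeration by trying the likeliest candidates first:

```python
GUESSES_PER_SECOND = 1e10  # assumed rate for a well-equipped GPU cracking rig

def crack_time_seconds(charset_size: int, length: int) -> float:
    """Worst-case seconds to exhaust every password of this charset and length."""
    return charset_size ** length / GUESSES_PER_SECOND

weak = crack_time_seconds(26, 8)     # 8 lowercase letters
strong = crack_time_seconds(94, 16)  # 16 characters from the full keyboard
print(f"weak password:   ~{weak:,.0f} seconds")          # about 21 seconds
print(f"strong password: ~{strong / 3.15e7:.1e} years")  # astronomically long
```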

Deepfake Impersonation Bots

Imagine seeing a video of a trusted crypto influencer or CEO asking you to invest - but it's completely fake. This is the reality of AI-driven deepfake scams. These bots create hyper-realistic videos and audio, even tricking savvy crypto holders into transferring funds.

Social Media Bot Networks

On platforms like X and Telegram, hordes of AI bots are mass-spreading crypto scams. Botnets like "Fox 8" use ChatGPT to generate hundreds of persuasive posts, aggressively promoting scam tokens and responding to users in real-time.

In one case, scammers abused Elon Musk's name and a ChatGPT-generated deepfake video to promote a fake crypto giveaway, tricking people into sending funds to the scammers.

In 2023, Sophos researchers found crypto romance scammers using ChatGPT to chat with multiple victims simultaneously, making their passionate messages more convincing and scalable.

Similarly, Meta reported a sharp rise in malware and phishing links masquerading as ChatGPT or AI tools, often tied to crypto fraud schemes. In romance scams, AI is driving so-called pig butchering: long-term cons in which scammers cultivate a relationship, then lure the victim into fake crypto investments. In 2024, a high-profile case in Hong Kong saw police bust a crime ring that used AI-assisted romance scams to steal $46 million from men across Asia.

IV. How AI Malware Fuels Cybercrime Against Crypto Users

AI is teaching cybercriminals how to infiltrate crypto platforms, enabling even less technically skilled attackers to mount convincing attacks. This helps explain the massive scale of crypto phishing and malware activity: AI tools let bad actors automate scams and continuously refine whatever works.

AI also enhances malware threats and hacking strategies targeting crypto users. A concerning issue is AI-generated malware, where malicious programs use AI to adapt and evade detection.

In 2023, researchers demonstrated a proof-of-concept called BlackMamba, a polymorphic keylogger that uses AI language models (like the tech behind ChatGPT) to rewrite its code on each execution. This means BlackMamba generates a new variant in memory every run, helping it evade antivirus and endpoint security tools.

In testing, leading endpoint detection and response solutions failed to detect this AI-crafted malware. Once activated, it can secretly capture all user input (including crypto exchange passwords or wallet seed phrases) and send the data to the attacker.

While BlackMamba is just a lab demo, it highlights a real threat: Criminals can leverage AI to create shape-shifting malware targeting crypto accounts, harder to catch than traditional viruses.
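
The evasion mechanism is easy to demonstrate from the defender's side: signature-based scanners match known byte patterns, so a program whose bytes change on every run never matches. A minimal, benign illustration:

```python
import hashlib

# Two scripts that behave identically but differ in trivial details, the way
# a polymorphic program's variants do. A hash-based (signature) detector
# treats them as entirely unrelated files.
variant_a = b"x = 1\nprint(x)\n"
variant_b = b"y = 1  # renamed\nprint(y)\n"

print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
# Different digests: a signature written for variant A never matches B,
# which is why malware that rewrites itself each run slips past such scanners.
```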

Even without exotic AI malware, threat actors abuse AI's popularity to spread classic Trojans. Scammers often set up fake "ChatGPT" or AI-related apps containing malware, knowing users may let their guard down for an AI brand. Security analysts have observed fraud sites impersonating the ChatGPT website, with a "Windows Download" button that silently installs a crypto-stealing Trojan on victims' machines.

Beyond the malware itself, AI also lowers the technical bar for hackers. Previously, criminals needed coding skills to craft phishing pages or viruses. Now, underground "AI-as-a-Service" tools can do most of the work.

Illegal AI chatbots like WormGPT and FraudGPT have emerged on dark web forums, generating phishing emails, malware code, and hacking techniques on demand. For a fee, even non-technical criminals can use these AI bots to create convincing scam sites, generate new malware variants, and scan for software vulnerabilities.

V. How to Protect Your Crypto from AI Bots

AI-driven threats are becoming increasingly sophisticated, so robust security measures are crucial to safeguarding digital assets from automated scams and hacks.

Here are the most effective ways to protect your crypto from hackers and defend against AI-powered phishing, deepfake fraud, and vulnerability-exploiting bots:

  • Use a Hardware Wallet: AI-driven malware and phishing attacks primarily target online (hot) wallets. By using a Ledger or Trezor hardware wallet, you can keep your private keys completely offline, making it virtually impossible for hackers or malicious AI bots to remotely access them. For example, during the 2022 FTX collapse, users with hardware wallets avoided the massive losses suffered by those who had funds stored on the exchange.

  • Enable Multi-Factor Authentication (MFA) and Strong Passwords: AI bots can crack weak passwords using machine learning models trained on leaked credential data to predict and exploit vulnerable choices. To counter this, always enable MFA through authenticator apps such as Google Authenticator or Authy rather than SMS-based codes, which are vulnerable to SIM swap attacks (a minimal sketch of how app-based one-time codes work follows this list).

  • Beware of AI-Powered Phishing Scams: AI-generated phishing emails, messages, and fake support requests are nearly indistinguishable from genuine requests. Avoid clicking links in emails or direct messages, always manually verify website URLs, and never share private keys or seed phrases, no matter how convincing the request may seem.

  • Carefully Verify Identities to Avoid Deepfake Fraud: AI-driven deepfake videos and audio can convincingly impersonate crypto influencers, executives, or even people you know. If someone is asking for funds or promoting an urgent investment opportunity via video or audio, verify their identity through multiple channels before taking action.

  • Stay Informed on the Latest Blockchain Security Threats: Regularly consult trusted blockchain security sources, such as Chainalysis or SlowMist, to stay up-to-date on emerging threats.
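
Following up on the MFA point above, the sketch below shows how app-based one-time codes work under the hood, using the third-party pyotp package (the account name and issuer are placeholders). Because the shared secret lives only on your device and the server, no code ever travels over the phone network, which is what defeats SIM swap attacks:

```python
# Minimal TOTP sketch using the third-party pyotp package (pip install pyotp).
# This is the same time-based one-time-password scheme (RFC 6238) that apps
# like Google Authenticator and Authy implement.
import pyotp

secret = pyotp.random_base32()         # shared once between server and app
totp = pyotp.TOTP(secret)

code = totp.now()                      # 6-digit code that rotates every 30 s
print("current code:", code)
print("verifies:", totp.verify(code))  # True within the validity window

# The otpauth:// URI below is what the enrollment QR code encodes.
print(totp.provisioning_uri(name="alice@example.com", issuer_name="DemoExchange"))
```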
