I never expected that AI stole my wallet

This article is machine translated
Show original

Author: Azuma, Odaily Planet Daily

On the morning of November 22, Beijing time, Yuxin, the founder of Slow Fog, posted an unusual case on his personal X - a user's wallet was "hacked" by AI...

Never thought AI would steal my wallet

The details of this case are as follows.

In the early hours of today, X user r_ocky.eth revealed that he had previously hoped to use ChatGPT to conveniently deploy a pump.fun auxiliary trading bot.

r_ocky.eth provided his requirements to ChatGPT, and ChatGPT returned a code segment to him, which could indeed help r_ocky.eth deploy a bot that met his requirements, but he never expected that the code would hide a phishing content - r_ocky.eth linked his main wallet and lost $2,500 as a result.

Never thought AI would steal my wallet

From the screenshots posted by r_ocky.eth, the code segment provided by ChatGPT will send the private key to a phishing API website, which is the direct cause of the theft.

In the trap that r_ocky.eth fell into, the attacker reacted extremely quickly, and within half an hour, all the assets in r_ocky.eth's wallet were transferred to another address (FdiBGKS8noGHY2fppnDgcgCQts95Ww8HSLUvWbzv1NhX), and then r_ocky.eth traced on-chain and found the address (2jwP4cuugAAYiGMjVuqvwaRS2Axe6H6GvXv3PxMPQNeC) that was suspected to be the attacker's main wallet.

Never thought AI would steal my wallet

On-chain information shows that this address has currently accumulated more than $100,000 in "stolen goods", so r_ocky.eth suspects that this type of attack may not be an isolated case, but rather a large-scale attack incident.

Afterwards, r_ocky.eth expressed disappointment and said he had lost trust in OpenAI (the company that developed ChatGPT), and called on OpenAI to quickly clean up the abnormal phishing content.

So why would ChatGPT, the most popular AI application at the moment, provide phishing content?

In this regard, Yuxin characterized the root cause of this incident as an "AI poisoning attack", and pointed out that there are widespread deceptive behaviors in ChatGPT, Claude and other LLMs.

The so-called "AI poisoning attack" refers to the deliberate act of damaging AI training data or manipulating AI algorithms. The attackers may be insiders, such as disgruntled current or former employees, or external hackers, and their motives may include causing reputational and brand damage, tampering with the credibility of AI decisions, slowing down or disrupting AI processes, etc. Attackers can achieve this by injecting data with misleading labels or features, distorting the model's learning process, and causing the model to produce erroneous results when deployed and running.

Looking at this incident, the reason why ChatGPT provided r_ocky.eth with phishing code is most likely because the AI model was contaminated with data containing phishing content during training, but the AI apparently failed to identify the phishing content hidden in the regular data, and then provided these phishing contents to the user, causing this incident.

As AI develops and becomes more widely adopted, the threat of "poisoning attacks" is becoming increasingly serious. In this incident, although the absolute amount of loss is not large, the extended impact of such risks is enough to raise alarms - suppose it happened in other areas, such as AI-assisted driving...

Never thought AI would steal my wallet

In responding to netizens' questions, Yuxin mentioned a potential measure to mitigate such risks, which is for ChatGPT to add some kind of code review mechanism.

The victim r_ocky.eth also said he had contacted OpenAI about this matter, and although he had not yet received a response, he hoped this incident could be an opportunity for OpenAI to pay attention to such risks and propose potential solutions.

Source
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.
Like
Add to Favorites
Comments