I never expected that AI stole my wallet

avatar
ODAILY
10 hours ago
This article is machine translated
Show original

Original | Odaily Planet Daily (@OdailyChina)

Author | Azuma (@azuma_eth)

On the morning of November 22nd, Beijing time, Slowmist founder Yu Xuan posted an unusual case on his personal X - a user's wallet was "hacked" by AI...

The details of this case are as follows.

In the early hours of today, X user r_ocky.eth revealed that he had previously hoped to use ChatGPT to conveniently deploy a pump.fun auxiliary trading bot.

r_ocky.eth provided his requirements to ChatGPT, and ChatGPT returned a code segment to him, which could indeed help r_ocky.eth deploy a bot that met his needs, but he never imagined that the code would hide a phishing content - r_ocky.eth linked his main wallet and lost $2,500 as a result.

From the screenshot posted by r_ocky.eth, the code segment provided by ChatGPT will send the address private key to a phishing-like API website, which is the direct cause of the theft.

After r_ocky.eth fell into the trap, the attacker reacted extremely quickly, transferring all the assets in r_ocky.eth's wallet to another address (FdiBGKS8noGHY2fppnDgcgCQts95Ww8HSLUvWbzv1NhX) within half an hour, and then r_ocky.eth traced on-chain and found the suspected main wallet address of the attacker (2jwP4cuugAAYiGMjVuqvwaRS2Axe6H6GvXv3PxMPQNeC).

On-chain information shows that this address has currently accumulated over $100,000 in "stolen goods", so r_ocky.eth suspects that this type of attack may not be an isolated case, but rather a large-scale attack incident.

Afterwards, r_ocky.eth expressed disappointment and said he had lost trust in OpenAI (the company that developed ChatGPT), and called on OpenAI to quickly clean up the abnormal phishing content.

So why would ChatGPT, the most popular AI application at the moment, provide phishing content?

Yu Xuan characterized the root cause of this incident as an "AI poisoning attack", and pointed out that there is a widespread deceptive behavior in ChatGPT, Claude and other LLMs.

The so-called "AI poisoning attack" refers to the deliberate disruption of AI training data or manipulation of AI algorithms. The attackers could be insiders, such as disgruntled current or former employees, or external hackers, and their motives may include causing reputational and brand damage, tampering with the credibility of AI decisions, slowing down or disrupting AI processes, etc. Attackers can introduce data with misleading labels or features to distort the model's learning process, leading to erroneous results when the model is deployed and running.

Looking at this incident, the reason why ChatGPT provided r_ocky.eth with phishing code is most likely because the AI model was contaminated with data containing phishing content during its training, but the AI apparently failed to identify the phishing content hidden in the regular data, and then provided these phishing contents to the user, causing this incident.

As AI develops and becomes more widely adopted, the threat of "poisoning attacks" is becoming increasingly serious. In this incident, although the absolute amount of loss is not large, the extended impact of such risks is enough to raise alarms - suppose it happened in other areas, such as AI-assisted driving...

In responding to netizens' questions, Yu Xuan mentioned a potential measure to mitigate such risks, which is for ChatGPT to add some kind of code review mechanism.

The victim r_ocky.eth also said he had contacted OpenAI about this incident, and although he has not yet received a response, he hopes this case will become an opportunity for OpenAI to pay attention to such risks and propose potential solutions.

Source
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.
Like
2
Add to Favorites
2
Comments