"History shows that everything that can be automated will be automated. And certainly, some business and everyday life processes will be replaced by AI agents." - Andrey Grachev, Managing Partner of DWF LabsHowever, as AI development accelerates, the potential for misuse is becoming an increasing concern. Misuse of AI Agents In Web3, AI agents are already rapidly becoming mainstream. They provide a range of functions, from market analysis to autonomous cryptocurrency trading. However, their growing integration presents critical challenges. Misuse of AI by malicious actors is a major concern, from simple phishing campaigns to sophisticated ransomware attacks. The widespread availability of generative AI since late 2022 has fundamentally changed content creation, and attracted malicious actors seeking to abuse the technology. This democratization of computing power has enhanced the capabilities of adversaries and potentially lowered the barriers for less sophisticated threat actors. According to an Entrust report, document forgery facilitated by AI tools has surpassed physical forgery, increasing by 244% year-over-year in 2024. Meanwhile, deepfakes accounted for 40% of all biometric fraud.
"It's already being used for fraud. In video calls to misrepresent people and misrepresent their voices." - Andrey Grachev, Managing Partner of DWF LabsSuch misuse cases have already made headlines. Earlier this month, an employee of a multinational corporation in Hong Kong was tricked by a fraudster using deepfake technology into approving a $25 million transfer. The employee believed they were video-calling with colleagues, including the company's Chief Financial Officer. Despite initial hesitation, the employee proceeded with the payment as the other participants appeared and sounded genuine, only to later discover all participants were deepfake manipulations. From Early Adoption to Mainstream Acceptance Grachev believes such malicious use is inevitable. He noted that technological development often comes with initial flaws, and that these errors decrease as the technology matures, citing the early stages of the World Wide Web and Bitcoin as examples.
"We have to remember that the internet started with porn sites. It was like the first Bitcoin, which started with drug dealers, and then it improved." - Andrey Grachev, Managing Partner of DWF LabsSeveral reports agree with Grachev's view. The adult entertainment industry is suggested to have played a crucial role in the early adoption and development of the internet. Beyond providing a consumer base, it pioneered technologies such as VCR, video streaming, virtual reality, and all forms of communication. Porn acted as an onboarding tool. The adult entertainment industry has historically driven consumer adoption of new technologies. Particularly when successfully meeting the audience's demands, early adoption and application of innovations often lead to wider mainstream acceptance.
"It started as fun, but the fun onboarded a lot of people. Then you can build something on top of that audience." - Andrey Grachev, Managing Partner of DWF LabsOver time, safeguards have been put in place to limit the frequency and accessibility of adult entertainment. Nevertheless, it remains one of the many services the internet provides today. Bitcoin, From the Dark Net to Innovation The evolution of Bitcoin closely mirrors the early use cases of the internet. Bitcoin's initial adoption was heavily associated with the dark net market and illicit activities, such as drug trafficking, fraud, and money laundering. Bitcoin's anonymity and the ease of global fund transfers were attractive to criminals. Despite its continued use in criminal activities, Bitcoin has found numerous legitimate applications. The blockchain technology underpinning cryptocurrencies provides solutions to real-world problems and disrupts traditional financial systems.
The cryptocurrency and blockchain industry is still in its very early stages, but its applications will continue to evolve. According to Grachev, the same will happen with the gradual adoption of AI technology. He says we must learn from mistakes and adjust accordingly.
"We must always remember that fraud occurs first, and then people start thinking about how to prevent it. Of course, it will happen, but this is a normal process and a learning curve," Grachev said.
However, knowing that such situations may occur in the future raises questions about who should be held responsible.
Responsibility Issues
Determining responsibility when an AI agent's actions cause harm is a complex legal and ethical issue. Inevitably, questions arise about how to hold AI accountable.
The complexity of AI systems makes it difficult to determine who is responsible for harm. The "black box" nature of AI, its unpredictable behavior, and its capacity for continuous learning all make it challenging to apply traditional notions of fault when problems arise.

Additionally, the involvement of multiple parties in the development and deployment of AI complicates the assessment of liability, making it difficult to pinpoint who is accountable when an AI system fails.
Responsibility could lie with the manufacturer for design or production defects, the software developer for code issues, or the user for not following instructions, not installing updates, or not maintaining security.
"I think all of this is too new, and we need to be able to learn from it. If necessary, we should be able to stop some AI agents. But from my perspective, if there was no bad intent in trying to create it, then no one is responsible. Because you are really dealing with something new," Grachev told BeInCrypto.
However, he says such situations must be handled carefully to avoid stifling ongoing innovation.
"If we blame this entrepreneur, people will be afraid, and innovation will die. But if it's not working in the right way, it will eventually work. We need to have a way to stop it, learn, improve, and learn again," Grachev added.
That boundary is thin, however, especially in more extreme scenarios.
Responsible AI Adoption: Addressing Trust Issues
When discussing the future of artificial intelligence, a common fear is the scenario where AI agents become more powerful than humans.
"There are a lot of movies about that. If it's about the police or government control, or the military in war, automation is a big fear. Some things can be automated to the point where they can harm humans," Grachev said.
When asked if such scenarios could occur, Grachev said it is theoretically possible. Nevertheless, he acknowledged that he cannot predict what will happen in the future.
However, these scenarios reflect a fundamental trust issue between humans and artificial intelligence. Grachev says the best way to address it is to expose people to use cases where AI can genuinely be helpful.
"AI can be hard for people to believe. So we have to start with simple things. Trust is not formed when you explain that the AI agent is trustworthy. People need to get used to using it. For example, if we're talking about cryptocurrency, we could launch a meme on Pump.fun, but why not launch it with a voice message? If you send a voice message to the AI agent saying 'Please launch this and that,' it will be launched. And the next step is to trust the agent with more important decisions," he said.
Ultimately, the journey towards widespread AI adoption will certainly be marked by remarkable progress and unexpected challenges.
Striking a balance between innovation and responsible implementation in this evolving field will be crucial for shaping a future where AI benefits humanity as a whole.