Beware of "Prompt-to-Dog Attacks" When Using AI Tools

This article is machine translated
Show original

Slow Mist founder warns of automation abuse.

Beware of "Prompt-to-Dog Attacks" When Using AI Tools
Yu Xian, founder of blockchain security firm SlowMist, has issued an urgent warning about a new security threat targeting users of AI tools.

Weixian said today through the X channel that users should be especially careful about "prompt poisoning attacks" contained in files such as agents.md, skills.md, and MCP (Model Context Protocol) when using AI tools. He urged users to be vigilant, saying, "Related attack cases have already occurred."

Of particular concern is the AI tool's "Dangerous Mode" feature. When activated, this mode allows the AI to automatically control the computer without user confirmation, potentially leading to serious security breaches if malicious prompts are injected.

On the other hand, disabling risk mode requires user confirmation for every operation, which increases security but creates a dilemma where work efficiency is greatly reduced.

While the rapid advancement of AI technology is increasing the convenience of automated functions, this warning also highlights the growing number of security vulnerabilities. Experts recommend avoiding configuration files or plugins from unknown sources when using AI tools, and always performing manual verification procedures when performing critical tasks.

Food, beverage, and alcohol-related... AI technicians, agents md/skills md/mcp等里的提示词投毒攻击,已经有在野案例了👀 https://t.co/wLDvbk3gGy pic.twitter.com/LWiPVOZhse

— Cos(余弦)😶‍🌫️ (@evilcos)December 29, 2025


Joohoon Choi joohoon@blockstreet.co.kr

Source
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.
Like
58
Add to Favorites
18
Comments