Critical flaw in core AI agent technology… Langchain issues 'LangGrinch' alert

This article is machine translated
Show original

A critical security vulnerability has been discovered in "langchain-core," a library crucial for operating AI agents. This vulnerability, dubbed "LangGrinch," allows attackers to steal sensitive information from AI systems. This vulnerability is a wake-up call for the industry, as it could potentially undermine the security foundations of numerous AI applications in the long term.

AI security startup Cyata Security disclosed this vulnerability as CVE-2025-68664 and gave it a severity rating of 9.3 on the Comprehensive Vulnerability Scoring Standard (CVSS). The core of the problem is that internal helper functions included in LangChain Core can misidentify user input as a trusted object during the serialization and deserialization process. By utilizing the "prompt injection" technique, an attacker can insert an internal marker key into the structured output generated by the agent, manipulating it so that it is subsequently processed as a trusted object.

Langchain Core serves as a core component of numerous AI agent frameworks and has been downloaded tens of millions of times in the past 30 days, totaling over 847 million downloads overall. Considering the applications connected to the entire Langchain ecosystem, experts predict that the impact of this vulnerability will be far-reaching.

"What's unusual about this vulnerability is that it's not a simple deserialization issue, but rather occurs in the serialization path itself," explained Yarden Porat, a security researcher at Cyata. "The process of storing, streaming, and later restoring structured data generated by AI prompts exposes a new attack surface." In fact, Cyata revealed that it has identified 12 distinct attack vectors that can lead to various scenarios from a single prompt.

Once the attack is triggered, all environment variables can be leaked via remote HTTP requests, including sensitive information such as cloud credentials, database access URLs, VectorDB information, and LLM API keys. This vulnerability is particularly serious because it is a structural flaw that occurs solely within Langchain Core itself, without any third-party tools or external integrations. Cyata expressed its alarm, calling it a "persistent threat to the ecosystem's plumbing layer."

The security patches addressing the current issue have been released in Langchain Core versions 1.2.5 and 0.3.81. Saiata notified Langchain management of this issue before making it public, and the team reportedly took immediate action as well as measures to strengthen security in the long term.

“As AI systems are deployed in industrial settings, the key security issue is now not the code execution itself, but the ultimate authority the system will have,” said Shahar Tal, co-founder and CEO of Cyata. “In an agent ID-based architecture, reducing access rights and minimizing the impact radius are essential design elements,” he emphasized.

This incident is expected to serve as an opportunity to reexamine the fundamentals of security design in the AI industry, which is increasingly centered on agent-based automation rather than human intervention.

Get real-time news... Go to TokenPost Telegram

Copyright © TokenPost. Unauthorized reproduction and redistribution prohibited.

#AISecurity #CyataSecurity #LangchainCore

Source
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.
Like
58
Add to Favorites
18
Comments