Critical LangChain Core Vulnerability Exposes Secrets Via...

A critical security flaw has been disclosed in LangChain Core that could be exploited by an attacker to steal sensitive secrets and even influence large language model (LLM) responses through prompt injection.

LangChain Core (i.e., langchain-core) is the foundational Python package of the LangChain ecosystem, providing the base interfaces and model-agnostic abstractions for building applications powered by LLMs.

The vulnerability, tracked as CVE-2025-68664, carries a CVSS score of 9.3 out of 10.0. Security researcher Yarden Porat has been credited with reporting the vulnerability on December 4, 2025. It has been codenamed LangGrinch.

"A serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions," the project maintainers said in an advisory. "The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries."

"The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data."

According to Cyata researcher Porat, the crux of the problem is that the two functions fail to escape user-controlled dictionaries containing "lc" keys, the marker the framework uses to represent LangChain objects in its internal serialization format.

"So once an attacker is able to make a LangChain orchestration loop serialize and later deserialize content including an 'lc' key, they would instantiate an unsafe arbitrary object, potentially triggering many attacker-friendly paths," Porat said.

This could have various outcomes: extraction of secrets from environment variables when deserialization is performed with "secrets_from_env=True" (previously the default), instantiation of classes within pre-approved trusted namespaces such as langchain_core, langchain, and langchain_community, and potentially even arbitrary code execution via Jinja2 templates.
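As a concrete illustration of the secrets path, the following hedged sketch shows an injected "secret" node being resolved from the environment. The node shape and the secrets_from_env keyword are assumptions based on langchain-core's serialization format, not the researcher's exploit.

```python
import os

from langchain_core.load import load

os.environ["OPENAI_API_KEY"] = "sk-demo-not-a-real-key"  # stand-in secret

# If attacker-controlled data containing a "secret" node ever reaches
# load()/loads() while secrets_from_env=True (previously the default), the
# marker is resolved from the process environment instead of being treated
# as plain user data.
injected = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}
leaked = load(injected, secrets_from_env=True)
print(leaked)  # on vulnerable configurations this prints the env value
```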

What's more, the escaping bug allows a prompt injection to smuggle LangChain object structures into user-controlled fields such as metadata, additional_kwargs, or response_metadata.

The patch released by LangChain introduces new restrictive defaults in load() and loads() by means of an "allowed_objects" allowlist parameter that lets users specify which classes can be serialized and deserialized. In addition, Jinja2 templates are blocked by default, and the "secrets_from_env" option is no longer enabled by default.
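A hedged sketch of what calling the hardened API might look like: the allowed_objects name comes from the advisory, but the exact signature, accepted values, and defaults are assumptions that may differ across langchain-core releases.

```python
from langchain_core.load import dumps, loads
from langchain_core.messages import HumanMessage

serialized = dumps(HumanMessage(content="hi"))

# Only explicitly allowlisted classes may be revived on deserialization;
# anything outside the allowlist is rejected. Jinja2 templates are blocked
# by default, and secrets are no longer read from the environment unless
# explicitly requested.
msg = loads(serialized, allowed_objects=[HumanMessage])
print(type(msg))
```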

Source: The Hacker News