chore(deps): update dependency langchain-core to v1.2.28 [security] #5417

Merged
davidzhao merged 1 commit into main from renovate/pypi-langchain-core-vulnerability
Apr 11, 2026

Conversation

@renovate renovate bot (Contributor) commented Apr 11, 2026

This PR contains the following updates:

Package: langchain-core (changelog)
Change: 1.2.19 -> 1.2.28

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

GitHub Vulnerability Alerts

CVE-2026-34070

Summary

Multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples).

Note: The affected functions (load_prompt, load_prompt_from_config, and the .save() method on prompt classes) are undocumented legacy APIs. They are superseded by the dumpd/dumps/load/loads serialization APIs in langchain_core.load, which do not perform filesystem reads and use an allowlist-based security model. As part of this fix, the legacy APIs have been formally deprecated and will be removed in 2.0.0.

Affected component

Package: langchain-core
File: langchain_core/prompts/loading.py
Affected functions: _load_template(), _load_examples(), _load_few_shot_prompt()

Severity

High

The score reflects the file-extension constraints that limit which files can be read.

Vulnerable code paths

Config key                                 Loaded by                Readable extensions
template_path, suffix_path, prefix_path    _load_template()         .txt
examples (when a string)                   _load_examples()         .json, .yaml, .yml
example_prompt_path                        _load_few_shot_prompt()  .json, .yaml, .yml

None of these code paths validated the supplied path against absolute path injection or .. traversal sequences before reading from disk.
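As a sketch of the vulnerable shape (hypothetical code, simplified from the description above, not the actual langchain-core source), a loader that gates only on file extension lets absolute and traversal paths straight through to the filesystem:

```python
from pathlib import Path

def load_template_unsafe(template_path: str) -> str:
    # VULNERABLE sketch: only the file extension is checked, so absolute
    # paths and ".." traversal sequences reach read_text() unmodified.
    path = Path(template_path)
    if path.suffix != ".txt":
        raise ValueError("template path must end in .txt")
    return path.read_text()
```

The extension check constrains which files are readable, but it does nothing to constrain where on the filesystem they live.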

Impact

An attacker who controls or influences the prompt configuration dict can read files outside the intended directory:

  • .txt files: cloud-mounted secrets (/mnt/secrets/api_key.txt), requirements.txt, internal system prompts
  • .json/.yaml files: cloud credentials (~/.docker/config.json, ~/.azure/accessTokens.json), Kubernetes manifests, CI/CD configs, application settings

This is exploitable in applications that accept prompt configs from untrusted sources, including low-code AI builders and API wrappers that expose load_prompt_from_config().

Proof of concept

from langchain_core.prompts.loading import load_prompt_from_config

# Reads /tmp/secret.txt via absolute path injection
config = {
    "_type": "prompt",
    "template_path": "/tmp/secret.txt",
    "input_variables": [],
}
prompt = load_prompt_from_config(config)
print(prompt.template)  # file contents disclosed

# Reads ../../etc/secret.txt via directory traversal
config = {
    "_type": "prompt",
    "template_path": "../../etc/secret.txt",
    "input_variables": [],
}
prompt = load_prompt_from_config(config)

# Reads arbitrary .json via few-shot examples
config = {
    "_type": "few_shot",
    "examples": "../../../../.docker/config.json",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "{input}: {output}",
    },
    "prefix": "",
    "suffix": "{query}",
    "input_variables": ["query"],
}
prompt = load_prompt_from_config(config)

Mitigation

Update langchain-core to >= 1.2.22.

The fix adds path validation that rejects absolute paths and .. traversal sequences by default. An allow_dangerous_paths=True keyword argument is available on load_prompt() and load_prompt_from_config() for trusted inputs.
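A minimal sketch of the kind of check the fix describes, written as a standalone helper (the name and exact rules here are illustrative; langchain-core's actual implementation may differ):

```python
from pathlib import PurePosixPath

def check_prompt_path(path: str, *, allow_dangerous_paths: bool = False) -> None:
    # Illustrative sketch: reject absolute paths and ".." traversal
    # segments unless the caller explicitly opts out for trusted input.
    if allow_dangerous_paths:
        return
    p = PurePosixPath(path)
    if p.is_absolute() or ".." in p.parts:
        raise ValueError(
            f"path {path!r} is absolute or contains '..' traversal; "
            "pass allow_dangerous_paths=True only for trusted input"
        )
```

With this shape, `check_prompt_path("prompts/a.txt")` passes, while both PoC paths above (`/tmp/secret.txt` and `../../etc/secret.txt`) are rejected.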

As described above, these legacy APIs have been formally deprecated. Users should migrate to dumpd/dumps/load/loads from langchain_core.load.

Credit

CVE-2026-40087

LangChain's f-string prompt-template validation was incomplete in two respects.

First, some prompt template classes accepted f-string templates and formatted them without enforcing the same attribute-access validation as PromptTemplate. In particular, DictPromptTemplate and ImagePromptTemplate could accept templates containing attribute access or indexing expressions and subsequently evaluate those expressions during formatting.

Examples of the affected shape include:

"{message.additional_kwargs[secret]}"
"https://example.com/{image.__class__.__name__}.png"

Second, f-string validation based on parsed top-level field names did not reject nested replacement fields inside format specifiers. For example:

"{name:{name.__class__.__name__}}"

In this pattern, the nested replacement field appears in the format specifier rather than in the top-level field name. As a result, earlier validation based on parsed field names did not reject the template even though Python formatting would still attempt to resolve the nested expression at runtime.
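This can be seen with the standard library's string.Formatter, which parses a template into (literal, field_name, format_spec, conversion) tuples:

```python
from string import Formatter

tmpl = "{name:{name.__class__.__name__}}"

# The only top-level field name reported is plain "name"; the
# attribute-access expression hides inside the format specifier.
fields = [(name, spec) for _, name, spec, _ in Formatter().parse(tmpl)
          if name is not None]

# At format time Python still resolves the nested field: the spec
# becomes "str", which is not a valid format specifier for a string,
# so in this simple case the result is a ValueError rather than
# direct disclosure.
try:
    tmpl.format(name="x")
except ValueError:
    pass
```

Validation that inspects only the parsed top-level field names would see `"name"` and accept the template.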

Affected usage

This issue is only relevant for applications that accept untrusted template strings, rather than only untrusted template variable values.

In addition, practical impact depends on what objects are passed into template formatting:

  • If applications only format simple values such as strings and numbers, impact is limited and may only result in formatting errors.
  • If applications format richer Python objects, attribute access and indexing may interact with internal object state during formatting.

In many deployments, these conditions are not commonly present together. Applications that allow end users to author arbitrary templates often expose only a narrow set of simple template variables, while applications that work with richer internal Python objects often keep template structure under developer control. As a result, the highest-impact scenario is plausible but is not representative of all LangChain applications.

Applications that use hardcoded templates or that only allow users to provide variable values are not affected by this issue.

Impact

The direct issue in DictPromptTemplate and ImagePromptTemplate allowed attribute access and indexing expressions to survive template construction and then be evaluated during formatting. When richer Python objects were passed into formatting, this could expose internal fields or nested data to prompt output, model context, or logs.

The nested format-spec issue is narrower in scope. It bypassed the intended validation rules for f-string templates, but in simple cases it results in an invalid format specifier error rather than direct disclosure. Accordingly, its practical impact is lower than that of direct top-level attribute traversal.

Overall, the practical severity depends on deployment. Meaningful confidentiality impact requires attacker control over the template structure itself, and higher impact further depends on the surrounding application passing richer internal Python objects into formatting.

Fix

The fix consists of two changes.

First, LangChain now applies f-string safety validation consistently to DictPromptTemplate and ImagePromptTemplate, so templates containing attribute access or indexing expressions are rejected during construction and deserialization.

Second, LangChain now rejects nested replacement fields inside f-string format specifiers.

Concretely, LangChain validates parsed f-string fields and raises an error for:

  • variable names containing attribute access or indexing syntax such as . or []
  • format specifiers containing { or }

This blocks templates such as:

"{message.additional_kwargs[secret]}"
"https://example.com/{image.__class__.__name__}.png"
"{name:{name.__class__.__name__}}"

The fix preserves ordinary f-string formatting features such as standard format specifiers and conversions, including examples like:

"{value:.2f}"
"{value:>10}"
"{value!r}"

In addition, the explicit template-validation path now applies the same structural f-string checks before performing placeholder validation, ensuring that the security checks and validation checks remain aligned.


Configuration

📅 Schedule: (UTC)

  • Branch creation: ""
  • Automerge: At any time (no schedule defined)

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


This PR was generated by Mend Renovate. View the repository job log.

@chenghao-mou chenghao-mou requested a review from a team April 11, 2026 20:22
@devin-ai-integration devin-ai-integration bot (Contributor) commented:

✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no bugs or issues to report.


@davidzhao davidzhao merged commit 6bf89a2 into main Apr 11, 2026
26 checks passed
@davidzhao davidzhao deleted the renovate/pypi-langchain-core-vulnerability branch April 11, 2026 20:30