LangChain Security Reference Architecture

Open-source reference implementations for adding policy enforcement, human-in-the-loop approval, and audit trails to LangChain + LangGraph agents. Maintained by the Interven team.

If you build production AI agents on LangChain or LangGraph, you'll eventually need: a way to block bad tool calls before they execute, a way to redact PII from outbound payloads, a way to require human approval on sensitive writes, and a SIEM-grade audit log of every decision. This repo shows you four progressively-stronger ways to wire that in, with working code you can clone and run.


What's in this repo

Pattern                  | When to use                                                        | Code
1. Callback              | Existing LangChain agents; add three lines and ship.              | examples/01_callback/
2. Tool wrapper          | When you want per-tool control plus in-flight redaction.          | examples/02_tool_wrapper/
3. LangGraph node        | Custom StateGraph agents with explicit control flow.              | examples/03_langgraph_node/
4. Compiled-graph guard  | LangGraph agents where one wrap should cover every entry point.   | examples/04_compiled_graph_guard/

All four use the open-source interven-langchain package, which is MIT licensed and works against any Interven gateway (hosted or self-hosted).


Threat model

Production agent risks we hear from real teams (anonymized, paraphrased):

  • Prompt-injected tool exec — agent reads a "support ticket" containing hidden instructions to exfiltrate keys (Drift / Salesforce / Comet patterns, publicly reported 2025–2026).
  • Naive PII egress — agent summarizes an internal Drive file and posts the summary to Slack #general along with customer emails and SSNs.
  • First-time write to new destination — agent tries to create a Jira ticket in a project nobody whitelisted, or push a PR to a fork that's not the canonical repo.
  • Money-movement without human review — back-office agent tries to initiate a refund / disbursement / wire above the threshold.
  • Audit gap — engineering knows what the agent did; compliance can't produce evidence in a SOC 2 review.

The four patterns below address all of these without changing your tool code.


Pattern 1 — Callback (zero-glue)

Drop one callback into any existing LangChain or LangGraph agent. Every tool call gets scanned against your Interven policies before execution.

from langchain_core.messages import HumanMessage
from langgraph.prebuilt import create_react_agent
from interven_langchain import InterventCallback

agent = create_react_agent(model, tools=[your_tools])

# Every tool call in this invocation is scanned before it executes.
agent.invoke(
    {"messages": [HumanMessage("...")]},
    config={"callbacks": [InterventCallback(api_key="iv_live_...")]},
)

Strength: literally three lines. Limitation: the callback runs alongside the tool, so it can BLOCK a call but cannot rewrite arguments mid-flight; SANITIZE decisions are logged rather than applied.

→ Full working example


Pattern 2 — Tool wrapper

Wrap each LangChain BaseTool so its _run is gated. Stronger than the callback because the wrapper intercepts the tool call — it can swap the arguments to a redacted version before forwarding.

from interven_langchain import guard

guarded_jira = guard(my_jira_tool, api_key="iv_live_...")
agent = create_react_agent(model, tools=[guarded_jira, ...])

Strength: in-flight redaction (SANITIZE actually rewrites args). Limitation: you must wrap each tool individually.
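If you have several tools, a minimal sketch of wrapping them all in one pass, assuming guard accepts any LangChain BaseTool; my_jira_tool, my_slack_tool, my_search_tool, and model are placeholders for your own objects:

from langgraph.prebuilt import create_react_agent
from interven_langchain import guard

# Placeholder tools; replace with your own BaseTool instances.
raw_tools = [my_jira_tool, my_slack_tool, my_search_tool]

# Wrap every tool so each call is scanned (and SANITIZE can rewrite its arguments).
guarded_tools = [guard(t, api_key="iv_live_...") for t in raw_tools]

agent = create_react_agent(model, tools=guarded_tools)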

→ Full working example


Pattern 3 — LangGraph custom node

For LangGraph agents with handwritten StateGraph nodes, drop in a ready-made interven_tool_node instead of langgraph.prebuilt.ToolNode.

from langgraph.graph import StateGraph
from interven_langchain.langgraph import interven_tool_node

graph = StateGraph(MyState)
graph.add_node("agent", call_model)
graph.add_node("tools", interven_tool_node(tools, api_key="iv_live_..."))

Strength: explicit, debuggable, no config-threading required. Limitation: only your tools node is guarded — other nodes that make HTTP calls (custom REST tools, agent-managed APIs) are not.
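For context, a sketch of how the guarded node slots into a complete graph using standard LangGraph wiring; call_model, should_continue, and MyState are placeholder names you would define yourself:

from langgraph.graph import StateGraph, START, END
from interven_langchain.langgraph import interven_tool_node

graph = StateGraph(MyState)
graph.add_node("agent", call_model)
graph.add_node("tools", interven_tool_node(tools, api_key="iv_live_..."))

# Standard ReAct-style wiring: the agent decides whether to call a tool or finish.
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_continue, {"tools": "tools", "end": END})
graph.add_edge("tools", "agent")

app = graph.compile()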

→ Full working example


Pattern 4 — Compiled-graph guard

The "I built my own graph and I want every entry point covered" pattern. One call wraps the compiled graph; every invoke / ainvoke / stream / astream carries the Interven callback.

from interven_langchain.langgraph import guard_state_graph

graph = builder.compile()
guarded = guard_state_graph(graph, api_key="iv_live_...")

guarded.invoke({"messages": [...]})  # scan happens automatically
guarded.stream({"messages": [...]})  # also scanned

Strength: belt-and-braces. Use this when you've added custom nodes that do their own HTTP and you don't want to remember to wrap each one.
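A brief sketch of the async and streaming entry points, assuming your graph's nodes support async execution; the input shape mirrors the example above:

import asyncio

async def main():
    # ainvoke and astream go through the same wrapper, so the scan still runs.
    result = await guarded.ainvoke({"messages": [...]})

    async for chunk in guarded.astream({"messages": [...]}):
        print(chunk)

asyncio.run(main())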

→ Full working example


How decisions translate

Every Interven scan returns one of four decisions:

Decision          | What you do
ALLOW             | Tool runs normally.
SANITIZE          | Tool runs with redacted arguments (in patterns 2–4); logged in pattern 1.
DENY              | Tool is blocked. The wrapper either raises InterventBlockedError or returns a refusal string to the LLM (configurable via on_block).
REQUIRE_APPROVAL  | Tool pauses. A human reviewer approves in the Interven Console (or via Slack approve/deny buttons if configured). The agent finishes the task in the same turn after approval.
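A sketch of handling a DENY at the call site, using the InterventBlockedError and on_block behavior described above. The on_block="raise" value is an assumption about how the wrapper is configured; check the package docs for the exact accepted values. my_jira_tool and the tool input are placeholders:

from interven_langchain import guard, InterventBlockedError

# Assumption: on_block="raise" makes a DENY surface as an exception
# instead of returning a refusal string to the LLM.
guarded_jira = guard(my_jira_tool, api_key="iv_live_...", on_block="raise")

try:
    result = guarded_jira.run({"summary": "Refund customer 4812", "project": "FIN"})
except InterventBlockedError as exc:
    # DENY: log it and hand off to a human workflow instead of retrying blindly.
    print(f"Blocked by policy: {exc}")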

LangSmith integration

Every Interven decision emits a child run under your existing LangSmith trace tree. Set LANGSMITH_API_KEY + LANGSMITH_TRACING=true and install the optional extra:

pip install 'interven-langchain[langsmith]'

You'll see runs named interven.scan.<tool> with the decision, reason codes, risk band, and a deeplink back to the Interven Console. Compliance reviewers debug security decisions in the same UI they already use for agent traces.
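A minimal sketch of turning tracing on from Python before running any of the four patterns; the key value is a placeholder:

import os

# Enable LangSmith tracing; Interven scans then appear as child runs
# named interven.scan.<tool> under the agent's existing trace.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "lsv2_..."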

Detailed LangSmith integration guide


What this repo is NOT

  • Not a replacement for LangChain's built-in callbacks. It builds on top of them. If you don't have an Interven account, the patterns still work to log tool calls — they just don't enforce policy.
  • Not a guardrail / refusal layer. Interven is a policy engine, not a prompt-safety classifier. It catches PII / SECRETS / known threat indicators and lets you write content-pattern policies. For "is this LLM output toxic?" use a content-moderation model alongside.
  • Not a model gateway. Interven sits between the agent and its TOOLS, not between the agent and the LLM provider. LangSmith / Helicone / LiteLLM cover the LLM-provider side.

Run any example

Each example directory has its own requirements.txt and a run.py.

cd examples/01_callback
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

export INTERVEN_API_KEY=iv_live_...     # sign up at https://app.intervensecurity.com/signup
export OPENAI_API_KEY=sk-...
python run.py

Each run prints the decision tree it took — useful for understanding what happened on a DENY or REQUIRE_APPROVAL.


Contributing

Fork → write your example under examples/0X_my_pattern/ → PR.

We accept patterns that:

  • Show real LangChain / LangGraph use you've shipped
  • Run end-to-end against the published interven-langchain package on PyPI
  • Include requirements.txt and a README.md explaining the threat model the pattern addresses

We don't accept:

  • Patterns that require self-hosting Interven (we want the examples to work against the SaaS)
  • Patterns that bypass the Interven decision (e.g., "scan but always allow")

License

MIT — see LICENSE.

Maintained by

Interven Security — open-source AI agent firewall. Hosted at intervensecurity.com, self-hostable via docker-compose. Found a bug or have a question? Open an issue or email founder@intervensecurity.com.
