feat(sdk): middleware adapters for OpenAI / Anthropic / LangChain / CrewAI#32
Adds the shared primitives for runtime middleware adapters:
- `RuleSource` reads `lessons.md` from a brain directory (the same path Claude Code hooks use) and selects the top-N RULE/PATTERN lessons.
- `build_brain_rules_block()` renders the same `<brain-rules>` XML the SessionStart hook emits, for consistency across environments.
- `check_output()` scans text against RULE-tier regex patterns derived from `rule_to_hook.classify_rule` (reuses the existing classifier).
- `RuleViolation` exception + `GRADATA_BYPASS=1` kill-switch env var.

Zero changes to existing `rule_engine` / `rule_to_hook` — middleware is additive.
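A minimal sketch of the enforcement primitive described above. The pattern table here is hypothetical (a single em-dash rule); the real patterns are derived at runtime from `rule_to_hook.classify_rule`, and the exact `check_output` signature is an assumption for illustration:

```python
import os
import re
from dataclasses import dataclass


class RuleViolation(Exception):
    """Raised in strict mode when output matches a RULE-tier pattern."""


@dataclass
class Violation:
    rule: str
    snippet: str


# Hypothetical pattern table; the real entries come from the classifier.
_RULE_PATTERNS = {"em-dash": re.compile("\u2014")}


def check_output(text: str, *, strict: bool = False) -> list[Violation]:
    """Scan text against RULE-tier patterns, honouring GRADATA_BYPASS=1."""
    if os.environ.get("GRADATA_BYPASS") == "1":
        return []  # kill switch: no enforcement at all
    found = [
        Violation(rule=name, snippet=m.group(0))
        for name, rx in _RULE_PATTERNS.items()
        if (m := rx.search(text))
    ]
    if found and strict:
        raise RuleViolation(f"Gradata rule violation(s): {[v.rule for v in found]}")
    return found
```

Non-strict callers get a (possibly empty) violation list back; strict callers get an exception they can catch and retry on.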
Wraps `anthropic.Anthropic()` so every `client.messages.create()` call gets Gradata's `<brain-rules>` appended to the system prompt and its response text post-checked against RULE-tier patterns.
- `strict=False` (default) logs violations; `strict=True` raises `RuleViolation`
- Handles both string and content-block-list system prompts
- Lazy `ImportError` with a 'pip install anthropic' hint when the dependency is missing
- All other client attributes delegate to the underlying client
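The string-vs-content-block handling can be sketched as follows; `inject_into_system` and the `BRAIN_RULES` placeholder are illustrative names, not the adapter's actual internals:

```python
from typing import Any

# Placeholder for the rendered rules block; the real text would come
# from build_brain_rules_block().
BRAIN_RULES = "<brain-rules>example</brain-rules>"


def inject_into_system(system: Any) -> Any:
    """Append the rules block to an Anthropic-style `system` prompt.

    Handles both shapes the Messages API accepts: a plain string, or a
    list of content blocks ({"type": "text", "text": ...} dicts).
    """
    if system is None:
        return BRAIN_RULES
    if isinstance(system, str):
        return f"{system}\n\n{BRAIN_RULES}"
    if isinstance(system, list):
        return [*system, {"type": "text", "text": BRAIN_RULES}]
    return system  # unknown shape: leave untouched
```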
Wraps `openai.OpenAI()` so every `client.chat.completions.create()` call gets a `<brain-rules>` system message prepended (or merged into an existing system message) and its response text post-checked.
- Same strict / bypass semantics as `AnthropicMiddleware`
- Lazy `ImportError` with a 'pip install openai' hint when the dependency is missing
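A sketch of the prepend-or-merge behavior described above; `with_brain_rules` is a hypothetical helper name. String content of an existing leading system message is merged; anything else (including structured multimodal content) gets a fresh system message prepended instead:

```python
def with_brain_rules(messages: list[dict], block: str) -> list[dict]:
    """Prepend or merge a <brain-rules> system message."""
    out = [dict(m) for m in messages]  # shallow copies; caller's list unchanged
    if out and out[0].get("role") == "system" and isinstance(out[0].get("content"), str):
        out[0]["content"] = f"{block}\n\n{out[0]['content']}"
        return out
    # No leading system message, or non-string content: prepend a fresh one.
    out.insert(0, {"role": "system", "content": block})
    return out
```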
Implements LangChain's `BaseCallbackHandler`:
- `on_llm_start`: prepends `<brain-rules>` to the first prompt
- `on_chat_model_start`: extends/inserts a `SystemMessage` with the block
- `on_llm_end`: post-checks the `LLMResult` text against RULE-tier patterns

Gracefully handles missing langchain-core — raises `ImportError` with a 'pip install langchain-core' hint at instantiation time via `__new__`.
Callable that CrewAI agents register in their `guardrails=[...]` list.
- Returns `(True, output)` for clean text; when `strict=True` (default), returns `(False, 'Gradata rule violation(s): ...')` so CrewAI can retry.
- Text coercion handles CrewAI output objects (`raw`/`output`/`text`/`content` attrs) as well as plain strings and dicts.
- No hard crewai dependency.
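The text-coercion order can be sketched as below; `coerce_text` is an illustrative name. Note that explicit empty strings are preserved rather than falling through to `str(output)` (a point the review later calls out):

```python
from typing import Any

_TEXT_KEYS = ("raw", "output", "text", "content")


def coerce_text(output: Any) -> str:
    """Best-effort text extraction from a CrewAI-style output object,
    a dict, or a plain string."""
    if isinstance(output, str):
        return output
    for key in _TEXT_KEYS:
        val = getattr(output, key, None)
        if isinstance(val, str):  # no truthiness check: "" is kept
            return val
    if isinstance(output, dict):
        for key in _TEXT_KEYS:
            val = output.get(key)
            if isinstance(val, str):
                return val
    return str(output)  # last resort when no string field is present
```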
Covers common behavior (rule source, strict mode, GRADATA_BYPASS kill switch, optional-deps ImportError contract) and per-adapter usage for Anthropic, OpenAI, LangChain, CrewAI, plus the advanced custom RuleSource pattern.
📝 Walkthrough

Adds a runtime middleware subsystem: core utilities for sourcing and injecting rules, plus per-framework adapters.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Agent/Client
    participant MW as Middleware Adapter
    participant RS as RuleSource
    participant LLM as LLM Service
    Client->>MW: request(create/chat/llm_start, prompts/messages)
    MW->>RS: select() / fetch eligible rules
    RS-->>MW: ranked RULE/PATTERN list
    MW->>MW: build_brain_rules_block()
    MW->>MW: inject_into_system(modified messages)
    MW->>LLM: send(modified messages)
    LLM-->>MW: response
    MW->>MW: extract_text(response)
    MW->>RS: check_output(text, strict?)
    RS-->>MW: [] or [RuleViolation]
    alt violations & strict=True
        MW-->>Client: raise/return failure (RuleViolation)
    else clean or non-strict
        MW-->>Client: return original response
    end
```
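The `build_brain_rules_block()` step in the diagram can be sketched as follows, assuming lesson dicts with `tier`/`text`/`confidence` fields (field names are an assumption; the real loader parses them out of `lessons.md`):

```python
def build_brain_rules_block(lessons: list[dict], max_rules: int = 10) -> str:
    """Render a <brain-rules> block: RULE lessons ahead of PATTERN,
    highest confidence first, capped at max_rules."""
    rank = {"RULE": 0, "PATTERN": 1}
    eligible = [l for l in lessons if l.get("tier") in rank]
    # Sort by tier (RULE before PATTERN), then descending confidence.
    eligible.sort(key=lambda l: (rank[l["tier"]], -float(l.get("confidence", 0.0))))
    body = "\n".join(
        f'  <rule tier="{l["tier"]}">{l["text"]}</rule>' for l in eligible[:max_rules]
    )
    return f"<brain-rules>\n{body}\n</brain-rules>"
```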
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 10
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/gradata/middleware/_core.py`:
- Around line 121-137: The code in _load_from_dicts allows confidence values
outside [0.0,1.0]; clamp or validate the parsed conf before creating the
_ScoredLesson (e.g. conf = max(0.0, min(conf, 1.0)) or raise/log and skip), then
use that sanitized value when constructing _ScoredLesson; apply the identical
validation/clamping to the other lesson intake path that processes
_static_lessons (the second loader mentioned around lines 194-200) so no
out-of-range confidence is stored.
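The suggested clamp can be sketched as a small helper (name hypothetical); it also swallows malformed values rather than raising, which matches the guidance in the second review round:

```python
def clamp_confidence(raw: object) -> float:
    """Parse a lesson confidence and clamp it into [0.0, 1.0].
    Malformed values fall back to 0.0 instead of raising."""
    try:
        conf = float(raw)  # type: ignore[arg-type]
    except (TypeError, ValueError):
        return 0.0
    return max(0.0, min(conf, 1.0))
```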
In `@src/gradata/middleware/anthropic_adapter.py`:
- Around line 50-67: The _extract_text function silently mishandles a
string-shaped response.content by iterating its characters; fix by
special-casing string content at the start of the function (when you obtain
content from response or dict) and return the string directly (or normalize it
to a single text block) before the list iteration. Update the logic in
_extract_text so it checks isinstance(content, str) and handles/returns that
string instead of treating it like an iterable of blocks.
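The suggested string special-case can be sketched like this, with `extract_text` shown as a standalone function for illustration rather than the adapter's actual private helper:

```python
from typing import Any


def extract_text(response: Any) -> str:
    """Pull text out of an Anthropic-style response whose `content` may
    be a plain string or a list of typed blocks."""
    content = getattr(response, "content", None)
    if content is None and isinstance(response, dict):
        content = response.get("content")
    if isinstance(content, str):
        return content  # don't iterate a string as if it were blocks
    parts: list[str] = []
    for block in content or []:
        text = getattr(block, "text", None)
        if text is None and isinstance(block, dict):
            text = block.get("text")
        if isinstance(text, str):
            parts.append(text)
    return "\n".join(parts)
```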
In `@src/gradata/middleware/crewai_adapter.py`:
- Around line 47-55: The constructor for CrewAIGuard currently sets strict to
True by default which inverts the intended behavior; change the __init__
signature to default strict to False (def __init__(..., *, strict: bool = False,
...)) and keep assigning self._strict = strict so that the adapter defaults to
non-strict/pass-through behavior as documented.
In `@src/gradata/middleware/langchain_adapter.py`:
- Around line 66-76: The current on_llm_start mutates only prompts[0]; change it
to prepend build_brain_rules_block(self._source) to every entry in the batch by
iterating over all items (e.g., for i in range(len(prompts)): prompts[i] =
f"{block}\n\n{prompts[i]}") and keep the same early-return when block or prompts
is falsy; apply the same fix to the chat handler (the method handling messages
around lines 78–100) by iterating over every message in the messages list and
prepending the block to the message content, handling both dict messages with a
'content' key and message objects with a .content attribute, updating them
in-place.
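The suggested batch-wide injection can be sketched as below (function name hypothetical); the early return preserves the existing guard for empty blocks or batches:

```python
def prepend_to_prompts(prompts: list[str], block: str) -> None:
    """Prepend the rules block to every prompt in a batch, in place."""
    if not block or not prompts:
        return
    for i in range(len(prompts)):
        prompts[i] = f"{block}\n\n{prompts[i]}"
```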
- Around line 90-100: The current code naively stringifies BaseMessage.content
when msg_type == "system" (using f"{content}\n\n{block}"), which will corrupt
multimodal messages whose content is a list; modify the logic in
langchain_adapter.py to inspect the type of first.content: if it's a str,
concatenate as before; if it's a list (or other sequence of blocks), extend or
append the new block into that list (preserving structure); otherwise, fall back
to wrapping both into a list. Update the handling for first (the BaseMessage)
and for insertion into first_batch/SystemMessage so SystemMessage(content=...)
receives a string or list consistent with the message content type.
In `@src/gradata/middleware/openai_adapter.py`:
- Around line 74-79: The current logic stringifies structured system message
content (variable existing) before calling inject_into_system, which corrupts
non-string payloads; change the handling in the block that inspects out[0] so
that if existing is a str you call inject_into_system(existing, block) and
replace out[0]["content"] with that result, but if existing is not a str (e.g.,
list/dict multimodal content) do NOT call str() or mutate it—instead create a
new system message (role="system") containing just the injected text and prepend
or insert it into out (so you preserve the original structured existing
content); update references in this code path around inject_into_system, out,
and existing accordingly.
In `@tests/test_middleware_core.py`:
- Around line 50-56: The test test_build_brain_rules_block_respects_max_rules
creates synthetic lessons with confidence values that exceed 1.0; update the
lessons generation in that test so confidence stays within the valid [0.0,1.0]
range (e.g., cap each confidence with min(1.0, computed_value) or generate using
a smaller increment) so RuleSource(lessons=lessons, max_rules=5) receives only
valid confidences and the test continues to exercise max_rules.
In `@tests/test_middleware_langchain.py`:
- Around line 114-129: Combine the two tests into one parametrized test that
covers both strict=True and strict=False cases: create a single test function
(e.g., test_on_llm_end_strict_parametrized) using pytest.mark.parametrize over
strict and an expected behavior flag, instantiate LangChainCallback with
brain_path=brain_with_em_dash_rule and the parametrized strict value, call
cb.on_llm_end(_FakeLLMResult("bad \u2014 output")), and assert that
RuleViolation is raised when expected (use pytest.raises for the strict case)
and not raised for the non-strict case; reference LangChainCallback,
_FakeLLMResult and RuleViolation to locate the original test logic.
In `@tests/test_middleware_openai.py`:
- Around line 102-125: Combine the two boundary tests into a single parametrized
test to remove duplication: create one test (e.g., test_wrap_openai_strictness)
decorated with pytest.mark.parametrize over strict values (True, False) and an
expected_outcome flag (or expected_exception for True), instantiate the same
_FakeClient(reply="bad \u2014 output") and call wrap_openai with the
parametrized strict argument (using brain_with_em_dash_rule), then assert that
when strict is True a RuleViolation is raised (use pytest.raises for
RuleViolation) and when strict is False the call returns a non-None response;
update/remove the original test_wrap_openai_strict_raises_on_violation and
test_wrap_openai_non_strict_does_not_raise accordingly.
- Around line 120-125: Replace the weak truthiness check on resp with concrete
assertions that validate the adapter's returned structure and content: after
calling wrapped.chat.completions.create, assert that resp contains expected
top-level fields (e.g., "id" and "model" == "gpt-4o-mini"), that resp.choices is
a non-empty list, and that resp.choices[0] has a message with role "assistant"
and non-empty content; use the same wrapped.chat.completions.create call and
variable resp to locate the test to modify.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: b4e0e919-c159-40b7-b016-95df586f0e71
📒 Files selected for processing (12)
docs/middleware.md, src/gradata/middleware/__init__.py, src/gradata/middleware/_core.py, src/gradata/middleware/anthropic_adapter.py, src/gradata/middleware/crewai_adapter.py, src/gradata/middleware/langchain_adapter.py, src/gradata/middleware/openai_adapter.py, tests/test_middleware_anthropic.py, tests/test_middleware_core.py, tests/test_middleware_crewai.py, tests/test_middleware_langchain.py, tests/test_middleware_openai.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
🧰 Additional context used
📓 Path-based instructions (2)
tests/**
⚙️ CodeRabbit configuration file
tests/**: Test files. Verify: no hardcoded paths, assertions check specific values not just truthiness,
parametrized tests preferred for boundary conditions, floating point comparisons use pytest.approx.
Files:
tests/test_middleware_crewai.py, tests/test_middleware_anthropic.py, tests/test_middleware_core.py, tests/test_middleware_openai.py, tests/test_middleware_langchain.py
src/gradata/**/*.py
⚙️ CodeRabbit configuration file
src/gradata/**/*.py: This is the core SDK. Check for: type safety (`from __future__ import annotations` required), no print()
statements (use logging), all functions accepting BrainContext where DB access occurs, no hardcoded paths. Severity
scoring must clamp to [0,1]. Confidence values must be in [0.0, 1.0].
Files:
src/gradata/middleware/__init__.py, src/gradata/middleware/openai_adapter.py, src/gradata/middleware/langchain_adapter.py, src/gradata/middleware/anthropic_adapter.py, src/gradata/middleware/_core.py, src/gradata/middleware/crewai_adapter.py
🔇 Additional comments (2)
docs/middleware.md (1)
1-108: LGTM — docs are clear and aligned with adapter behavior. Examples and behavior notes (strict mode, bypass, optional deps) are coherent and actionable.
src/gradata/middleware/anthropic_adapter.py (1)
103-119: Good interception boundary in `create()`. Injection is scoped to `system`, and post-checking is cleanly applied without mutating returned response objects.
- _core.py: clamp lesson confidence to [0.0, 1.0] in both intake paths
- anthropic_adapter: handle string-shaped response.content (no char-iteration)
- crewai_adapter: default strict=False to match docstring/pass-through contract
- langchain_adapter: inject rules into every prompt/batch, not just index 0
- langchain_adapter: preserve list/multimodal system content (no stringify)
- openai_adapter: preserve structured system content (prepend fresh sys msg)
- tests: parametrize openai/langchain strict boundary; cap synthetic confidences
- tests: assert concrete response content, add multimodal + batch regressions
Deploying gradata-dashboard with Cloudflare Pages

| | |
|---|---|
| Latest commit: | 25afe9d |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://71781303.gradata-dashboard.pages.dev |
| Branch Preview URL: | https://feat-sdk-middleware-adapter.gradata-dashboard.pages.dev |
@coderabbitai review

✅ Actions performed: Review triggered.

@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/gradata/middleware/_core.py`:
- Around line 134-148: The loop over self._static_lessons currently calls
float(lesson.get("confidence", 0.0) or 0.0) which will raise for non-numeric
strings; guard the parsing in the block that builds _ScoredLesson so that you
try to coerce confidence only if it's numeric (e.g., check
isinstance/str.isnumeric/try-except ValueError) and on failure use a safe
default (0.0) or skip the lesson; specifically update the code around
_clamp_confidence and the confidence assignment so malformed
lesson["confidence"] values do not let TypeError/ValueError escape and instead
produce a clamped default or omit the entry when constructing the _ScoredLesson
for self._static_lessons.
In `@src/gradata/middleware/crewai_adapter.py`:
- Around line 81-89: _coerce_text currently skips explicit empty strings because
it checks truthiness before returning, causing "" to fall through to
str(output); update the two places where you iterate attrs
("raw","output","text","content") over getattr(output, ...) and over
output.get(...) to return the value whenever isinstance(val, str) (i.e., remove
the truthiness check) so that empty strings are preserved; leave the final
fallback to str(output) only if no string fields are present.
In `@src/gradata/middleware/langchain_adapter.py`:
- Around line 17-20: Update the module docstring in
src/gradata/middleware/langchain_adapter.py to reflect current batch behavior:
replace the sentence claiming injection is "best-effort on the first prompt
only" with a statement that injection now mutates every prompt in a batch
(mentioning on_llm_start() mutates every prompt and on_chat_model_start()
iterates every batch entry), and advise that OpenAIMiddleware still provides
stricter per-prompt control; reference the module-level docstring and the two
methods on_llm_start and on_chat_model_start so maintainers can verify wording
against their implementations.
In `@tests/test_middleware_crewai.py`:
- Around line 30-46: Collapse the two nearly identical tests into a single
parametrized test over strict values for CrewAIGuard using the same
brain_with_em_dash_rule setup: call guard =
CrewAIGuard(brain_path=brain_with_em_dash_rule, strict=strict) and guard(text)
for text = "has em dash \u2014 here", then assert the expected outcomes per
parameter — when strict is True assert ok is False and that result contains
"em-dash" or "em dash" (case-insensitive), and when strict is False assert ok is
True and result == text; keep the test name and parameterization clear to cover
both boundary cases in one test.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 44f21cc5-4296-407d-aa0a-76707d6befca
📒 Files selected for processing (12)
docs/middleware.md, src/gradata/middleware/__init__.py, src/gradata/middleware/_core.py, src/gradata/middleware/anthropic_adapter.py, src/gradata/middleware/crewai_adapter.py, src/gradata/middleware/langchain_adapter.py, src/gradata/middleware/openai_adapter.py, tests/test_middleware_anthropic.py, tests/test_middleware_core.py, tests/test_middleware_crewai.py, tests/test_middleware_langchain.py, tests/test_middleware_openai.py
📜 Review details
🧰 Additional context used
📓 Path-based instructions (2)
tests/**
⚙️ CodeRabbit configuration file
tests/**: Test files. Verify: no hardcoded paths, assertions check specific values not just truthiness,
parametrized tests preferred for boundary conditions, floating point comparisons use pytest.approx.
Files:
tests/test_middleware_openai.py, tests/test_middleware_crewai.py, tests/test_middleware_core.py, tests/test_middleware_anthropic.py, tests/test_middleware_langchain.py
src/gradata/**/*.py
⚙️ CodeRabbit configuration file
src/gradata/**/*.py: This is the core SDK. Check for: type safety (`from __future__ import annotations` required), no print()
statements (use logging), all functions accepting BrainContext where DB access occurs, no hardcoded paths. Severity
scoring must clamp to [0,1]. Confidence values must be in [0.0, 1.0].
Files:
src/gradata/middleware/crewai_adapter.py, src/gradata/middleware/langchain_adapter.py, src/gradata/middleware/openai_adapter.py, src/gradata/middleware/anthropic_adapter.py, src/gradata/middleware/__init__.py, src/gradata/middleware/_core.py
🔇 Additional comments (17)
docs/middleware.md (1)
1-108: LGTM! The documentation is well-structured, covers all four adapters with clear examples, and accurately documents the shared behavior (rule source, strict mode, bypass switch, optional dependencies). The advanced `RuleSource` customization section is a helpful addition for power users.

tests/test_middleware_anthropic.py (5)

1-6: LGTM! Proper module docstring and `from __future__ import annotations` import as required by coding guidelines.

19-59: LGTM! The stub and fake classes are well-designed to mimic the essential parts of the Anthropic SDK without requiring the real package. The `last_kwargs` capture pattern enables clean assertions on injection behavior.

82-98: LGTM! Good specific assertions checking for the `<brain-rules>`, `TONE`, and `em dashes` strings rather than just truthiness. The test properly validates injection behavior.

132-148: LGTM! The non-strict mode test correctly validates that violations are logged but don't raise. The assertion on line 147 properly checks for specific log content (`"rule violation"`) rather than just checking that any log was emitted.

190-215: LGTM! The import error test is thorough — it removes both the stub and cached adapter module, installs a custom import hook to block `anthropic`, and verifies the error message contains the install hint. This ensures users get actionable guidance when the optional dependency is missing.

tests/test_middleware_core.py (5)

1-16: LGTM! Proper imports with `from __future__ import annotations` as required by coding guidelines.

50-58: LGTM! The confidence clamping with `min(1.0, 0.90 + i / 200)` ensures all synthetic values stay within the valid `[0.0, 1.0]` range, properly exercising `max_rules` truncation while respecting the confidence contract.

61-82: LGTM! Good test coverage for enforcement behavior: validating pattern name specificity (`em-dash`), strict mode raising, and the distinction between RULE and non-RULE tiers.

95-114: LGTM! The bypass tests thoroughly validate the `GRADATA_BYPASS` environment variable behavior, including the important edge case that `"0"` is treated as false (lines 98-99). The test on line 113 correctly verifies that strict mode returns an empty list rather than raising when bypassed.

116-135: LGTM! The filesystem-based `RuleSource` tests properly use `tmp_path` fixtures (no hardcoded paths) and validate both the happy path (loading from `lessons.md`) and the graceful degradation when the brain directory doesn't exist.

src/gradata/middleware/anthropic_adapter.py (6)

1-36: LGTM! Proper module structure with `from __future__ import annotations`, a clear docstring with usage example, and appropriate imports from the core module.

38-46: LGTM! The import guard correctly raises an informative `ImportError` with an install hint, and properly chains the original exception using `from exc`.

48-71: LGTM! The `_extract_text` function robustly handles multiple response shapes: direct string content (lines 57-58), SDK objects with typed content blocks, and dict-shaped responses. This defensive approach ensures compatibility across Anthropic SDK versions.

74-96: LGTM! The middleware class is well-designed: it validates the optional dependency upfront, stores the original `messages` object for delegation, and uses `__getattr__` to transparently pass through all other client attributes.

98-124: LGTM! The `_MessagesProxy` correctly handles both system prompt formats (string and list of content blocks) and properly delegates non-`create` calls to the original messages object. The guard on line 121 (`if text:`) appropriately skips enforcement when no text was extracted.

126-136: LGTM! Clean convenience constructor that mirrors the class parameters.
Consolidate the "getattr-or-dict-key" response-probing pattern into a single `_get` helper in `_core.py`. All four adapters (openai/anthropic/langchain/crewai) were re-implementing the same attr-then-fallback-to-dict lookup around response fields and content blocks, so the helper lets each extractor collapse by ~30%.

Also clean up adjacent smells:
- openai: drop the dead `if not block: return list(messages)` branch (guarded upstream) and the double `list(messages)` copy at the call site.
- openai `_inject_into_messages`: flatten the nested if/else so the string/None case is the one explicit branch and every non-string content (multimodal list, unexpected shape) falls through to "prepend a fresh system message".
- crewai: pull the output text-key tuple into a module constant so it's defined once rather than repeated for attr-vs-dict passes.
- `__init__`: replace the chained if-branch lazy dispatch with a single `_LAZY_EXPORTS` map + importlib.

Behaviour is unchanged — all CR-motivated fixes (multimodal list/string handling, confidence clamping, batch-prompt iteration, strict default flip) are preserved. Full suite: 2111 passed / 23 skipped, ruff clean, pyright 0 errors (same 8 pre-existing warnings).

Co-Authored-By: Gradata <noreply@gradata.ai>
Polite re-ping: CR review is still showing CHANGES_REQUESTED but all line-level actionables from the last review were addressed in the most recent push. Could you re-review when convenient? @coderabbitai full review
✅ Actions performed: Full review triggered.

Addressed CR feedback round 3:

Skipped:

@coderabbitai review — fixes pushed, please re-verify.

✅ Actions performed: Review triggered.

@coderabbitai review

✅ Actions performed: Review triggered.

@coderabbitai review — fixes pushed since last review, please re-verify

✅ Actions performed: Review triggered.

@coderabbitai review — CR Pro active, please re-verify.

✅ Actions performed: Review triggered.

@coderabbitai review — re-verify latest pushes

✅ Actions performed: Review triggered.

@coderabbitai review — rebased on latest main, all prior fixes applied

✅ Actions performed: Review triggered.
Summary
Ships `gradata.middleware` — runtime rule injection + enforcement for direct-SDK agents outside Claude Code.

Until now, Gradata's hooks only fired inside Claude Code; agents built directly on the Anthropic, OpenAI, LangChain, or CrewAI SDKs had no runtime enforcement. This PR adds that layer as a thin, additive subpackage.
- `AnthropicMiddleware` / `wrap_anthropic(Anthropic())` — injects `<brain-rules>` into `messages.create(system=...)`, post-checks response text.
- `OpenAIMiddleware` / `wrap_openai(OpenAI())` — same shape for `chat.completions.create(messages=...)` (prepends/extends the leading system message).
- `LangChainCallback` — `BaseCallbackHandler` that injects at `on_llm_start` / `on_chat_model_start` and checks at `on_llm_end`.
- `CrewAIGuard` — callable for CrewAI's `guardrails=[...]` list.

Design
- `RuleSource` reads from the same `<brain_path>/lessons.md` Claude Code's SessionStart hook parses (via `parse_lessons`).
- Same `<brain-rules>` XML block shape (RULE > PATTERN, confidence-ranked, max 10).
- Patterns derive from `rule_to_hook.classify_rule()` — no new classifier, no duplicate logic. Current deterministic template: em-dash detection; others (file size, secret scan, etc.) stay as hooks.
- `strict=False` (default): violations log; `strict=True`: raises `gradata.middleware.RuleViolation` so callers can retry.
- `GRADATA_BYPASS=1` env var: kill switch disables all injection and enforcement.
- Missing optional dependencies raise a lazy `ImportError` with an install hint.

Constraint check
- Zero changes to `rule_engine` / `rule_to_hook` — middleware is purely additive.
- Optional SDKs are stubbed via `sys.modules` monkey-patching, `tests/test_rule_to_hook.py` style.
- `docs/middleware.md` has one example per adapter + advanced `RuleSource` usage.

Test plan
- `pytest tests/test_middleware*.py -v` — 38/38 pass (10 core, 8 anthropic, 7 openai, 7 langchain, 6 crewai).
- `pytest -x -q` — 2108 passed, 23 skipped (no regressions).
- `ruff check src/gradata/middleware/ tests/test_middleware*.py` — clean.
- `pyright src/gradata/` — 0 errors, 2 `reportMissingImports` warnings for langchain (expected; optional dep).

Generated with Gradata
Co-Authored-By: Gradata <noreply@gradata.ai>