
fix: remove keywords from wrap_function_wrapper #4025

Open
JP-Ellis wants to merge 1 commit into traceloop:main from JP-Ellis:fix/remove-keywords

Conversation


@JP-Ellis JP-Ellis commented Apr 18, 2026

The upstream wrapt library renamed the `module` keyword argument to `target`, though it kept the argument ordering the same. I have opted to remove the keywords entirely so that the calls are compatible with both recent and older versions of wrapt.

Fixes: #4009
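The compatibility argument can be illustrated with a small stdlib-only sketch. These two functions are hypothetical stand-ins for the old and new wrapt signatures, not the real wrapt source; only the first parameter's name differs, which is exactly why positional calls work against both while the old keyword call does not.

```python
# Stand-ins for wrapt.wrap_function_wrapper before and after the rename:
# the first parameter went from `module` to `target`, position unchanged.

def wrap_old(module, name, wrapper):   # pre-rename signature
    return (module, name, wrapper)

def wrap_new(target, name, wrapper):   # post-rename signature
    return (target, name, wrapper)

args = ("agno.agent", "Agent.run", "tracing_wrapper")

# Positional calls are accepted by both signatures.
assert wrap_old(*args) == wrap_new(*args)

# The old keyword spelling fails against the new signature.
try:
    wrap_new(module="agno.agent", name="Agent.run", wrapper="tracing_wrapper")
except TypeError:
    pass  # TypeError: ... got an unexpected keyword argument 'module'
```

This is the failure mode reported in #4009; dropping the keywords sidesteps it without pinning a wrapt version.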

  • I have added tests that cover my changes.

    Existing tests should cover this

  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.

    N/A

  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....

  • (If applicable) I have updated the documentation accordingly.

    N/A

Summary by CodeRabbit

  • Refactor
    • Updated OpenTelemetry instrumentation patterns for Agno and Langchain integrations to improve code consistency in function wrapper invocations.

The upstream `wrapt` library renamed the `module` keyword argument to
`target`, though it kept the argument ordering the same. I have opted to
remove the keywords entirely so that the calls are compatible with both
recent and older versions of `wrapt`.

Fixes: traceloop#4009
Signed-off-by: JP-Ellis <josh@jpellis.me>

CLAassistant commented Apr 18, 2026

CLA assistant check
All committers have signed the CLA.


coderabbitai bot commented Apr 18, 2026

📝 Walkthrough

Two OpenTelemetry instrumentation modules were updated to correct their wrap_function_wrapper function call signatures. Keyword arguments (module=, wrapper=) were converted to positional arguments to match the actual function API, resolving a TypeError encountered during auto-instrumentation.

Changes

Agno Instrumentation API Signature Fix
  • packages/opentelemetry-instrumentation-agno/opentelemetry/instrumentation/agno/__init__.py
    Refactored six wrap_function_wrapper calls for Agent.run, Agent.arun, Team.run, Team.arun, FunctionCall.execute, and FunctionCall.aexecute by converting the module= keyword argument to the positional first parameter and the wrapper= keyword argument to the positional third parameter.

LangChain Instrumentation API Signature Fix
  • packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py
    Refactored multiple wrap_function_wrapper calls across BaseCallbackManager.__init__, OpenAI LLM methods (BaseOpenAI, BaseChatOpenAI), LangGraph methods (Pregel.stream, Pregel.astream, Command.__init__), agent factory functions (create_react_agent, create_agent), and middleware hooks by removing the wrapper= keyword argument and passing wrapper objects as positional third parameters.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 Hop hop, the code now flows so right,
No keyword= confusion blocking the sight,
Positional args march in their place,
LangChain and Agno: instrumentation race!
TypeError banished, the rabbit did cheer,
Auto-instrument works—hooray, hooray, dear! 🥕✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 50.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The PR title clearly and concisely describes the main change: removing keyword arguments from wrap_function_wrapper calls to fix compatibility issues across wrapt versions.
  • Linked Issues Check (✅ Passed): The PR directly addresses issue #4009 by converting wrap_function_wrapper calls from keyword arguments (module=, wrapper=) to positional arguments, which resolves the TypeError and ensures compatibility across wrapt versions.
  • Out of Scope Changes Check (✅ Passed): All changes are scoped to fixing the wrap_function_wrapper API mismatch in the agno and langchain instrumentors; no unrelated modifications are present.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py`:
- Around line 169-177: The uninstrument() implementation currently unwraps
"langchain_openai.chat_models.base" targets for
"BaseOpenAI._generate"/"_agenerate" while the instrumentation wraps
"BaseChatOpenAI._generate"/"_agenerate", leaving chat wrappers installed; update
the unwrapping calls to target the same symbols used during
wrapping—specifically change any unwrap/remove_wrapper references that mention
"BaseOpenAI._generate" and "BaseOpenAI._agenerate" to "BaseChatOpenAI._generate"
and "BaseChatOpenAI._agenerate" (also apply the same change in the other
occurrence around lines ~352-360) so that wrap_function_wrapper and the
corresponding uninstrument() use identical class/method names.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 05dc83f7-1430-4979-92ce-8629f550e3bf

📥 Commits

Reviewing files that changed from the base of the PR and between dcd5cb0 and df8ce55.

📒 Files selected for processing (2)
  • packages/opentelemetry-instrumentation-agno/opentelemetry/instrumentation/agno/__init__.py
  • packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py

Comment on lines +169 to +177

+            "langchain_openai.chat_models.base",
+            "BaseChatOpenAI._generate",
+            openai_tracing_wrapper,
         )

         wrap_function_wrapper(
-            module="langchain_openai.chat_models.base",
-            name="BaseChatOpenAI._agenerate",
-            wrapper=openai_tracing_wrapper,
+            "langchain_openai.chat_models.base",
+            "BaseChatOpenAI._agenerate",
+            openai_tracing_wrapper,

@coderabbitai coderabbitai bot Apr 18, 2026


⚠️ Potential issue | 🟡 Minor

Align BaseChatOpenAI unwrapping with the wrapped targets.

These calls wrap BaseChatOpenAI._generate and _agenerate, but _uninstrument() still unwraps BaseOpenAI... in the same module, so chat wrappers can remain installed after uninstrumenting.

🛠️ Proposed fix
-                unwrap("langchain_openai.chat_models.base", "BaseOpenAI._generate")
-                unwrap("langchain_openai.chat_models.base", "BaseOpenAI._agenerate")
-                # unwrap("langchain_openai.chat_models.base", "BaseOpenAI._stream")
-                # unwrap("langchain_openai.chat_models.base", "BaseOpenAI._astream")
+                unwrap("langchain_openai.chat_models.base", "BaseChatOpenAI._generate")
+                unwrap("langchain_openai.chat_models.base", "BaseChatOpenAI._agenerate")
+                # unwrap("langchain_openai.chat_models.base", "BaseChatOpenAI._stream")
+                # unwrap("langchain_openai.chat_models.base", "BaseChatOpenAI._astream")

Also applies to: 352-360


@JP-Ellis (Author)

Not sure exactly what it means...

@coderabbitai coderabbitai bot

@JP-Ellis — happy to clarify! Here's the concrete problem:

During instrumentation, the code correctly wraps methods on BaseChatOpenAI inside langchain_openai.chat_models.base:

# _instrument  ✅ correct class
wrap_function_wrapper("langchain_openai.chat_models.base", "BaseChatOpenAI._generate", ...)
wrap_function_wrapper("langchain_openai.chat_models.base", "BaseChatOpenAI._agenerate", ...)

But during uninstrumentation, the unwrap calls reference a different class (BaseOpenAI) in that same module:

# _uninstrument  ❌ wrong class — BaseOpenAI lives in llms.base, not chat_models.base
unwrap("langchain_openai.chat_models.base", "BaseOpenAI._generate")
unwrap("langchain_openai.chat_models.base", "BaseOpenAI._agenerate")

BaseOpenAI and BaseChatOpenAI are two separate classes. The chat_models.base module contains BaseChatOpenAI, so these unwrap calls either silently no-op or target the wrong symbol entirely — meaning the wrappers placed on BaseChatOpenAI._generate and BaseChatOpenAI._agenerate are never removed when _uninstrument() is called.

The fix is straightforward — just change the class name in those two unwrap calls:

-                unwrap("langchain_openai.chat_models.base", "BaseOpenAI._generate")
-                unwrap("langchain_openai.chat_models.base", "BaseOpenAI._agenerate")
-                # unwrap("langchain_openai.chat_models.base", "BaseOpenAI._stream")
-                # unwrap("langchain_openai.chat_models.base", "BaseOpenAI._astream")
+                unwrap("langchain_openai.chat_models.base", "BaseChatOpenAI._generate")
+                unwrap("langchain_openai.chat_models.base", "BaseChatOpenAI._agenerate")
+                # unwrap("langchain_openai.chat_models.base", "BaseChatOpenAI._stream")
+                # unwrap("langchain_openai.chat_models.base", "BaseChatOpenAI._astream")

This is a pre-existing bug (not introduced by this PR), but it's worth fixing here since the area is already being touched.
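One way to make this class of drift impossible is to drive both directions from a single table of wrapped targets. The sketch below is a hypothetical design (plain callables stand in for wrapt.wrap_function_wrapper and opentelemetry.instrumentation.utils.unwrap; it is not the instrumentation's actual structure):

```python
# Hypothetical sketch: one shared (module, name) table drives both
# instrumentation directions, so the unwrap list cannot fall out of
# sync with the wrap list the way BaseOpenAI/BaseChatOpenAI did.

WRAPPED_METHODS = [
    ("langchain_openai.chat_models.base", "BaseChatOpenAI._generate"),
    ("langchain_openai.chat_models.base", "BaseChatOpenAI._agenerate"),
]

def instrument(wrap):
    # `wrap` stands in for wrapt.wrap_function_wrapper (wrapper arg elided).
    for module, name in WRAPPED_METHODS:
        wrap(module, name)

def uninstrument(unwrap):
    # Same table, so uninstrument() targets exactly what instrument() wrapped.
    for module, name in WRAPPED_METHODS:
        unwrap(module, name)
```

With this layout there is no second, hand-maintained list of class/method names to keep in sync.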


🧠 Learnings used
Learnt from: duanyutong
Repo: traceloop/openllmetry PR: 3487
File: packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py:177-178
Timestamp: 2025-12-02T21:09:48.690Z
Learning: The opentelemetry-instrumentation-openai and opentelemetry-instrumentation-openai-agents packages must remain independent and not share code, so code duplication between them is acceptable.

Learnt from: avivhalfon
Repo: traceloop/openllmetry PR: 3840
File: packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:206-209
Timestamp: 2026-03-23T09:16:59.824Z
Learning: In the OpenTelemetry GenAI semantic conventions, `gen_ai.operation.name` uses standard predefined values such as "chat", "text_completion", "embeddings", etc. — NOT method-specific names (e.g., not "generate_content"). The Gemini/google-generativeai and Vertex AI instrumentations in traceloop/openllmetry should use "chat" as the operation name for generate_content calls, consistent with how the LangChain instrumentation (PR `#3836`) also uses "chat".



Development

Successfully merging this pull request may close these issues.

🐛 Bug Report: Instrumenting of langchain failed - TypeError: wrap_function_wrapper() got an unexpected keyword argument 'module'

2 participants