
Fix multi_frameworks example UnboundLocalError and upgrade default LLM #1661

Merged
rapids-bot[bot] merged 2 commits into NVIDIA:release/1.5 from
ericevans-nv:fix/multi-frameworks-topic-unbound-error
Feb 25, 2026

Conversation


@ericevans-nv ericevans-nv commented Feb 25, 2026

Description

Fix the multi_frameworks example workflow crashing with UnboundLocalError: cannot access local variable 'topic' when running nat run --input "what is RAG?".

Root cause: The langchain_research_tool.py used llm.with_structured_output(TopicExtract) to extract a search keyword. The NIM hosted API silently ignores the guided_json constraint, returning plain text instead of the expected JSON schema. The ForgivingPydanticOutputParser fails to parse the plain text response and returns None. The execute_tool function then crashes in the except block referencing the unassigned topic variable.

Changes:

  • langchain_research_tool.py: Replace with_structured_output(TopicExtract) with direct LLM piping (prompt | llm | execute_tool), reading the extracted keyword from AIMessage.content instead. This removes the fragile dependency on guided_json server-side enforcement. Also fixes the UnboundLocalError by extracting topic before the try block, adds type annotations, and simplifies the prompt template.
  • configs/config.yml: Upgrade default LLM from meta/llama-3.1-405b-instruct (currently degraded on NIM) to meta/llama-3.3-70b-instruct, Meta's recommended successor.
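The crash and its fix can be sketched in plain Python with hypothetical stand-ins for the LLM call and parser (`parse_structured_output`, `execute_tool_buggy`, and `execute_tool_fixed` are illustrative names, not the toolkit's actual code):

```python
# Minimal, hypothetical stand-ins for the real pipeline, for illustration only.
def parse_structured_output(raw: str):
    """Mimics ForgivingPydanticOutputParser: returns None when the model
    ignores the guided_json constraint and replies with plain text."""
    return None  # NIM returned plain text, not the expected JSON schema


def execute_tool_buggy(raw: str) -> str:
    try:
        result = parse_structured_output(raw)
        topic = result["topic"]  # raises TypeError when result is None...
        return f"searching for {topic}"
    except Exception as e:
        # ...so 'topic' was never assigned: UnboundLocalError is raised here.
        return f"search for topic:{topic} failed with an error:{e}"


def execute_tool_fixed(raw: str) -> str:
    # Fix: derive the topic from the raw message content BEFORE the try
    # block, so every code path (including the except) can reference it.
    topic: str = raw.strip()
    try:
        if topic:
            return f"searching for {topic}"
        return f"web search for topic:{topic} yielded no results"
    except Exception as e:
        return f"web search for topic:{topic} failed with an error:{e}"
```

Calling `execute_tool_buggy` reproduces the reported `UnboundLocalError`, while `execute_tool_fixed` handles the same input cleanly.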

By Submitting this PR I confirm:

  • I am familiar with the Contributing Guidelines.
  • We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
    • Any contribution which contains commits that are not Signed-Off will not be accepted.
  • When the PR is ready for review, new or existing tests cover these changes.
  • When the PR is ready for review, the documentation is up to date with these changes.

Summary by CodeRabbit

  • Updates

    • Standardized Llama model versions across multi-framework components (RAG, chat, and LLM services) to optimized versions for consistent performance.
  • Improvements

    • Simplified research tool execution pipeline with improved topic extraction and more reliable web search result processing.

…ult LLM to llama-3.3-70b.

Signed-off-by: Eric Evans <194135482+ericevans-nv@users.noreply.github.com>
@ericevans-nv ericevans-nv self-assigned this Feb 25, 2026
@ericevans-nv ericevans-nv requested a review from a team as a code owner February 25, 2026 22:02
@ericevans-nv ericevans-nv added improvement Improvement to existing functionality non-breaking Non-breaking change labels Feb 25, 2026

coderabbitai bot commented Feb 25, 2026

Walkthrough

The PR updates model configurations to use newer Llama versions (3.3-70b and 3.1-70b) across multiple components, and refactors the langchain_research_tool to simplify its execution pipeline by removing Pydantic-based structured outputs in favor of direct LLM usage with straightforward prompt handling.

Changes

Cohort / File(s) Summary
Configuration Updates
examples/frameworks/multi_frameworks/src/nat_multi_frameworks/configs/config.yml
Updated model names from meta/llama-3.1-405b-instruct to newer versions: llama_index_rag and nim_llm now use meta/llama-3.3-70b-instruct, and haystack_chitchat_agent uses meta/llama-3.1-70b-instruct.
Langchain Tool Refactoring
examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py
Simplified execution pipeline by removing Pydantic-based topic model and structured-output wiring. Changed web_search return type from list[dict] to str. Updated tool to use AIMessage as accepted output type, read topic from out.content, and employ direct LLM usage with PromptTemplate. Added improved type hints throughout.
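For reference, the config.yml change described above would look roughly like the following (a sketch only: the exact key names and section layout in the example's config may differ):

```yaml
# Illustrative sketch; actual keys in the example's config.yml may differ.
llms:
  nim_llm:
    _type: nim
    model_name: meta/llama-3.3-70b-instruct  # was meta/llama-3.1-405b-instruct
```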

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 25.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title is concise (71 characters), descriptive, uses imperative mood, and accurately reflects the main changes: fixing an UnboundLocalError and upgrading the default LLM model.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py (2)

104-104: ⚠️ Potential issue | 🟡 Minor

Typo: "relevent" → "relevant"; grammar: "from search the web" → "by searching the web".

Proposed fix
-    yield FunctionInfo.from_fn(_arun, description="extract relevent information from search the web")
+    yield FunctionInfo.from_fn(_arun, description="extract relevant information by searching the web")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py`
at line 104, Update the description string passed to FunctionInfo.from_fn for
the _arun function: fix the typo "relevent" to "relevant" and rephrase "from
search the web" to "by searching the web" so the full description reads
something like "extract relevant information by searching the web"; locate the
call FunctionInfo.from_fn(_arun, description="...") and replace the description
accordingly.

83-83: ⚠️ Potential issue | 🟡 Minor

Typo in user-facing messages: "yield not results" → "yielded no results".

Both the empty-topic message (Line 83) and the exception message (Line 86) contain the same grammatical error.

Proposed fix
-                output_summary = f"this search on web search with topic:{topic} yield not results"
+                output_summary = f"Web search for topic: {topic} yielded no results"
-            output_summary = f"this search on web search with topic:{topic} yield not results with an error:{e}"
+            output_summary = f"Web search for topic: {topic} yielded no results with error: {e}"

Also applies to: 86-86

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py`
at line 83, Fix the user-facing typo in the search result messages: replace the
string "yield not results" with "yielded no results" where assigned to
output_summary and in the exception message inside langchain_research_tool.py
(search for the output_summary assignment and the exception raise/log that
reference the same message) so both the empty-topic branch and the exception
branch read "yielded no results".
🧹 Nitpick comments (1)
examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py (1)

69-73: topic can never be None after .strip(); the '\n' check is also unreachable.

On Line 70, out.content is a str, so .strip() always returns a str (never None), and it already removes '\n'. The guard on Line 73 can be simplified.

Proposed simplification
-        topic: str = out.content.strip()
-        output_summary: str
-        try:
-            if topic is not None and topic not in ['', '\n']:
+        topic: str = out.content.strip()
+        output_summary: str
+        try:
+            if topic:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py`
around lines 69 - 73, The check in execute_tool that tests "topic is not None
and topic not in ['', '\n']" is redundant because out.content.strip() always
returns a str and removes newlines; replace that conditional with a simple
truthy check like "if topic:" (or "if topic != ''") after trimming to skip empty
input, keeping the rest of the function unchanged and referencing the topic
variable created from out.content.strip().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py`:
- Line 99: The call to research.ainvoke currently passes a raw string (inputs)
but the PromptTemplate created with PromptTemplate(input_variables=['inputs'])
expects a dict; update the call to research.ainvoke to pass a mapping like
{"inputs": inputs} instead of the raw string so the template variable is
provided correctly (change the invocation of research.ainvoke to supply a dict
containing the key "inputs").
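The shape of that fix can be demonstrated without LangChain installed, using `str.format_map` as a stand-in for prompt-template invocation (`render` and `TEMPLATE` are hypothetical analogues, not the LangChain API):

```python
# Hypothetical analogue of a prompt template with input_variables=['inputs'].
TEMPLATE = "Extract a short search keyword from: {inputs}"

def render(template: str, variables: object) -> str:
    # Like a LangChain prompt template, expect a mapping of variable
    # names to values rather than a bare string.
    if not isinstance(variables, dict):
        raise TypeError(f"expected a mapping of input variables, got {type(variables).__name__}")
    return template.format_map(variables)

# Passing the raw string fails, mirroring research.ainvoke(inputs):
try:
    render(TEMPLATE, "what is RAG?")
except TypeError as e:
    print(f"raw string rejected: {e}")

# Passing a mapping keyed by the template variable succeeds,
# mirroring research.ainvoke({"inputs": inputs}):
print(render(TEMPLATE, {"inputs": "what is RAG?"}))
```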


ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 538c3ac and 587b4a3.

📒 Files selected for processing (2)
  • examples/frameworks/multi_frameworks/src/nat_multi_frameworks/configs/config.yml
  • examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py

@ericevans-nv

/merge

@rapids-bot rapids-bot bot merged commit 3892d75 into NVIDIA:release/1.5 Feb 25, 2026
15 checks passed
Charlie-Yi-2002 pushed a commit to Charlie-Yi-2002/NeMo-Agent-Toolkit that referenced this pull request Mar 5, 2026
Fix multi_frameworks example UnboundLocalError and upgrade default LLM (NVIDIA#1661)


Authors:
  - Eric Evans II (https://github.com/ericevans-nv)

Approvers:
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1661

Labels

improvement Improvement to existing functionality non-breaking Non-breaking change


2 participants