Fix multi_frameworks example UnboundLocalError and upgrade default LLM #1661
Conversation
**Walkthrough**: The PR updates model configurations to use newer Llama versions (3.3-70b and 3.1-70b) across multiple components, and refactors the `langchain_research_tool` to simplify its execution pipeline by removing Pydantic-based structured outputs in favor of direct LLM usage with straightforward prompt handling.
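The shape of that refactor can be sketched in plain Python without LangChain; `fake_llm` and `execute_tool` below are hypothetical stand-ins, not the toolkit's actual code:

```python
# Plain-Python stand-ins mirroring the refactor's shape (the real code
# uses LangChain runnables; fake_llm and execute_tool are hypothetical).
def fake_llm(prompt: str) -> str:
    # Stands in for the chat model: returns a plain-text keyword,
    # with no guided_json / structured-output constraint involved.
    return "RAG"

def execute_tool(topic: str) -> str:
    # Stands in for the downstream web-search step.
    return f"results for {topic}"

# Old shape: prompt | llm.with_structured_output(TopicExtract) | parser
# New shape: prompt | llm | execute_tool -- the keyword is read
# straight from the model's text content.
def research(inputs: str) -> str:
    prompt = f"Extract a search keyword for: {inputs}"
    topic = fake_llm(prompt).strip()
    return execute_tool(topic)

print(research("what is RAG?"))  # → results for RAG
```

The point of the new shape is that nothing in the pipeline depends on the server honoring a JSON-schema constraint: a plain-text response is the expected case, not a parse failure.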
Actionable comments posted: 1
⚠️ Outside diff range comments (2)
examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py (2)
Line 104: ⚠️ Potential issue | 🟡 Minor — Typo: "relevent" → "relevant"; grammar: "from search the web" → "by searching the web".
Proposed fix:

```diff
- yield FunctionInfo.from_fn(_arun, description="extract relevent information from search the web")
+ yield FunctionInfo.from_fn(_arun, description="extract relevant information by searching the web")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py` at line 104, Update the description string passed to FunctionInfo.from_fn for the _arun function: fix the typo "relevent" to "relevant" and rephrase "from search the web" to "by searching the web" so the full description reads something like "extract relevant information by searching the web"; locate the call FunctionInfo.from_fn(_arun, description="...") and replace the description accordingly.
Line 83: ⚠️ Potential issue | 🟡 Minor — Typo in user-facing messages: "yield not results" → "yielded no results".
Both the empty-topic message (Line 83) and the exception message (Line 86) contain the same grammatical error.
Proposed fix:

```diff
- output_summary = f"this search on web search with topic:{topic} yield not results"
+ output_summary = f"Web search for topic: {topic} yielded no results"
- output_summary = f"this search on web search with topic:{topic} yield not results with an error:{e}"
+ output_summary = f"Web search for topic: {topic} yielded no results with error: {e}"
```

Also applies to: 86-86
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py` at line 83, Fix the user-facing typo in the search result messages: replace the string "yield not results" with "yielded no results" where assigned to output_summary and in the exception message inside langchain_research_tool.py (search for the output_summary assignment and the exception raise/log that reference the same message) so both the empty-topic branch and the exception branch read "yielded no results".
🧹 Nitpick comments (1)
examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py (1)
Lines 69-73: `topic` can never be `None` after `.strip()`; the `'\n'` check is also unreachable. On Line 70, `out.content` is a `str`, so `.strip()` always returns a `str` (never `None`), and it already removes `'\n'`. The guard on Line 73 can be simplified.

Proposed simplification:

```diff
-    topic: str = out.content.strip()
-    output_summary: str
-    try:
-        if topic is not None and topic not in ['', '\n']:
+    topic: str = out.content.strip()
+    output_summary: str
+    try:
+        if topic:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py` around lines 69 - 73, The check in execute_tool that tests "topic is not None and topic not in ['', '\n']" is redundant because out.content.strip() always returns a str and removes newlines; replace that conditional with a simple truthy check like "if topic:" (or "if topic != ''") after trimming to skip empty input, keeping the rest of the function unchanged and referencing the topic variable created from out.content.strip().
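The reviewer's claim is easy to verify in isolation; a short, self-contained check in plain Python (independent of the toolkit code, `out_content` is a stand-in for `out.content`):

```python
# str.strip() always returns a str (never None) and removes surrounding
# newlines, so the None/'\n' guard is dead code after the strip.
out_content = "\nRAG\n"   # stand-in for out.content
topic = out_content.strip()
assert isinstance(topic, str)
assert topic == "RAG"

# Every "empty-ish" input collapses to "" -- which is falsy -- so a
# plain `if topic:` covers all the cases the original guard enumerated.
for raw in ("", "\n", "  \n\t  "):
    assert raw.strip() == ""
```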
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py`:
- Line 99: The call to research.ainvoke currently passes a raw string (inputs)
but the PromptTemplate created with PromptTemplate(input_variables=['inputs'])
expects a dict; update the call to research.ainvoke to pass a mapping like
{"inputs": inputs} instead of the raw string so the template variable is
provided correctly (change the invocation of research.ainvoke to supply a dict
containing the key "inputs").
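The mapping-vs-string distinction that comment describes can be sketched without LangChain; `render` below is a hypothetical stand-in for how a `PromptTemplate` binds its declared input variables:

```python
# A template that declares named variables needs a mapping to bind
# them; a raw string has nothing to attach to the 'inputs' variable.
PROMPT = "Extract a search keyword for: {inputs}"

def render(template: str, variables: dict) -> str:
    # Stand-in for PromptTemplate formatting: each declared variable
    # must appear as a key in the mapping passed to the chain.
    return template.format(**variables)

question = "what is RAG?"
# Correct: pass a dict keyed by the variable name, mirroring
# research.ainvoke({"inputs": inputs}) rather than research.ainvoke(inputs).
print(render(PROMPT, {"inputs": question}))  # → Extract a search keyword for: what is RAG?
```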
📒 Files selected for processing (2)

- examples/frameworks/multi_frameworks/src/nat_multi_frameworks/configs/config.yml
- examples/frameworks/multi_frameworks/src/nat_multi_frameworks/langchain_research_tool.py
/merge
Merge commit (NVIDIA#1661):

Fix the `multi_frameworks` example workflow crashing with `UnboundLocalError: cannot access local variable 'topic'` when running `nat run --input "what is RAG?"`.

**Root cause:** The `langchain_research_tool.py` used `llm.with_structured_output(TopicExtract)` to extract a search keyword. The NIM hosted API silently ignores the `guided_json` constraint, returning plain text instead of the expected JSON schema. The `ForgivingPydanticOutputParser` fails to parse the plain text response and returns `None`. The `execute_tool` function then crashes in the `except` block referencing the unassigned `topic` variable.

**Changes:**

- **`langchain_research_tool.py`**: Replace `with_structured_output(TopicExtract)` with direct LLM piping (`prompt | llm | execute_tool`), reading the extracted keyword from `AIMessage.content` instead. This removes the fragile dependency on `guided_json` server-side enforcement. Also fixes the `UnboundLocalError` by extracting `topic` before the `try` block, adds type annotations, and simplifies the prompt template.
- **`configs/config.yml`**: Upgrade the default LLM from `meta/llama-3.1-405b-instruct` (currently degraded on NIM) to `meta/llama-3.3-70b-instruct`, Meta's recommended successor.

## By Submitting this PR I confirm:

- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
- Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.
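The root cause reduces to a common Python pitfall: a name assigned only inside `try` but referenced in `except`. A minimal, self-contained reproduction (hypothetical simplification of `execute_tool`; `parse_topic` models the parser yielding `None` on unstructured text):

```python
def parse_topic(raw):
    # Models ForgivingPydanticOutputParser returning None when the
    # server sends plain text instead of the expected JSON schema.
    return None

def buggy() -> str:
    try:
        topic = parse_topic("plain text").strip()  # AttributeError: None has no .strip()
        return f"searching: {topic}"
    except Exception:
        # `topic` was never assigned, so this line raises
        # UnboundLocalError, masking the original failure.
        return f"search on topic:{topic} failed"

def fixed(content: str) -> str:
    topic = content.strip()  # assigned before the try block
    try:
        if not topic:
            raise ValueError("empty topic")
        return f"searching: {topic}"
    except Exception as e:
        return f"Web search for topic: {topic} yielded no results with error: {e}"
```

Calling `buggy()` raises `UnboundLocalError` from inside the `except` block, while `fixed("RAG")` degrades gracefully because `topic` always exists before the `try`.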
## Summary by CodeRabbit

- **Updates**
  - Standardized Llama model versions across multi-framework components (RAG, chat, and LLM services) to optimized versions for consistent performance.
- **Improvements**
  - Simplified research tool execution pipeline with improved topic extraction and more reliable web search result processing.

Authors:
- Eric Evans II (https://github.com/ericevans-nv)

Approvers:
- Will Killian (https://github.com/willkill07)

URL: NVIDIA#1661