
Add 'LangGraph' in locations where 'LangChain' appears#778

Merged
zhongxuanwang-nv merged 9 commits into NVIDIA:develop from zhongxuanwang-nv:langchain_langgraph_namings on Sep 9, 2025

Conversation

zhongxuanwang-nv (Member) commented Sep 9, 2025

Description

Closes AIQ-1854

I scanned the repo for mentions of LangChain and added a mention of LangGraph where appropriate. For example:

Before: "This field is ignored for LangChain. (must be > 0, default: 1024)"
After: "This field is ignored for LangChain/LangGraph. (must be > 0, default: 1024)"

I made sure to go through every single reference to LangChain and checked whether a change was appropriate.

By submitting this PR, I confirm:

  • I am familiar with the Contributing Guidelines.
  • We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
    • Any contribution which contains commits that are not Signed-Off will not be accepted.
  • When the PR is ready for review, new or existing tests cover these changes.
  • When the PR is ready for review, the documentation is up to date with these changes.

Summary by CodeRabbit

  • New Features

    • Expanded message schema with new fields (response metadata, type, name, id, example, tool_call_id) for richer observability and tool-call tracking.
  • Documentation

    • Standardized terminology to “LangChain/LangGraph” across guides, tutorials, examples, quick start, plugins, and provider docs; clarified framework support; corrected LlamaIndex capitalization; updated observability and workflow content.
  • Tests

    • Updated test descriptions to reference LangChain/LangGraph; no behavioral changes.
  • Chores

    • CI path checks updated to allowlist “LangChain/LangGraph” phrasing.

Signed-off-by: Daniel Wang <daniewang@nvidia.com>
@zhongxuanwang-nv self-assigned this Sep 9, 2025
@zhongxuanwang-nv added the doc (Improvements or additions to documentation) and non-breaking (Non-breaking change) labels Sep 9, 2025

coderabbitai bot commented Sep 9, 2025

Walkthrough

A project-wide documentation and comment update changes "LangChain" references to "LangChain/LangGraph". One data model (OpenAIMessage) gains new fields. The CI path-check allowlist expands with two entries. A minor prompt text tweak lands in one example. No control-flow or functional changes elsewhere.

Changes

  • Docs: LangChain → LangChain/LangGraph terminology
    Files: README.md, docs/source/*, .cursor/rules/*, examples/README.md, examples/frameworks/*/README.md, examples/notebooks/1_getting_started.ipynb, examples/observability/*/README.md
    Textual updates to reference "LangChain/LangGraph" in descriptions, headings, examples, and notes; no functional or API changes. One tutorial shows register_function(..., framework_wrappers=[LLMFrameworkEnum.LANGCHAIN]) in an example.
  • Non-functional comments/docstrings
    Files: src/nat/*/... (.../api_server.py, .../profiler/.../framework_wrapper.py, .../tool/chat_completion.py, .../tool/document_search.py, .../cli/commands/workflow/workflow_commands.py), packages/nvidia_nat_agno/src/nat/plugins/agno/tool_wrapper.py, packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py, packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/*converter.py, packages/nvidia_nat_test/src/nat/test/llm.py, tests/nat/**/*
    Updated comments/docstrings to say "LangChain/LangGraph"; no code, behavior, or signatures changed.
  • Schema expansion: OpenAIMessage
    Files: packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/schema/provider/openai_message.py
    Added fields: response_metadata, type, name, id, example, tool_call_id; updated comment; existing fields unchanged.
  • CI allowlist update
    Files: ci/scripts/path_checks.py
    Added "LangChain/LangGraph" and "LangChain/LangGraph." to ALLOWLISTED_WORDS.
  • Package metadata wording
    Files: packages/nvidia_nat_langchain/pyproject.toml, packages/nvidia_nat_langchain/src/nat/meta/pypi.md
    Descriptions updated to "LangChain/LangGraph"; no functional changes.
  • Example prompt text tweak
    Files: examples/frameworks/multi_frameworks/src/nat_multi_frameworks/register.py, examples/custom_functions/.../register.py
    Prompt/comment wording updated to mention "LangChain/LangGraph"; logic unchanged.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks (2 passed, 1 warning)

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 79.17%, which is insufficient; the required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title Check ✅ Passed: The title clearly uses an imperative verb "Add," succinctly describes the primary change of including LangGraph wherever LangChain appears, and remains under the 72-character limit.
  • Description Check ✅ Passed: The description closes the associated JIRA issue, explains that the repository was scanned for "LangChain" references, and provides a concrete before-and-after example, directly relating to the documented changes.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 74ebd7b and f38db08.

📒 Files selected for processing (1)
  • ci/scripts/path_checks.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

**/*.py: Follow PEP 8/20 style; format with yapf (column_limit=120) and use 4-space indentation; end files with a single newline
Run ruff (ruff check --fix) per pyproject.toml; fix warnings unless explicitly ignored; ruff is linter-only
Use snake_case for functions/variables, PascalCase for classes, and UPPER_CASE for constants
Treat pyright warnings as errors during development
Exception handling: preserve stack traces and avoid duplicate logging
When re-raising exceptions, use bare raise and log with logger.error(), not logger.exception()
When catching and not re-raising, log with logger.exception() to capture stack trace
Validate and sanitize all user input; prefer httpx with SSL verification and follow OWASP Top‑10
Use async/await for I/O-bound work; profile CPU-heavy paths with cProfile/mprof; cache with functools.lru_cache or external cache; leverage NumPy vectorization when beneficial
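As a small, hypothetical illustration of the functools.lru_cache guidance above (the function name and computation are invented for the example):

```python
from functools import lru_cache


@lru_cache(maxsize=128)
def config_fingerprint(config_text: str) -> int:
    # Stand-in for an expensive pure computation; repeated calls
    # with the same argument are served from the cache.
    return sum(ord(ch) for ch in config_text)


config_fingerprint("model: nim")  # computed
config_fingerprint("model: nim")  # cache hit
assert config_fingerprint.cache_info().hits == 1
```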

**/*.py: Programmatic use: create TestLLMConfig(response_seq=[...], delay_ms=...), add with builder.add_llm("", cfg).
When retrieving the test LLM wrapper, use builder.get_llm(name, wrapper_type=LLMFrameworkEnum.) and call the framework’s method (e.g., ainvoke, achat, call).

Files:

  • ci/scripts/path_checks.py
**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}: Every file must start with the standard SPDX Apache-2.0 header; keep copyright years up‑to‑date
All source files must include the SPDX Apache‑2.0 header; do not bypass CI header checks

Files:

  • ci/scripts/path_checks.py
**/*.{py,md}

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

Never hard‑code version numbers in code or docs; versions are derived by setuptools‑scm

Files:

  • ci/scripts/path_checks.py
**/*.{py,yaml,yml}

📄 CodeRabbit inference engine (.cursor/rules/nat-test-llm.mdc)

**/*.{py,yaml,yml}: Configure response_seq as a list of strings; values cycle per call, and [] yields an empty string.
Configure delay_ms to inject per-call artificial latency in milliseconds for nat_test_llm.
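The cycling semantics described for response_seq can be sketched generically (this illustrates the described behavior only; it is not the nat_test_llm implementation):

```python
from itertools import cycle


def make_responder(response_seq: list[str]):
    # Values cycle per call; an empty list always yields an empty string
    if not response_seq:
        return lambda: ""
    responses = cycle(response_seq)
    return lambda: next(responses)


respond = make_responder(["alpha", "beta"])
assert [respond() for _ in range(3)] == ["alpha", "beta", "alpha"]
assert make_responder([])() == ""
```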

Files:

  • ci/scripts/path_checks.py
**/*

⚙️ CodeRabbit configuration file

**/*: # Code Review Instructions

  • Ensure the code follows best practices and coding standards. For Python code, follow PEP 20 and PEP 8 for style guidelines.
  • Check for security vulnerabilities and potential issues. Python methods should use type hints for all parameters and return values.
    Example:
    def my_function(param1: int, param2: str) -> bool:
        pass
  • For Python exception handling, ensure proper stack trace preservation:
    • When re-raising exceptions: use bare raise statements to maintain the original stack trace, and use logger.error() (not logger.exception()) to avoid duplicate stack trace output.
    • When catching and logging exceptions without re-raising: always use logger.exception() to capture the full stack trace information.

Documentation Review Instructions
  • Verify that documentation and comments are clear and comprehensive.
  • Verify that the documentation doesn't contain any TODOs, FIXMEs, or placeholder text like "lorem ipsum".
  • Verify that the documentation doesn't contain any offensive or outdated terms.
  • Verify that documentation and comments are free of spelling mistakes. Ensure the documentation doesn't contain any words listed in the ci/vale/styles/config/vocabularies/nat/reject.txt file; words that might appear to be spelling mistakes but are listed in the ci/vale/styles/config/vocabularies/nat/accept.txt file are OK.

Misc.
  • All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0, and should contain an Apache License 2.0 header comment at the top of each file.
  • Confirm that copyright years are up-to-date whenever a file is changed.

Files:

  • ci/scripts/path_checks.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: CI Pipeline / Check
🔇 Additional comments (1)
ci/scripts/path_checks.py (1)

116-118: Escape the dot and consolidate patterns
Replace the two entries

"LangChain/LangGraph"
"LangChain/LangGraph."

with

"LangChain/LangGraph\\.?"

so that only the exact phrase (with an optional trailing period) is allowed.
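The difference is easy to demonstrate with Python's re module (a sketch of the reviewer's point; how path_checks.py actually applies its allowlist patterns may differ):

```python
import re

# Unescaped, the dot matches any character, so the pattern is too permissive
loose = re.compile(r"LangChain/LangGraph.")
assert loose.fullmatch("LangChain/LangGraphX") is not None  # unintended match

# Escaping the dot and making it optional matches only the two intended forms
strict = re.compile(r"LangChain/LangGraph\.?")
assert strict.fullmatch("LangChain/LangGraph") is not None
assert strict.fullmatch("LangChain/LangGraph.") is not None
assert strict.fullmatch("LangChain/LangGraphX") is None
```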


coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (8)
packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py (1)

133-141: Fix crash: _span_stack is a dict, not a list
In packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py:133-141, _span_stack is an IsolatedAttribute[dict] (dict-like) and never a list—calling self._span_stack[-2] will raise a TypeError. Derive the parent span via the span’s own parent relationship (span.parent.span_id or span.context.parent_id) instead. Apply:

-        elif len(self._weave_calls) > 0 and len(self._span_stack) > 1:
-            # Get the parent span using stack position (one level up)
-            parent_span_id = self._span_stack[-2].context.span_id
-            # Find the corresponding weave call for this parent span
-            for call in self._weave_calls.values():
-                if getattr(call, "span_id", None) == parent_span_id:
-                    parent_call = call
-                    break
+        elif self._weave_calls:
+            # Resolve parent by span relationship (not by stack indexing)
+            parent_span_id = (
+                getattr(getattr(span, "parent", None), "span_id", None)
+                or getattr(getattr(span, "context", None), "parent_id", None)
+            )
+            if parent_span_id:
+                for call in self._weave_calls.values():
+                    if getattr(call, "span_id", None) == parent_span_id:
+                        parent_call = call
+                        break
examples/frameworks/multi_frameworks/src/nat_multi_frameworks/register.py (1)

119-121: Router returns an invalid next node (“supevisor”), breaking control flow.

The conditional edge mapping only recognizes "workers" and "end". Returning a misspelled, unmapped node prevents intended routing.

-            route_to = "supevisor"
+            route_to = "workers"

If you intended to loop back to the supervisor, add it to the conditional edges map and fix the spelling.

docs/source/extend/adding-an-llm-provider.md (1)

35-35: Correct spelling: “retriever.”

-In NeMo Agent toolkit, there are three provider types: `llm`, `embedder`, and `retreiver`. The three provider types are defined by their respective base configuration classes: {class}`nat.data_models.llm.LLMBaseConfig`, {class}`nat.data_models.embedder.EmbedderBaseConfig`, and {class}`nat.data_models.retriever.RetrieverBaseConfig`. This guide focuses on adding an LLM provider. However, the process for adding an embedder or retriever provider is similar.
+In NeMo Agent toolkit, there are three provider types: `llm`, `embedder`, and `retriever`. The three provider types are defined by their respective base configuration classes: {class}`nat.data_models.llm.LLMBaseConfig`, {class}`nat.data_models.embedder.EmbedderBaseConfig`, and {class}`nat.data_models.retriever.RetrieverBaseConfig`. This guide focuses on adding an LLM provider. However, the process for adding an embedder or retriever provider is similar.
examples/frameworks/multi_frameworks/README.md (1)

20-21: First use must be “NVIDIA NeMo Agent toolkit”; normalize “LangChain/LangGraph.”

  • Per docs rules, first mention should be “NVIDIA NeMo Agent toolkit,” subsequent “NeMo Agent toolkit.”
  • Use “LangChain/LangGraph” (no spaces around the slash).
-This example demonstrates how to integrate multiple AI frameworks seamlessly using a set of LangChain / LangGraph agents, in NeMo Agent toolkit.
+This example demonstrates how to integrate multiple AI frameworks seamlessly using a set of LangChain/LangGraph agents, in the NVIDIA NeMo Agent toolkit (NeMo Agent toolkit).
docs/source/tutorials/create-a-new-workflow.md (1)

128-132: Remove LangGraph mention and obsolete TODOs

  • Update the tutorial prose and code snippet (lines 128–132) to only reference LangChain—no LLMFrameworkEnum.LANGGRAPH exists in the code.
  • Delete the HTML TODO comments at lines 41 (“This section needs to be updated once #559 is completed”) and 85 (“Remove this once #559 is completed”) in docs/source/tutorials/create-a-new-workflow.md.
examples/notebooks/1_getting_started.ipynb (2)

21-22: Fix link to the official repo.

Avoid pointing to a personal fork.

-3. NeMo-Agent-Toolkit installed from source following [these instructions](https://github.com/cdgamarose-nv/NeMo-Agent-Toolkit/tree/develop?tab=readme-ov-file#install-from-source)
+3. NeMo Agent toolkit installed from source following [these instructions](https://github.com/NVIDIA/NeMo-Agent-Toolkit#install-from-source)

1-1: Add SPDX Apache-2.0 header as the first cell in examples/notebooks/1_getting_started.ipynb. Insert a Markdown cell at the top containing:

<!--
SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES.
SPDX-License-Identifier: Apache-2.0
-->
packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/schema/provider/openai_message.py (1)

26-31: Breaking change risk: type is now required; make new fields backward compatible.

Adding a required field to a public Pydantic model is breaking. Make new fields optional or provide defaults to avoid failures when existing code constructs OpenAIMessage(content=...).

-class OpenAIMessage(BaseModel):
-    content: str | None = Field(default=None, description="The content of the message.")
-    additional_kwargs: dict[str, Any] = Field(default_factory=dict, description="Additional kwargs for the message.")
-    response_metadata: dict[str, Any] = Field(default_factory=dict, description="Response metadata for the message.")
-    type: str = Field(description="The type of the message.")
-    name: str | None = Field(default=None, description="The name of the message.")
-    id: str | None = None
-    example: bool = Field(default=False, description="Whether the message is an example.")
-    tool_call_id: str | None = Field(default=None, description="The tool call ID for the message.")
+class OpenAIMessage(BaseModel):
+    content: str | None = Field(default=None, description="The content of the message.")
+    additional_kwargs: dict[str, Any] = Field(default_factory=dict, description="Additional kwargs for the message.")
+    response_metadata: dict[str, Any] = Field(default_factory=dict, description="Response metadata for the message.")
+    type: str | None = Field(default=None, description="The type of the message.")  # optional for BC
+    name: str | None = Field(default=None, description="The name of the message.")
+    id: str | None = Field(default=None)
+    example: bool = Field(default=False, description="Whether the message is an example.")
+    tool_call_id: str | None = Field(default=None, description="The tool call ID for the message.")

Optionally add a brief class docstring describing provenance (LangChain/LangGraph message schema) and intended usage.

🧹 Nitpick comments (36)
packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py (5)

151-158: Log serialization failures per guidelines.

When swallowing exceptions, log with logger.exception() to capture stack traces.

         if step.payload.data and step.payload.data.input is not None:
             try:
                 # Add the input to the Weave call
                 inputs["input"] = step.payload.data.input
-            except Exception:
-                # If serialization fails, use string representation
-                inputs["input"] = str(step.payload.data.input)
+            except Exception:
+                logger.exception("Failed to serialize input for Weave call; using str().")
+                inputs["input"] = str(step.payload.data.input)

195-202: Also log output serialization failures.

Mirror input handling for outputs.

         if step.payload.data and step.payload.data.output is not None:
             try:
                 # Add the output to the Weave call
                 outputs["output"] = step.payload.data.output
-            except Exception:
-                # If serialization fails, use string representation
-                outputs["output"] = str(step.payload.data.output)
+            except Exception:
+                logger.exception("Failed to serialize output for Weave call; using str().")
+                outputs["output"] = str(step.payload.data.output)

72-84: Add explicit return type annotation.

Align with repo guidance: annotate return types.

-    def _process_start_event(self, event: IntermediateStep):
+    def _process_start_event(self, event: IntermediateStep) -> None:

85-93: Add explicit return type annotation.

Same as above.

-    def _process_end_event(self, event: IntermediateStep):
+    def _process_end_event(self, event: IntermediateStep) -> None:

95-107: Tighten Generator type parameters.

Use Generator[None, None, None] for accuracy.

-    def parent_call(self, trace_id: str, parent_call_id: str) -> Generator[None]:
+    def parent_call(self, trace_id: str, parent_call_id: str) -> Generator[None, None, None]:
tests/nat/profiler/test_profiler.py (2)

245-247: Remove stray }] in the comment.

Tiny cleanup to avoid confusion while reading the example.

-# => average latency across requests = 5.75           }]
+# => average latency across requests = 5.75

225-225: Fix assertion message to match the filename and grammar.

The path uses inference_optimization.json; the failure message mentions simple_inference_metrics.json.

-assert os.path.exists(metrics_path), "ProfilerRunner did not produce an simple_inference_metrics.json file."
+assert os.path.exists(metrics_path), "ProfilerRunner did not produce an inference_optimization.json file."

Also applies to: 298-298

docs/source/extend/integrating-aws-bedrock-models.md (1)

20-20: Conform to style guides: first mention naming + markdown bullet style.

  • First mention should be “NVIDIA NeMo Agent toolkit”.
  • markdownlint MD004 prefers dashes for unordered lists; switch asterisks to dashes for consistency.
-The NeMo Agent toolkit supports integration with multiple LLM providers, including AWS Bedrock. This documentation provides a comprehensive guide on how to integrate AWS Bedrock models into your NeMo Agent toolkit workflow. To view the full list of supported LLM providers, run `nat info components -t llm_provider`.
+The NVIDIA NeMo Agent toolkit supports integration with multiple LLM providers, including AWS Bedrock. This documentation provides a comprehensive guide on how to integrate AWS Bedrock models into your NeMo Agent toolkit workflow. To view the full list of supported LLM providers, run `nat info components -t llm_provider`.
@@
-* `model_name`: The name of the AWS Bedrock model to use (required)
-* `temperature`: Controls randomness in the output (0.0 to 1.0, default: 0.0)
-* `max_tokens`: Maximum number of tokens to generate (must be > 0, default: 1024)
-* `top_p`: The top-p value to use for the model. This field is ignored for LlamaIndex. (0.0 to 1.0, default: 1.0)
-* `context_size`: The maximum number of tokens available for input. This is only required for LlamaIndex. This field is ignored for LangChain/LangGraph. (must be > 0, default: 1024)
-* `region_name`: AWS region where your Bedrock service is hosted (default: "None")
-* `base_url`: Custom Bedrock endpoint URL (default: None, needed if you don't want to use the default us-east-1 endpoint)
-* `credentials_profile_name`: AWS credentials profile name from ~/.aws/credentials or ~/.aws/config files (default: None)
-* `max_retries`: The maximum number of retries for the request
+- `model_name`: The name of the AWS Bedrock model to use (required)
+- `temperature`: Controls randomness in the output (0.0 to 1.0, default: 0.0)
+- `max_tokens`: Maximum number of tokens to generate (must be > 0, default: 1024)
+- `top_p`: The top-p value to use for the model. This field is ignored for LlamaIndex. (0.0 to 1.0, default: 1.0)
+- `context_size`: The maximum number of tokens available for input. This is only required for LlamaIndex. This field is ignored for LangChain/LangGraph. (must be > 0, default: 1024)
+- `region_name`: AWS region where your Bedrock service is hosted (default: "None")
+- `base_url`: Custom Bedrock endpoint URL (default: None, needed if you don't want to use the default us-east-1 endpoint)
+- `credentials_profile_name`: AWS credentials profile name from ~/.aws/credentials or ~/.aws/config files (default: None)
+- `max_retries`: The maximum number of retries for the request

Also applies to: 47-56

packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/openai_converter.py (1)

285-285: Docstring update LGTM.

If desired, consider renaming the function to reflect both frameworks for clarity (keeping the registered adapter behavior intact).

src/nat/llm/aws_bedrock_llm.py (1)

58-58: Grammar nit: “A AWS” → “An AWS”.

Tiny user-facing text fix in provider description.

-    yield LLMProviderInfo(config=llm_config, description="A AWS Bedrock model for use with an LLM client.")
+    yield LLMProviderInfo(config=llm_config, description="An AWS Bedrock model for use with an LLM client.")
docs/source/store-and-retrieve/retrievers.md (1)

108-110: Doc/code mismatch: example shows LANGCHAIN while text says LangChain/LangGraph.

If LangGraph integration is accessed via the LangChain wrapper in NAT, add a one-line note clarifying that users should pass LLMFrameworkEnum.LANGCHAIN for both. If a separate wrapper enum exists, provide a second snippet.

I can draft the clarifying sentence or dual examples if you confirm the intended usage.

src/nat/data_models/api_server.py (1)

395-397: Replace FIXME with actionable reference or issue link.

Inline “(fixme)” invites drift. Convert to a clear TODO with a tracking issue or remove if no longer needed.

Apply this diff:

-    # (fixme) define the intermediate step model
+    # TODO(aiq-issue): Define a concrete intermediate step model (link to issue)
src/nat/cli/commands/workflow/workflow_commands.py (1)

40-42: Confirm extra naming aligns with LangGraph support.

Comment now says default is LangChain/LangGraph while dependency remains nvidia-nat[langchain]. If this extra indeed covers both, consider adding a note in the template context or help text; else expose a selectable extra.

I can wire a --framework-extra option and propagate it into the Jinja context if desired.

src/nat/tool/document_search.py (1)

122-122: Clarify comment to avoid implying a non-existent “LangGraph Document” type.

These are JSON chunks from the RAG endpoint, not LangChain/LangGraph “Document” objects.

-            # parse docs from LangChain/LangGraph Document object to string
+            # Parse search results into a serialized <Document> string representation
docs/source/tutorials/add-tools-to-a-workflow.md (1)

38-41: Fix comma splice for readability.

-However, the workflow is unaware of some related technologies, such as LangChain/LangGraph, if you run:
+However, the workflow is unaware of some related technologies, such as LangChain/LangGraph. If you run:
examples/frameworks/multi_frameworks/src/nat_multi_frameworks/register.py (1)

66-73: Polish classification prompt grammar and capitalization.

-    'Research' - any question related to a need to do research on arxiv papers and get a summary. such as "find research papers about RAG for me" or " what is Compound AI?"...etc
-    'Retrieve' - any question related to the topic of NAT or its workflows, especially concerning the particular workflow called multi_frameworks which show case using multiple frameworks such as LangChain/LangGraph, llama-index ..etc
+    'Research' - any question related to a need to do research on arXiv papers and get a summary, such as "find research papers about RAG for me" or "what is Compound AI?", etc.
+    'Retrieve' - any question related to the topic of NAT or its workflows, especially concerning the particular workflow called multi_frameworks which showcases using multiple frameworks such as LangChain/LangGraph and LlamaIndex, etc.
@@
-    Classifcation topic:
+    Classification topic:
examples/frameworks/semantic_kernel_demo/README.md (1)

32-32: Use the official first-mention naming (“NVIDIA NeMo Agent toolkit”).

-- **Semantic Kernel Framework Integration:** Demonstrates NeMo Agent toolkit support for Microsoft's Semantic Kernel framework alongside other frameworks like LangChain/LangGraph.
+- **Semantic Kernel Framework Integration:** Demonstrates NVIDIA NeMo Agent toolkit support for Microsoft's Semantic Kernel framework alongside other frameworks like LangChain/LangGraph.
docs/source/extend/adding-an-llm-provider.md (1)

25-26: Fix framework list grammar.

-In NeMo Agent toolkit there are LLM providers, like NIM and OpenAI, and there are frameworks which need to use those providers, such as LangChain/LangGraph LlamaIndex with a client defined for each. To add support, we need to cover the combinations of providers to clients.
+In NeMo Agent toolkit there are LLM providers, like NIM and OpenAI, and there are frameworks which need to use those providers, such as LangChain/LangGraph and LlamaIndex, with a client defined for each. To add support, we need to cover the combinations of providers to clients.
src/nat/tool/chat_completion.py (1)

47-49: Comment and enum mismatch; verify LangGraph routing.

Comment says LangChain/LangGraph, but wrapper_type=LLMFrameworkEnum.LANGCHAIN. If LangGraph is routed via the LangChain wrapper, clarify the comment; otherwise, adjust the enum accordingly.

Apply if LangGraph routes through LangChain:

-    # Use LangChain/LangGraph framework wrapper since we're using LangChain/LangGraph-based LLM
+    # Use the LangChain wrapper (LangGraph models are routed through this wrapper as well)
README.md (1)

81-85: Extras name vs. label is consistent.

Refers to “LangChain/LangGraph” while installing [langchain], which aligns with plugin naming. Consider adding a parenthetical “(LangGraph supported via this extra)” to preempt confusion.

.cursor/rules/nat-setup/nat-toolkit-installation.mdc (1)

206-216: Section title updated; OK.

Minor nit: consider mirroring “Available Plugin Options” phrasing style for consistency across rule docs, but not required.

docs/source/extend/plugins.md (6)

36-36: Grammar: “an LLM framework,” not “a LLM framework.”

Fix the article for correctness.

- - **Embedder Clients**: Embedder Clients are implementations of embedder providers, which are specific to a LLM framework.
+ - **Embedder Clients**: Embedder Clients are implementations of embedder providers, which are specific to an LLM framework.

41-41: Decorator path consistency and article fix.

  • Prefer “an LLM framework.”
  • Confirm the correct Sphinx target for the decorator. Most others use nat.cli.register_workflow.register_*; this one drops register_workflow. Adjust if needed to avoid a broken cross-ref.
- - **LLM Clients**: LLM Clients are implementations of LLM providers that are specific to a LLM framework. ... {py:deco}`nat.cli.register_llm_client`
+ - **LLM Clients**: LLM Clients are implementations of LLM providers that are specific to an LLM framework. ... {py:deco}`nat.cli.register_workflow.register_llm_client`

If nat.cli.register_llm_client is correct, keep it but align the others.


46-46: Grammar: “an LLM framework,” not “a LLM framework.”

- - **Retriever Clients**: Retriever clients are implementations of retriever providers, which are specific to a LLM framework.
+ - **Retriever Clients**: Retriever clients are implementations of retriever providers, which are specific to an LLM framework.

49-49: Clarity: reference the class name precisely and article fix.

Name the concrete class and use “an LLM framework.”

- - **Tool Wrappers**: Tool wrappers are used to wrap functions in a way that is specific to a LLM framework. For example, when using the LangChain/LangGraph framework, NeMo Agent toolkit functions need to be wrapped in `BaseTool` class to be compatible with LangChain/LangGraph.
+ - **Tool Wrappers**: Tool wrappers wrap functions in a way that is specific to an LLM framework. For example, when using the LangChain/LangGraph framework, wrap functions in the `langchain_core.tools.BaseTool` class to be compatible with LangChain/LangGraph.

74-83: LangGraph wrapper-type clarity and MyST admonitions.

  • Code shows wrapper_type=LLMFrameworkEnum.LANGCHAIN while the prose says “LangChain/LangGraph.” If LangGraph uses the same enum value, add a note; otherwise show the LangGraph value or framework_wrappers=[...].
  • Replace GitHub-style > [!NOTE] admonitions in this file with MyST (:::{note}) for Sphinx consistency.

Example (if both are supported):

-@register_llm_client(config_type=OpenAIModelConfig, wrapper_type=LLMFrameworkEnum.LANGCHAIN)
+@register_llm_client(
+    config_type=OpenAIModelConfig,
+    wrapper_type=LLMFrameworkEnum.LANGCHAIN,  # Also used for LangGraph wrappers
+)

Or:

-@register_llm_client(config_type=OpenAIModelConfig, wrapper_type=LLMFrameworkEnum.LANGCHAIN)
+@register_llm_client(config_type=OpenAIModelConfig, framework_wrappers=[LLMFrameworkEnum.LANGCHAIN])

107-108: Distribution naming consistency.

We now position the package as LangChain/LangGraph, but the example distribution remains nvidia-nat-langchain. If the same distribution intentionally covers both, add a brief note clarifying that it includes LangGraph wrappers to avoid confusion.

examples/frameworks/multi_frameworks/README.md (3)

36-38: Minor copy edit for flow.

-LangChain/LangGraph is incredibly flexible, LlamaIndex is incredibly powerful for building RAG pipelines;
-different AI frameworks excel at different tasks.
+LangChain/LangGraph is incredibly flexible and LlamaIndex is powerful for building RAG pipelines;
+different AI frameworks excel at different tasks.

47-48: Consistency: lowercase “tools.”

Align with surrounding style.

-In this example, we wrap all three of the above tools as LangChain/LangGraph Tools.
+In this example, we wrap all three of the above tools as LangChain/LangGraph tools.

69-70: Hyphenation and clarity.

Hyphenate “tool-calling” and tighten the phrasing.

-- (2) a `research_agent` made out of a LangChain/LangGraph runnable chain with tool calling capability, able to call arXiv as a tool and return summarized found research papers
+- (2) a `research_agent` built as a LangChain/LangGraph runnable chain with tool-calling capability, able to call arXiv as a tool and return summarized research
docs/source/tutorials/create-a-new-workflow.md (1)

249-250: Avoid hard-coded versions; use placeholders.

Docs rule forbids hard-coded versions. Suggest a placeholder and an instruction to derive from nat --version.

-In this example, you have been using NeMo Agent toolkit with LangChain/LangGraph. This is why the dependency is declared on `nvidia-nat[langchain]`, ...
+In this example, you have been using NeMo Agent toolkit with LangChain/LangGraph. This is why the dependency is declared on `nvidia-nat[langchain]`, ...
+Use your installed major.minor from `nat --version` (e.g., `~=X.Y`) instead of hard-coding a specific value.
docs/source/quick-start/installing.md (2)

221-232: Do not hard‑code versions in docs; use placeholders or constraints.

Replace fixed 1.0 examples with placeholders and an instruction to derive the constraint from nat --version.

-Example dependency for NeMo Agent toolkit using the LangChain/LangGraph plugin for projects using a `pyproject.toml` file:
+Example dependency for NeMo Agent toolkit using the LangChain/LangGraph plugin (replace {MAJOR.MINOR} with your installed version from `nat --version`):
```diff
 dependencies = [
-"nvidia-nat[langchain]~=1.0",
+"nvidia-nat[langchain]~={MAJOR.MINOR}",
 # Add any additional dependencies your workflow needs
 ]
```

Alternately, for projects using a `requirements.txt` file:

```diff
-nvidia-nat[langchain]==1.0.*
+nvidia-nat[langchain]=={MAJOR}.*
```
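One way to keep the placeholder honest is to derive the constraint from the installed version string (e.g., the output of `nat --version`). This `compat_constraint` helper is a hypothetical sketch for illustration, not part of the toolkit:

```python
import re


def compat_constraint(version: str, extra: str = "langchain") -> str:
    """Build a compatible-release pin like ``nvidia-nat[langchain]~=1.2``
    from a full version string such as ``1.2.3``."""
    match = re.match(r"(\d+)\.(\d+)", version)
    if match is None:
        raise ValueError(f"Unrecognized version: {version!r}")
    major, minor = match.groups()
    # PEP 440 compatible-release: ~=1.2 allows 1.2.x and newer 1.y, but not 2.0.
    return f"nvidia-nat[{extra}]~={major}.{minor}"
```

For example, a reported version of `1.2.3` yields the pin `nvidia-nat[langchain]~=1.2`, which is the two-digit constraint style the packaging rules require.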

---

`18-21`: **Product naming in headings.**

First use should be “NVIDIA NeMo Agent toolkit” (lowercase “toolkit”). The H1 currently uses “Toolkit.”

```diff
-# Installing NVIDIA NeMo Agent Toolkit
+# Installing NVIDIA NeMo Agent toolkit
```
examples/notebooks/1_getting_started.ipynb (3)

59-66: Copyedits: naming, spelling, and consistency.

  • Use “NVIDIA NeMo Agent toolkit” on first mention.
  • Fix “resuability” → “reusability.”
  • Normalize “LangChain/LangGraph.”
-NeMo Agent toolkit works side-by-side ... such as LangChain/LangGraph, LlamaIndex, CrewAI, and Microsoft Semantic Kernel, as well as customer enterprise frameworks ...
+The NVIDIA NeMo Agent toolkit works side-by-side ... such as LangChain/LangGraph, LlamaIndex, CrewAI, and Microsoft Semantic Kernel, as well as customer enterprise frameworks ...
-... configurability, resuability, and easy user experience.
+... configurability, reusability, and easy user experience.
-Run the following two cells to create the LangChain/LangGraph agent ...
+Run the following two cells to create the LangChain/LangGraph agent ...

146-148: Grammar: contraction and consistency.

-Now its time to define the same LangChain/LangGraph agent ...
+Now it's time to define the same LangChain/LangGraph agent ...
-Add LangChain/LangGraph framework wrappers ...
+Add LangChain/LangGraph framework wrappers ...

211-212: Consistency: normalize “LangChain/LangGraph.”

-As shown above, this will return the same output as your previously created LangChain/LangGraph agent.
+As shown above, this returns the same output as your previously created LangChain/LangGraph agent.
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f5836c0 and 5b22e12.

📒 Files selected for processing (39)
  • .cursor/rules/nat-setup/nat-toolkit-installation.mdc (2 hunks)
  • .cursor/rules/nat-workflows/add-tools.mdc (1 hunks)
  • README.md (1 hunks)
  • docs/source/extend/adding-an-llm-provider.md (3 hunks)
  • docs/source/extend/integrating-aws-bedrock-models.md (1 hunks)
  • docs/source/extend/plugins.md (3 hunks)
  • docs/source/quick-start/installing.md (4 hunks)
  • docs/source/reference/evaluate.md (2 hunks)
  • docs/source/store-and-retrieve/retrievers.md (1 hunks)
  • docs/source/tutorials/add-tools-to-a-workflow.md (1 hunks)
  • docs/source/tutorials/create-a-new-workflow.md (2 hunks)
  • docs/source/workflows/llms/index.md (1 hunks)
  • docs/source/workflows/observe/index.md (1 hunks)
  • docs/source/workflows/profiler.md (1 hunks)
  • examples/README.md (2 hunks)
  • examples/custom_functions/automated_description_generation/src/nat_automated_description_generation/register.py (1 hunks)
  • examples/frameworks/multi_frameworks/README.md (2 hunks)
  • examples/frameworks/multi_frameworks/src/nat_multi_frameworks/register.py (1 hunks)
  • examples/frameworks/semantic_kernel_demo/README.md (1 hunks)
  • examples/notebooks/1_getting_started.ipynb (5 hunks)
  • examples/observability/simple_calculator_observability/README.md (2 hunks)
  • packages/nvidia_nat_agno/src/nat/plugins/agno/tool_wrapper.py (1 hunks)
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/nim_converter.py (1 hunks)
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/openai_converter.py (1 hunks)
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/schema/provider/openai_message.py (1 hunks)
  • packages/nvidia_nat_langchain/pyproject.toml (1 hunks)
  • packages/nvidia_nat_langchain/src/nat/meta/pypi.md (1 hunks)
  • packages/nvidia_nat_test/src/nat/test/llm.py (1 hunks)
  • packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py (1 hunks)
  • src/nat/cli/commands/workflow/workflow_commands.py (1 hunks)
  • src/nat/data_models/api_server.py (1 hunks)
  • src/nat/llm/aws_bedrock_llm.py (1 hunks)
  • src/nat/profiler/decorators/framework_wrapper.py (1 hunks)
  • src/nat/tool/chat_completion.py (1 hunks)
  • src/nat/tool/document_search.py (1 hunks)
  • tests/nat/agent/test_react.py (3 hunks)
  • tests/nat/llm_providers/test_langchain_agents.py (4 hunks)
  • tests/nat/llm_providers/test_llama_index_agents.py (1 hunks)
  • tests/nat/profiler/test_profiler.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (17)
packages/*/src/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

packages/*/src/**/*.py: All importable Python code in packages must live under packages/*/src/
All public APIs in packaged code require Python 3.11+ type hints; prefer typing/collections.abc; use typing.Annotated when useful
Provide Google-style docstrings for public APIs in packages; first line concise with a period; use backticks for code entities

Files:

  • packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py
  • packages/nvidia_nat_test/src/nat/test/llm.py
  • packages/nvidia_nat_agno/src/nat/plugins/agno/tool_wrapper.py
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/openai_converter.py
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/nim_converter.py
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/schema/provider/openai_message.py
**/*.py

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

**/*.py: Follow PEP 8/20 style; format with yapf (column_limit=120) and use 4-space indentation; end files with a single newline
Run ruff (ruff check --fix) per pyproject.toml; fix warnings unless explicitly ignored; ruff is linter-only
Use snake_case for functions/variables, PascalCase for classes, and UPPER_CASE for constants
Treat pyright warnings as errors during development
Exception handling: preserve stack traces and avoid duplicate logging
When re-raising exceptions, use bare raise and log with logger.error(), not logger.exception()
When catching and not re-raising, log with logger.exception() to capture stack trace
Validate and sanitize all user input; prefer httpx with SSL verification and follow OWASP Top‑10
Use async/await for I/O-bound work; profile CPU-heavy paths with cProfile/mprof; cache with functools.lru_cache or external cache; leverage NumPy vectorization when beneficial
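The exception-handling rules above (bare `raise` plus `logger.error()` when re-raising; `logger.exception()` when swallowing) can be sketched as follows. The `parse_config` functions are hypothetical examples, not toolkit code:

```python
import logging

logger = logging.getLogger(__name__)


def parse_config(raw: str) -> dict[str, str]:
    """Re-raise path: preserve the original stack trace."""
    try:
        key, value = raw.split("=", 1)
    except ValueError:
        # Re-raising: log with logger.error() only; a bare `raise` keeps the
        # original traceback and avoids duplicate stack-trace output.
        logger.error("Malformed config entry: %r", raw)
        raise
    return {key.strip(): value.strip()}


def parse_config_or_default(raw: str) -> dict[str, str]:
    """Swallow path: capture the full stack trace at the point of handling."""
    try:
        return parse_config(raw)
    except ValueError:
        # Not re-raising: logger.exception() records the stack trace here.
        logger.exception("Falling back to empty config for %r", raw)
        return {}
```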

Files:

  • packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py
  • examples/custom_functions/automated_description_generation/src/nat_automated_description_generation/register.py
  • src/nat/cli/commands/workflow/workflow_commands.py
  • src/nat/tool/document_search.py
  • packages/nvidia_nat_test/src/nat/test/llm.py
  • packages/nvidia_nat_agno/src/nat/plugins/agno/tool_wrapper.py
  • tests/nat/llm_providers/test_llama_index_agents.py
  • tests/nat/llm_providers/test_langchain_agents.py
  • src/nat/data_models/api_server.py
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/openai_converter.py
  • examples/frameworks/multi_frameworks/src/nat_multi_frameworks/register.py
  • tests/nat/profiler/test_profiler.py
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/nim_converter.py
  • tests/nat/agent/test_react.py
  • src/nat/profiler/decorators/framework_wrapper.py
  • src/nat/llm/aws_bedrock_llm.py
  • src/nat/tool/chat_completion.py
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/schema/provider/openai_message.py
**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}: Every file must start with the standard SPDX Apache-2.0 header; keep copyright years up‑to‑date
All source files must include the SPDX Apache‑2.0 header; do not bypass CI header checks

Files:

  • packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py
  • docs/source/reference/evaluate.md
  • examples/custom_functions/automated_description_generation/src/nat_automated_description_generation/register.py
  • docs/source/tutorials/add-tools-to-a-workflow.md
  • src/nat/cli/commands/workflow/workflow_commands.py
  • src/nat/tool/document_search.py
  • packages/nvidia_nat_test/src/nat/test/llm.py
  • packages/nvidia_nat_agno/src/nat/plugins/agno/tool_wrapper.py
  • tests/nat/llm_providers/test_llama_index_agents.py
  • tests/nat/llm_providers/test_langchain_agents.py
  • docs/source/workflows/llms/index.md
  • src/nat/data_models/api_server.py
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/openai_converter.py
  • examples/frameworks/multi_frameworks/src/nat_multi_frameworks/register.py
  • tests/nat/profiler/test_profiler.py
  • docs/source/extend/integrating-aws-bedrock-models.md
  • packages/nvidia_nat_langchain/src/nat/meta/pypi.md
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/nim_converter.py
  • examples/frameworks/semantic_kernel_demo/README.md
  • tests/nat/agent/test_react.py
  • docs/source/extend/adding-an-llm-provider.md
  • src/nat/profiler/decorators/framework_wrapper.py
  • docs/source/extend/plugins.md
  • src/nat/llm/aws_bedrock_llm.py
  • packages/nvidia_nat_langchain/pyproject.toml
  • docs/source/store-and-retrieve/retrievers.md
  • src/nat/tool/chat_completion.py
  • examples/frameworks/multi_frameworks/README.md
  • docs/source/workflows/observe/index.md
  • docs/source/workflows/profiler.md
  • examples/README.md
  • docs/source/quick-start/installing.md
  • examples/notebooks/1_getting_started.ipynb
  • README.md
  • docs/source/tutorials/create-a-new-workflow.md
  • examples/observability/simple_calculator_observability/README.md
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/schema/provider/openai_message.py
**/*.{py,md}

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

Never hard‑code version numbers in code or docs; versions are derived by setuptools‑scm

Files:

  • packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py
  • docs/source/reference/evaluate.md
  • examples/custom_functions/automated_description_generation/src/nat_automated_description_generation/register.py
  • docs/source/tutorials/add-tools-to-a-workflow.md
  • src/nat/cli/commands/workflow/workflow_commands.py
  • src/nat/tool/document_search.py
  • packages/nvidia_nat_test/src/nat/test/llm.py
  • packages/nvidia_nat_agno/src/nat/plugins/agno/tool_wrapper.py
  • tests/nat/llm_providers/test_llama_index_agents.py
  • tests/nat/llm_providers/test_langchain_agents.py
  • docs/source/workflows/llms/index.md
  • src/nat/data_models/api_server.py
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/openai_converter.py
  • examples/frameworks/multi_frameworks/src/nat_multi_frameworks/register.py
  • tests/nat/profiler/test_profiler.py
  • docs/source/extend/integrating-aws-bedrock-models.md
  • packages/nvidia_nat_langchain/src/nat/meta/pypi.md
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/nim_converter.py
  • examples/frameworks/semantic_kernel_demo/README.md
  • tests/nat/agent/test_react.py
  • docs/source/extend/adding-an-llm-provider.md
  • src/nat/profiler/decorators/framework_wrapper.py
  • docs/source/extend/plugins.md
  • src/nat/llm/aws_bedrock_llm.py
  • docs/source/store-and-retrieve/retrievers.md
  • src/nat/tool/chat_completion.py
  • examples/frameworks/multi_frameworks/README.md
  • docs/source/workflows/observe/index.md
  • docs/source/workflows/profiler.md
  • examples/README.md
  • docs/source/quick-start/installing.md
  • README.md
  • docs/source/tutorials/create-a-new-workflow.md
  • examples/observability/simple_calculator_observability/README.md
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/schema/provider/openai_message.py
**/*

⚙️ CodeRabbit configuration file

**/*: # Code Review Instructions

  • Ensure the code follows best practices and coding standards. For Python code, follow PEP 20 and PEP 8 for style guidelines.
  • Check for security vulnerabilities and potential issues.
  • Python methods should use type hints for all parameters and return values. Example:
    def my_function(param1: int, param2: str) -> bool:
        pass
  • For Python exception handling, ensure proper stack trace preservation:
    • When re-raising exceptions: use bare raise statements to maintain the original stack trace,
      and use logger.error() (not logger.exception()) to avoid duplicate stack trace output.
    • When catching and logging exceptions without re-raising: always use logger.exception()
      to capture the full stack trace information.

Documentation Review Instructions
  • Verify that documentation and comments are clear and comprehensive.
  • Verify that the documentation doesn't contain any TODOs, FIXMEs, or placeholder text like "lorem ipsum".
  • Verify that the documentation doesn't contain any offensive or outdated terms.
  • Verify that documentation and comments are free of spelling mistakes; ensure the documentation doesn't contain any words listed in the ci/vale/styles/config/vocabularies/nat/reject.txt file. Words that might appear to be spelling mistakes but are listed in the ci/vale/styles/config/vocabularies/nat/accept.txt file are OK.

Misc.
  • All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0, and should contain an Apache License 2.0 header comment at the top of each file.
  • Confirm that copyright years are up-to-date whenever a file is changed.

Files:

  • packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py
  • docs/source/reference/evaluate.md
  • examples/custom_functions/automated_description_generation/src/nat_automated_description_generation/register.py
  • docs/source/tutorials/add-tools-to-a-workflow.md
  • src/nat/cli/commands/workflow/workflow_commands.py
  • src/nat/tool/document_search.py
  • packages/nvidia_nat_test/src/nat/test/llm.py
  • packages/nvidia_nat_agno/src/nat/plugins/agno/tool_wrapper.py
  • tests/nat/llm_providers/test_llama_index_agents.py
  • tests/nat/llm_providers/test_langchain_agents.py
  • docs/source/workflows/llms/index.md
  • src/nat/data_models/api_server.py
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/openai_converter.py
  • examples/frameworks/multi_frameworks/src/nat_multi_frameworks/register.py
  • tests/nat/profiler/test_profiler.py
  • docs/source/extend/integrating-aws-bedrock-models.md
  • packages/nvidia_nat_langchain/src/nat/meta/pypi.md
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/nim_converter.py
  • examples/frameworks/semantic_kernel_demo/README.md
  • tests/nat/agent/test_react.py
  • docs/source/extend/adding-an-llm-provider.md
  • src/nat/profiler/decorators/framework_wrapper.py
  • docs/source/extend/plugins.md
  • src/nat/llm/aws_bedrock_llm.py
  • packages/nvidia_nat_langchain/pyproject.toml
  • docs/source/store-and-retrieve/retrievers.md
  • src/nat/tool/chat_completion.py
  • examples/frameworks/multi_frameworks/README.md
  • docs/source/workflows/observe/index.md
  • docs/source/workflows/profiler.md
  • examples/README.md
  • docs/source/quick-start/installing.md
  • examples/notebooks/1_getting_started.ipynb
  • README.md
  • docs/source/tutorials/create-a-new-workflow.md
  • examples/observability/simple_calculator_observability/README.md
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/schema/provider/openai_message.py
packages/**/*

⚙️ CodeRabbit configuration file

packages/**/*:
  • This directory contains optional plugin packages for the toolkit; each should contain a pyproject.toml file.
  • The pyproject.toml file should declare a dependency on nvidia-nat or another package with a name starting with nvidia-nat-. This dependency should be declared using ~=<version>, and the version should be a two-digit version (ex: ~=1.0).
  • Not all packages contain Python code; if they do, they should also contain their own set of tests, in a tests/ directory at the same level as the pyproject.toml file.

Files:

  • packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py
  • packages/nvidia_nat_test/src/nat/test/llm.py
  • packages/nvidia_nat_agno/src/nat/plugins/agno/tool_wrapper.py
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/openai_converter.py
  • packages/nvidia_nat_langchain/src/nat/meta/pypi.md
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/nim_converter.py
  • packages/nvidia_nat_langchain/pyproject.toml
  • packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/schema/provider/openai_message.py
docs/source/**/*.md

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

docs/source/**/*.md: Use the official naming: first use “NVIDIA NeMo Agent toolkit”; subsequent uses “NeMo Agent toolkit”; never use deprecated names in documentation
Documentation sources must be Markdown under docs/source; keep docs in sync and fix Sphinx errors/broken links
Documentation must be clear, comprehensive, free of TODO/FIXME/placeholders/offensive/outdated terms; fix spelling; adhere to Vale vocab allow/reject lists

Files:

  • docs/source/reference/evaluate.md
  • docs/source/tutorials/add-tools-to-a-workflow.md
  • docs/source/workflows/llms/index.md
  • docs/source/extend/integrating-aws-bedrock-models.md
  • docs/source/extend/adding-an-llm-provider.md
  • docs/source/extend/plugins.md
  • docs/source/store-and-retrieve/retrievers.md
  • docs/source/workflows/observe/index.md
  • docs/source/workflows/profiler.md
  • docs/source/quick-start/installing.md
  • docs/source/tutorials/create-a-new-workflow.md
docs/source/**/*

⚙️ CodeRabbit configuration file

This directory contains the source code for the documentation. All documentation should be written in Markdown format. Any image files should be placed in the docs/source/_static directory.

Files:

  • docs/source/reference/evaluate.md
  • docs/source/tutorials/add-tools-to-a-workflow.md
  • docs/source/workflows/llms/index.md
  • docs/source/extend/integrating-aws-bedrock-models.md
  • docs/source/extend/adding-an-llm-provider.md
  • docs/source/extend/plugins.md
  • docs/source/store-and-retrieve/retrievers.md
  • docs/source/workflows/observe/index.md
  • docs/source/workflows/profiler.md
  • docs/source/quick-start/installing.md
  • docs/source/tutorials/create-a-new-workflow.md
examples/**/*

⚙️ CodeRabbit configuration file

examples/**/*:
  • This directory contains example code and usage scenarios for the toolkit; at a minimum an example should contain a README.md or README.ipynb file.
  • If an example contains Python code, it should be placed in a subdirectory named src/ and should contain a pyproject.toml file. Optionally, it might also contain scripts in a scripts/ directory.
  • If an example contains YAML files, they should be placed in a subdirectory named configs/.
  • If an example contains sample data files, they should be placed in a subdirectory named data/, and should be checked into git-lfs.

Files:

  • examples/custom_functions/automated_description_generation/src/nat_automated_description_generation/register.py
  • examples/frameworks/multi_frameworks/src/nat_multi_frameworks/register.py
  • examples/frameworks/semantic_kernel_demo/README.md
  • examples/frameworks/multi_frameworks/README.md
  • examples/README.md
  • examples/notebooks/1_getting_started.ipynb
  • examples/observability/simple_calculator_observability/README.md
src/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

src/**/*.py: All importable Python code must live under src/
All public APIs in src/ require Python 3.11+ type hints on parameters and return values; prefer typing/collections.abc abstractions; use typing.Annotated when useful
Provide Google-style docstrings for every public module, class, function, and CLI command; first line concise with a period; surround code entities with backticks

Files:

  • src/nat/cli/commands/workflow/workflow_commands.py
  • src/nat/tool/document_search.py
  • src/nat/data_models/api_server.py
  • src/nat/profiler/decorators/framework_wrapper.py
  • src/nat/llm/aws_bedrock_llm.py
  • src/nat/tool/chat_completion.py
src/nat/**/*

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

Core functionality under src/nat should prioritize backward compatibility when changed

Files:

  • src/nat/cli/commands/workflow/workflow_commands.py
  • src/nat/tool/document_search.py
  • src/nat/data_models/api_server.py
  • src/nat/profiler/decorators/framework_wrapper.py
  • src/nat/llm/aws_bedrock_llm.py
  • src/nat/tool/chat_completion.py

⚙️ CodeRabbit configuration file

This directory contains the core functionality of the toolkit. Changes should prioritize backward compatibility.

Files:

  • src/nat/cli/commands/workflow/workflow_commands.py
  • src/nat/tool/document_search.py
  • src/nat/data_models/api_server.py
  • src/nat/profiler/decorators/framework_wrapper.py
  • src/nat/llm/aws_bedrock_llm.py
  • src/nat/tool/chat_completion.py
tests/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

Unit tests must live under tests/ and use configured markers (e2e, integration, etc.)

Files:

  • tests/nat/llm_providers/test_llama_index_agents.py
  • tests/nat/llm_providers/test_langchain_agents.py
  • tests/nat/profiler/test_profiler.py
  • tests/nat/agent/test_react.py

⚙️ CodeRabbit configuration file

tests/**/*.py:
  • Ensure that tests are comprehensive, cover edge cases, and validate the functionality of the code.
  • Test functions should be named using the test_ prefix, using snake_case.
  • Any frequently repeated code should be extracted into pytest fixtures.
  • Pytest fixtures should define the name argument when applying the pytest.fixture decorator. The fixture function being decorated should be named using the fixture_ prefix, using snake_case. Example:

    @pytest.fixture(name="my_fixture")
    def fixture_my_fixture():
        pass

Files:

  • tests/nat/llm_providers/test_llama_index_agents.py
  • tests/nat/llm_providers/test_langchain_agents.py
  • tests/nat/profiler/test_profiler.py
  • tests/nat/agent/test_react.py
**/tests/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

**/tests/**/*.py: Test functions must use the test_ prefix and snake_case
Extract repeated test code into pytest fixtures; fixtures should set name=... in @pytest.fixture and functions named with fixture_ prefix
Mark expensive tests with @pytest.mark.slow or @pytest.mark.integration
Use pytest with pytest-asyncio for async code; mock external services with pytest_httpserver or unittest.mock

Files:

  • tests/nat/llm_providers/test_llama_index_agents.py
  • tests/nat/llm_providers/test_langchain_agents.py
  • tests/nat/profiler/test_profiler.py
  • tests/nat/agent/test_react.py
.cursor/rules/**/*.mdc

📄 CodeRabbit inference engine (.cursor/rules/cursor-rules.mdc)

.cursor/rules/**/*.mdc: Place all Cursor rule files under PROJECT_ROOT/.cursor/rules/
Name rule files in kebab-case, always using the .mdc extension, with descriptive filenames
Rule descriptions must start with the phrase: 'Follow these rules when'
Descriptions should specify clear trigger conditions (e.g., when the user's request meets certain criteria)
Use precise action verbs in descriptions (e.g., creating, modifying, implementing, configuring, adding, installing, evaluating)
Descriptions should be comprehensive but concise
Use consistent project terminology in descriptions (e.g., NAT workflows, NAT CLI commands)
Proofread descriptions for typos and grammar
Avoid overly narrow descriptions when rules cover multiple related scenarios
Prefer the 'user's request involves' phrasing pattern in descriptions
Rule files must include the specified frontmatter structure: description (string), optional globs, and alwaysApply (boolean), followed by markdown content

Files:

  • .cursor/rules/nat-workflows/add-tools.mdc
  • .cursor/rules/nat-setup/nat-toolkit-installation.mdc
.cursor/rules/*/!(general).mdc

📄 CodeRabbit inference engine (.cursor/rules/cursor-rules.mdc)

Place specific rules within a topic as separate .mdc files alongside general.mdc

Files:

  • .cursor/rules/nat-workflows/add-tools.mdc
  • .cursor/rules/nat-setup/nat-toolkit-installation.mdc
**/README.{md,ipynb}

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

Each documentation README must follow the NeMo Agent toolkit naming rules and must not use deprecated names

Files:

  • examples/frameworks/semantic_kernel_demo/README.md
  • examples/frameworks/multi_frameworks/README.md
  • examples/README.md
  • README.md
  • examples/observability/simple_calculator_observability/README.md
packages/*/pyproject.toml

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

packages/*/pyproject.toml: Each package under packages/ must include a pyproject.toml
Package pyproject.toml must depend on nvidia-nat or a package starting with nvidia-nat- using ~= versions (e.g., ~=1.0)

Files:

  • packages/nvidia_nat_langchain/pyproject.toml
🧠 Learnings (3)
📚 Learning: 2025-08-28T23:22:41.742Z
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-08-28T23:22:41.742Z
Learning: Applies to docs/source/**/*.md : Use the official naming: first use “NVIDIA NeMo Agent toolkit”; subsequent uses “NeMo Agent toolkit”; never use deprecated names in documentation

Applied to files:

  • packages/nvidia_nat_langchain/src/nat/meta/pypi.md
📚 Learning: 2025-08-28T23:22:41.742Z
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-08-28T23:22:41.742Z
Learning: Applies to packages/*/pyproject.toml : Package pyproject.toml must depend on nvidia-nat or a package starting with nvidia-nat- using ~=<two-digit> versions (e.g., ~=1.0)

Applied to files:

  • packages/nvidia_nat_langchain/pyproject.toml
  • docs/source/tutorials/create-a-new-workflow.md
📚 Learning: 2025-08-22T20:18:28.041Z
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/cursor-rules.mdc:0-0
Timestamp: 2025-08-22T20:18:28.041Z
Learning: Applies to .cursor/rules/**/*.mdc : Use consistent project terminology in descriptions (e.g., NAT workflows, NAT CLI commands)

Applied to files:

  • .cursor/rules/nat-setup/nat-toolkit-installation.mdc
🪛 markdownlint-cli2 (0.17.2)
docs/source/extend/integrating-aws-bedrock-models.md

51-51: Unordered list style
Expected: dash; Actual: asterisk

(MD004, ul-style)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: CI Pipeline / Check
🔇 Additional comments (30)
packages/nvidia_nat_weave/src/nat/plugins/weave/weave_exporter.py (1)

127-131: Doc update looks good.

"LangChain/LangGraph" wording is correct and consistent with the PR intent.

tests/nat/profiler/test_profiler.py (1)

241-241: Terminology update LGTM.

Docstring correctly reflects LangChain/LangGraph. Nothing else to change here.

packages/nvidia_nat_test/src/nat/test/llm.py (1)

77-77: Docstring update LGTM.

Matches the repo-wide LangChain/LangGraph terminology; no code changes needed.

docs/source/extend/integrating-aws-bedrock-models.md (1)

51-51: Correct scope of “ignored” setting LGTM.

“context_size … ignored for LangChain/LangGraph” aligns with provider behavior.

examples/custom_functions/automated_description_generation/src/nat_automated_description_generation/register.py (1)

64-64: Comment update LGTM.

Terminology is consistent with the rest of the repo.

src/nat/llm/aws_bedrock_llm.py (1)

44-45: Field description update LGTM.

Matches docs; no runtime impact.

packages/nvidia_nat_agno/src/nat/plugins/agno/tool_wrapper.py (1)

334-334: Comment update LGTM.

Accurately describes the schema style source.

tests/nat/llm_providers/test_llama_index_agents.py (1)

94-97: Docstring update LGTM.

Terminology aligned; no behavioral changes.

.cursor/rules/nat-workflows/add-tools.mdc (1)

73-75: Clarify whether '[langchain]' extra also covers LangGraph.

The command still installs the 'langchain' extra while the prose now mentions LangChain/LangGraph. If LangGraph support is bundled into the same extra, add a short note here; otherwise, provide the correct extra name or an additional install line.

Do you want me to scan the repo for extras and update this snippet accordingly?

examples/observability/simple_calculator_observability/README.md (1)

98-98: LGTM: naming update is clear and consistent.

Doc wording matches the PR objective without changing behavior.

src/nat/profiler/decorators/framework_wrapper.py (1)

63-63: LGTM: log message aligned with naming.

Non-functional change; keeps log level and semantics intact.

src/nat/data_models/api_server.py (1)

691-701: LGTM: docstring naming aligned.

No behavior change; converter remains OpenAI-compatible.

tests/nat/llm_providers/test_langchain_agents.py (4)

33-34: LGTM: updated docstring.

Doc-only; test behavior unchanged.


57-58: LGTM: updated docstring.


80-83: LGTM: updated docstring.


108-112: LGTM: updated docstring.

docs/source/workflows/llms/index.md (1)

103-104: LGTM: wording correctly includes LangGraph.

packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/processor/trace_conversion/adapter/elasticsearch/nim_converter.py (1)

36-44: LGTM: docstring aligns with LangChain/LangGraph naming; no functional changes.

docs/source/workflows/observe/index.md (1)

149-150: LGTM: updated framework list includes LangGraph.

packages/nvidia_nat_langchain/src/nat/meta/pypi.md (1)

21-21: Wording LGTM; consistent with repo-wide rename.

tests/nat/agent/test_react.py (1)

606-607: Comment-only updates look good.

No test logic changes; terminology made consistent.

Also applies to: 625-626, 715-716

docs/source/reference/evaluate.md (1)

229-230: Evaluator terminology updated correctly.

Matches the PR objective; no content drift.

Also applies to: 272-273

.cursor/rules/nat-setup/nat-toolkit-installation.mdc (1)

109-110: Plugin description aligned with naming.

packages/nvidia_nat_langchain/pyproject.toml (1)

32-32: Description updated; dependencies conform to project rules.

Depends on nvidia-nat~=1.3 (two-digit) as required.

examples/README.md (1)

80-80: Docs wording updated; OK.

Terminology now reads LangChain/LangGraph; no functional impact.

Also applies to: 97-97

examples/frameworks/multi_frameworks/README.md (1)

59-63: Solid content.

The LangChain/LangGraph updates across Key Features read clearly.

docs/source/quick-start/installing.md (3)

33-41: LGTM: integration wording and extras list.

The LangChain/LangGraph wording and extras mapping read correctly.


91-96: LGTM: extras install example.

Clear example for LangChain/LangGraph support.


148-152: LGTM: editable install of the LangChain/LangGraph extra.

packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/schema/provider/openai_message.py (1)

22-22: Comment update LGTM.

LangChain/LangGraph wording is accurate.

Signed-off-by: Daniel Wang <daniewang@nvidia.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (3)
examples/observability/simple_calculator_observability/README.md (2)

24-24: Confirm vendor spelling: RagAI vs RagaAI (ensure consistency).

Prior feedback flagged this exact issue. Please verify the official brand spelling and make it consistent across the file (here and in the Catalyst section/table).


147-150: Confirm vendor spelling in Catalyst section.

Double-check the official product/brand name (“RagAI” vs “RagaAI”) and standardize throughout (heading, description, and config table).

docs/source/workflows/profiler.md (1)

52-53: LGTM: “LangChain/LangGraph” update and “LlamaIndex” capitalization are correct.

This addresses the prior naming consistency issue.

🧹 Nitpick comments (5)
examples/observability/simple_calculator_observability/README.md (1)

24-24: List Galileo to match platforms covered below.

The “Multi-Platform Observability Integration” list omits Galileo, which has a dedicated section later. Add it for consistency.

-- **Multi-Platform Observability Integration:** Demonstrates integration with multiple observability platforms including Phoenix (local), Langfuse, LangSmith, Weave, Patronus, and RagaAI Catalyst for comprehensive monitoring options.
+- **Multi-Platform Observability Integration:** Demonstrates integration with multiple observability platforms including Phoenix (local), Langfuse, LangSmith, Weave, Patronus, RagaAI Catalyst, and Galileo for comprehensive monitoring options.
docs/source/workflows/profiler.md (4)

19-19: Follow style guide: lowercase “toolkit” in first mention.

Per docs naming: first use should be “NVIDIA NeMo Agent toolkit”.

-# Profiling and Performance Monitoring of NVIDIA NeMo Agent Toolkit Workflows
+# Profiling and Performance Monitoring of NVIDIA NeMo Agent toolkit Workflows

145-148: Fix typo: “Idenitfy” → “Identify”.

-      # Idenitfy common prompt prefixes
+      # Identify common prompt prefixes

441-442: Grammar fix: “our” → “or”.

-... less relevant our grounded. Let us explore ...
+... less relevant or grounded. Let us explore ...

77-80: Optional: Example could mention LangGraph if supported by the enum.

If LLMFrameworkEnum.LANGGRAPH exists, show both; otherwise keep as-is.

#!/bin/bash
# Check whether LANGGRAPH is available in the enum
rg -nP 'enum\s+LLMFrameworkEnum|class\s+LLMFrameworkEnum|LLMFrameworkEnum\.[A-Z_]+' -C2
rg -n 'LLMFrameworkEnum\.LANGGRAPH'

If available, consider:

-@register_function(config_type=WebQueryToolConfig, framework_wrappers=[LLMFrameworkEnum.LANGCHAIN])
+@register_function(
+    config_type=WebQueryToolConfig,
+    framework_wrappers=[LLMFrameworkEnum.LANGCHAIN, LLMFrameworkEnum.LANGGRAPH],
+)
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5b22e12 and 7bc1309.

📒 Files selected for processing (2)
  • docs/source/workflows/profiler.md (1 hunks)
  • examples/observability/simple_calculator_observability/README.md (4 hunks)
🧰 Additional context used
📓 Path-based instructions (7)
**/README.{md,ipynb}

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

Each documentation README must follow the NeMo Agent toolkit naming rules and must not use deprecated names

Files:

  • examples/observability/simple_calculator_observability/README.md
**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}: Every file must start with the standard SPDX Apache-2.0 header; keep copyright years up‑to‑date
All source files must include the SPDX Apache‑2.0 header; do not bypass CI header checks

Files:

  • examples/observability/simple_calculator_observability/README.md
  • docs/source/workflows/profiler.md
**/*.{py,md}

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

Never hard‑code version numbers in code or docs; versions are derived by setuptools‑scm

Files:

  • examples/observability/simple_calculator_observability/README.md
  • docs/source/workflows/profiler.md
**/*

⚙️ CodeRabbit configuration file

**/*: # Code Review Instructions

  • Ensure the code follows best practices and coding standards.
  • For Python code, follow PEP 20 and PEP 8 for style guidelines.
  • Check for security vulnerabilities and potential issues.
  • Python methods should use type hints for all parameters and return values.
    Example:
    def my_function(param1: int, param2: str) -> bool:
        pass
  • For Python exception handling, ensure proper stack trace preservation:
    • When re-raising exceptions: use bare raise statements to maintain the original stack trace,
      and use logger.error() (not logger.exception()) to avoid duplicate stack trace output.
    • When catching and logging exceptions without re-raising: always use logger.exception()
      to capture the full stack trace information.
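
The two exception-handling patterns above can be sketched as follows (a minimal illustration; the config-parsing functions and the use of `json` are hypothetical stand-ins, not code from this repo):

```python
import json
import logging

logger = logging.getLogger(__name__)


def parse_config(raw: str) -> dict:
    """Re-raise path: bare `raise` preserves the original traceback."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # logger.error() logs the message only; the bare raise below carries
        # the stack trace, so logger.exception() would duplicate it.
        logger.error("Failed to parse config")
        raise


def parse_config_or_default(raw: str) -> dict:
    """Swallow path: logger.exception() records the full stack trace."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Not re-raising, so capture the traceback here.
        logger.exception("Falling back to empty config")
        return {}
```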

Documentation Review Instructions

  • Verify that documentation and comments are clear and comprehensive.
  • Verify that the documentation doesn't contain any TODOs, FIXMEs or placeholder text like "lorem ipsum".
  • Verify that the documentation doesn't contain any offensive or outdated terms.
  • Verify that documentation and comments are free of spelling mistakes; ensure the documentation doesn't contain any words listed in the ci/vale/styles/config/vocabularies/nat/reject.txt file. Words that might appear to be spelling mistakes but are listed in the ci/vale/styles/config/vocabularies/nat/accept.txt file are OK.

Misc.

  • All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0, and should contain an Apache License 2.0 header comment at the top of each file.
  • Confirm that copyright years are up-to-date whenever a file is changed.

Files:

  • examples/observability/simple_calculator_observability/README.md
  • docs/source/workflows/profiler.md
examples/**/*

⚙️ CodeRabbit configuration file

examples/**/*:

  • This directory contains example code and usage scenarios for the toolkit; at a minimum an example should contain a README.md or README.ipynb file.
  • If an example contains Python code, it should be placed in a subdirectory named src/ and should contain a pyproject.toml file. Optionally, it might also contain scripts in a scripts/ directory.
  • If an example contains YAML files, they should be placed in a subdirectory named configs/.
  • If an example contains sample data files, they should be placed in a subdirectory named data/, and should be checked into git-lfs.

Files:

  • examples/observability/simple_calculator_observability/README.md
docs/source/**/*.md

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

docs/source/**/*.md: Use the official naming: first use “NVIDIA NeMo Agent toolkit”; subsequent uses “NeMo Agent toolkit”; never use deprecated names in documentation
Documentation sources must be Markdown under docs/source; keep docs in sync and fix Sphinx errors/broken links
Documentation must be clear, comprehensive, free of TODO/FIXME/placeholders/offensive/outdated terms; fix spelling; adhere to Vale vocab allow/reject lists

Files:

  • docs/source/workflows/profiler.md
docs/source/**/*

⚙️ CodeRabbit configuration file

This directory contains the source code for the documentation. All documentation should be written in Markdown format. Any image files should be placed in the docs/source/_static directory.

Files:

  • docs/source/workflows/profiler.md
🔇 Additional comments (3)
examples/observability/simple_calculator_observability/README.md (2)

98-98: LGTM: LangChain/LangGraph terminology aligned.

The updated phrasing correctly reflects LangChain/LangGraph ecosystem support.


195-195: LGTM: Table updated to “LangChain/LangGraph” ecosystem.

Matches the PR objective and keeps terminology consistent.

docs/source/workflows/profiler.md (1)

437-442: Inconsistent model version: 3.1 vs 3.3. Pick one and use consistently.

#!/bin/bash
# Verify inconsistent model versions in this file
rg -n 'llama-3\.(1|3)-70b-instruct' docs/source/workflows/profiler.md -C1

Signed-off-by: Daniel Wang <daniewang@nvidia.com>
Member

@willkill07 willkill07 left a comment


One nit; otherwise LGTM

zhongxuanwang-nv and others added 2 commits September 9, 2025 12:40
Co-authored-by: Will Killian <2007799+willkill07@users.noreply.github.com>
Signed-off-by: Zhongxuan (Daniel) Wang <zxwang2004@gmail.com>
Signed-off-by: Daniel Wang <daniewang@nvidia.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
ci/vale/styles/config/vocabularies/nat/accept.txt (1)

1-4: Add the required SPDX header at file top to satisfy CI.

This .txt file is missing the standard SPDX Apache-2.0 header mandated by the repo guidelines and CI checks.

Apply at the very top:

+# SPDX-FileCopyrightText: 2025 NVIDIA CORPORATION & AFFILIATES
+# SPDX-License-Identifier: Apache-2.0
 # List of case-sensitive regular expressions matching words that should be accepted by Vale. For product names like
 # "cuDF" or "cuML", we want to ensure that they are capitalized the same way they're written by the product owners.
 # Regular expressions are parsed according to the Go syntax: https://golang.org/pkg/regexp/syntax/
🧹 Nitpick comments (1)
ci/vale/styles/config/vocabularies/nat/accept.txt (1)

145-147: Fix/Remove likely typo: “[We]ebSocket”.

Line 146 appears malformed and redundant given the preceding “[Ww]eb[Ss]ocket” pattern.

Apply either of the following (prefer removal due to redundancy):

-[We]ebSocket

or, if you want a dedicated rule, correct the character class:

-[We]ebSocket
+[Ww]ebSocket
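
For reference, a quick demonstration of why the flagged pattern is malformed: a bracketed character class matches exactly one character, so `[We]ebSocket` never matches `webSocket`. This sketch uses Python's `re`, whose character-class semantics match Go's here:

```python
import re

# "[We]" is a character class matching a single 'W' or 'e', so the
# flagged pattern accepts "WebSocket" and the nonsense "eebSocket",
# but not "webSocket".
assert re.fullmatch(r"[We]ebSocket", "WebSocket")
assert re.fullmatch(r"[We]ebSocket", "eebSocket")
assert re.fullmatch(r"[We]ebSocket", "webSocket") is None

# The corrected class "[Ww]ebSocket" covers both capitalizations.
assert re.fullmatch(r"[Ww]ebSocket", "WebSocket")
assert re.fullmatch(r"[Ww]ebSocket", "webSocket")
```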
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bf0ff77 and 74ebd7b.

📒 Files selected for processing (1)
  • ci/vale/styles/config/vocabularies/nat/accept.txt (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}: Every file must start with the standard SPDX Apache-2.0 header; keep copyright years up‑to‑date
All source files must include the SPDX Apache‑2.0 header; do not bypass CI header checks

Files:

  • ci/vale/styles/config/vocabularies/nat/accept.txt
**/*

⚙️ CodeRabbit configuration file

**/*: # Code Review Instructions

  • Ensure the code follows best practices and coding standards.
  • For Python code, follow PEP 20 and PEP 8 for style guidelines.
  • Check for security vulnerabilities and potential issues.
  • Python methods should use type hints for all parameters and return values.
    Example:
    def my_function(param1: int, param2: str) -> bool:
        pass
  • For Python exception handling, ensure proper stack trace preservation:
    • When re-raising exceptions: use bare raise statements to maintain the original stack trace,
      and use logger.error() (not logger.exception()) to avoid duplicate stack trace output.
    • When catching and logging exceptions without re-raising: always use logger.exception()
      to capture the full stack trace information.

Documentation Review Instructions

  • Verify that documentation and comments are clear and comprehensive.
  • Verify that the documentation doesn't contain any TODOs, FIXMEs or placeholder text like "lorem ipsum".
  • Verify that the documentation doesn't contain any offensive or outdated terms.
  • Verify that documentation and comments are free of spelling mistakes; ensure the documentation doesn't contain any words listed in the ci/vale/styles/config/vocabularies/nat/reject.txt file. Words that might appear to be spelling mistakes but are listed in the ci/vale/styles/config/vocabularies/nat/accept.txt file are OK.

Misc.

  • All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0, and should contain an Apache License 2.0 header comment at the top of each file.
  • Confirm that copyright years are up-to-date whenever a file is changed.

Files:

  • ci/vale/styles/config/vocabularies/nat/accept.txt
🔇 Additional comments (1)
ci/vale/styles/config/vocabularies/nat/accept.txt (1)

74-74: LGTM: New accepted token matches PR intent.

Accepting “LangChain/LangGraph” is consistent with the repo-wide terminology update and Vale regex semantics; no escaping needed.

Signed-off-by: Daniel Wang <daniewang@nvidia.com>
Signed-off-by: Daniel Wang <daniewang@nvidia.com>
@zhongxuanwang-nv zhongxuanwang-nv merged commit f55fa24 into NVIDIA:develop Sep 9, 2025
17 checks passed

Labels

doc (Improvements or additions to documentation), non-breaking (Non-breaking change)


2 participants