fix: apply coderabbit suggestions from forward merge #1089
Conversation
Walkthrough

Refactors evaluator metric names across documentation and configuration files by removing RAG prefixes (`rag_accuracy` → `accuracy`, `rag_groundedness` → `groundedness`, `rag_relevance` → `relevance`) and reorders evaluator key mappings in example configuration files, with supporting documentation and reference updates.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- `docs/source/reference/evaluate.md` (1 hunk)
- `docs/source/workflows/evaluate.md` (2 hunks)
- `docs/source/workflows/mcp/mcp-client.md` (1 hunk)
- `docs/source/workflows/profiler.md` (1 hunk)
- `examples/agents/react/configs/config.yml` (1 hunk)
- `examples/agents/rewoo/configs/config.yml` (1 hunk)
- `examples/agents/tool_calling/configs/config.yml` (1 hunk)
- `examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py` (1 hunk)
- `examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py` (1 hunk)
🧰 Additional context used
📓 Path-based instructions (8)
{docs/source/**/*.md,**/README.@(md|ipynb)}
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
{docs/source/**/*.md,**/README.@(md|ipynb)}: Use the full name “NVIDIA NeMo Agent toolkit” on first use in documentation, then “NeMo Agent toolkit”; in headings use “NeMo Agent Toolkit” (capital T)
Do not use deprecated names (Agent Intelligence toolkit, aiqtoolkit, AgentIQ, AIQ/aiq) in documentation unless explicitly referring to deprecated names
Never use “NAT”/“nat” abbreviations in documentation
Documentation must be clear/comprehensive; avoid TODOs/FIXMEs/placeholders; avoid offensive/outdated terms; ensure spelling is correct
Files:
`docs/source/workflows/profiler.md`, `docs/source/reference/evaluate.md`, `docs/source/workflows/mcp/mcp-client.md`, `docs/source/workflows/evaluate.md`
docs/source/**/*.md
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
docs/source/**/*.md: Documentation sources are Markdown files under docs/source
Surround code entities with backticks in docs to avoid Vale false positives
Do not use words listed in ci/vale/styles/config/vocabularies/nat/reject.txt; accepted words in accept.txt are allowed
Files:
`docs/source/workflows/profiler.md`, `docs/source/reference/evaluate.md`, `docs/source/workflows/mcp/mcp-client.md`, `docs/source/workflows/evaluate.md`
{**/*.py,**/*.sh,**/*.md,**/*.toml,**/*.y?(a)ml,**/*.json,**/*.txt,**/*.ini,**/*.cfg,**/*.ipynb}
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
{**/*.py,**/*.sh,**/*.md,**/*.toml,**/*.y?(a)ml,**/*.json,**/*.txt,**/*.ini,**/*.cfg,**/*.ipynb}: Every file must start with the standard SPDX Apache-2.0 header
Confirm copyright years are up to date when a file is changed
All source files must include the SPDX Apache-2.0 header template (copy from an existing file)
Files:
`docs/source/workflows/profiler.md`, `docs/source/reference/evaluate.md`, `examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py`, `docs/source/workflows/mcp/mcp-client.md`, `docs/source/workflows/evaluate.md`, `examples/agents/react/configs/config.yml`, `examples/agents/rewoo/configs/config.yml`, `examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py`, `examples/agents/tool_calling/configs/config.yml`
**/*
⚙️ CodeRabbit configuration file
**/*: # Code Review Instructions
- Ensure the code follows best practices and coding standards.
- For Python code, follow PEP 20 and PEP 8 for style guidelines.
- Check for security vulnerabilities and potential issues.
- Python methods should use type hints for all parameters and return values. Example: `def my_function(param1: int, param2: str) -> bool: pass`
- For Python exception handling, ensure proper stack trace preservation:
  - When re-raising exceptions: use bare `raise` statements to maintain the original stack trace, and use `logger.error()` (not `logger.exception()`) to avoid duplicate stack trace output.
  - When catching and logging exceptions without re-raising: always use `logger.exception()` to capture the full stack trace information.

Documentation Review Instructions
- Verify that documentation and comments are clear and comprehensive.
- Verify that the documentation doesn't contain any TODOs, FIXMEs or placeholder text like "lorem ipsum".
- Verify that the documentation doesn't contain any offensive or outdated terms.
- Verify that documentation and comments are free of spelling mistakes; ensure the documentation doesn't contain any words listed in the `ci/vale/styles/config/vocabularies/nat/reject.txt` file. Words that might appear to be spelling mistakes but are listed in the `ci/vale/styles/config/vocabularies/nat/accept.txt` file are OK.

Misc.
- All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0, and should contain an Apache License 2.0 header comment at the top of each file.
- Confirm that copyright years are up-to-date whenever a file is changed.
Files:
`docs/source/workflows/profiler.md`, `docs/source/reference/evaluate.md`, `examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py`, `docs/source/workflows/mcp/mcp-client.md`, `docs/source/workflows/evaluate.md`, `examples/agents/react/configs/config.yml`, `examples/agents/rewoo/configs/config.yml`, `examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py`, `examples/agents/tool_calling/configs/config.yml`
docs/source/**/*
⚙️ CodeRabbit configuration file
This directory contains the source code for the documentation. All documentation should be written in Markdown format. Any image files should be placed in the `docs/source/_static` directory.
Files:
`docs/source/workflows/profiler.md`, `docs/source/reference/evaluate.md`, `docs/source/workflows/mcp/mcp-client.md`, `docs/source/workflows/evaluate.md`
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
**/*.py: In code comments use the abbreviations: nat (API namespace/CLI), nvidia-nat (package), NAT (env var prefixes); never use these abbreviations in documentation
Follow PEP 20 and PEP 8 for Python style
Run yapf with column_limit=120; yapf is used for formatting (run second)
Indent with 4 spaces (no tabs) and end each file with a single trailing newline
Use ruff (ruff check --fix) as a linter (not formatter) per pyproject.toml; fix warnings unless explicitly ignored
Respect Python naming schemes: snake_case for functions/variables, PascalCase for classes, UPPER_CASE for constants
When re-raising exceptions, use bare raise to preserve stack trace; log with logger.error(), not logger.exception()
When catching and logging without re-raising, use logger.exception() to capture full stack trace
Provide Google-style docstrings for every public module, class, function, and CLI command
Docstring first line must be a concise description ending with a period
Validate and sanitize all user input, especially in web or CLI interfaces
Prefer httpx with SSL verification enabled by default and follow OWASP Top-10 recommendations
Use async/await for I/O-bound work (HTTP, DB, file I/O)
Cache expensive computations with functools.lru_cache or an external cache when appropriate
Leverage NumPy vectorized operations when beneficial and feasible
Files:
`examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py`, `examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py`
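As a small sketch of the Python conventions listed above (type hints on all parameters and returns, a Google-style docstring with a one-line summary ending in a period, and `functools.lru_cache` for caching); the function itself is illustrative and not part of the toolkit:

```python
from functools import lru_cache


@lru_cache(maxsize=128)
def count_vowels(text: str) -> int:
    """Count the vowels in a string.

    Args:
        text: The input string to scan.

    Returns:
        The number of ASCII vowel characters in ``text``.
    """
    return sum(1 for ch in text.lower() if ch in "aeiou")
```

`lru_cache` is appropriate here only because the function is pure; caching a function with side effects or unhashable arguments would be a bug.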
examples/**/*
⚙️ CodeRabbit configuration file
examples/**/*:
- This directory contains example code and usage scenarios for the toolkit; at a minimum an example should contain a `README.md` or `README.ipynb` file.
- If an example contains Python code, it should be placed in a subdirectory named `src/` and should contain a `pyproject.toml` file. Optionally, it might also contain scripts in a `scripts/` directory.
- If an example contains YAML files, they should be placed in a subdirectory named `configs/`.
- If an example contains sample data files, they should be placed in a subdirectory named `data/`, and should be checked into git-lfs.
Files:
`examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py`, `examples/agents/react/configs/config.yml`, `examples/agents/rewoo/configs/config.yml`, `examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py`, `examples/agents/tool_calling/configs/config.yml`
**/configs/**
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Configuration files consumed by code should be stored next to that code in a configs/ folder
Files:
`examples/agents/react/configs/config.yml`, `examples/agents/rewoo/configs/config.yml`, `examples/agents/tool_calling/configs/config.yml`
🔇 Additional comments (9)
examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py (1)
139-139: Typo fix in assertion message is appropriate. Correcting "whould" to "should" improves clarity in test failure messages.
docs/source/reference/evaluate.md (1)
443-443: Evaluator metric naming clarification. Updating the sample output label from `rag_accuracy` to `accuracy` aligns with the PR's standardization of evaluator metric names. The underlying JSON structure remains unchanged, so this is a documentation-only update for clarity.

docs/source/workflows/mcp/mcp-client.md (1)
59-59: Verify YAML configuration key name matches schema. The example changes `workflows:` to `workflow:` on line 59. Please confirm this aligns with the actual configuration schema used by the MCP client component. If the schema was recently changed in source code, ensure all documentation examples are consistently updated.

examples/agents/rewoo/configs/config.yml (1)
57-64: Evaluator key reordering aligns with metric naming updates. The evaluator configuration is reordered so that `relevance` binds to `ContextRelevance` and `groundedness` binds to `ResponseGroundedness`. This appears intentional and consistent with the PR's metric naming standardization, assuming output file names align (e.g., `relevance_output.json`).

examples/agents/react/configs/config.yml (1)
55-62: Evaluator naming consistent with other agent configurations. The reordering mirrors changes in other agent configs (rewoo, tool_calling), maintaining consistency across the example suite.
docs/source/workflows/evaluate.md (2)
264-266: Documentation formatting improvement. Converting the cleanup behavior explanation to a reusable `:::{note}` admonition block improves consistency with other admonitions in the documentation and enhances readability.
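For reference, a MyST-style admonition of the kind described above uses the colon-fence syntax; the wording here is illustrative, not the actual text from `evaluate.md`:

```markdown
:::{note}
By default, the evaluation output directory is cleaned up before each run.
Adjust the evaluation configuration if previous results should be kept.
:::
```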
295-295: Evaluator output filename updated to match metric naming convention. Changing the sample path from `rag_accuracy_output.json` to `accuracy_output.json` aligns with the standardized metric naming across the documentation suite.

examples/agents/tool_calling/configs/config.yml (1)
55-62: Evaluator naming consistent across agent configurations. This change mirrors the evaluator key reordering applied to react and rewoo agents, maintaining consistency across the example suite.
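Putting the review comments together, the renamed evaluator section in these agent configs plausibly has this shape. The `_type` value and the `AnswerAccuracy` metric name are assumptions for illustration; `ContextRelevance` and `ResponseGroundedness` are the bindings named in the comments above:

```yaml
eval:
  evaluators:
    accuracy:
      _type: ragas            # assumed evaluator type
      metric: AnswerAccuracy  # assumed metric name
    groundedness:
      _type: ragas
      metric: ResponseGroundedness
    relevance:
      _type: ragas
      metric: ContextRelevance
```

Under this shape, per-evaluator output files would follow the key names (e.g., `accuracy_output.json`, `relevance_output.json`), which matches the filename change reviewed below.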
docs/source/workflows/profiler.md (1)
272-274: Metric renaming correctly applied and consistent with configuration examples. The changes correctly update the evaluator metric descriptions from rag-prefixed names (`rag_accuracy`, `rag_groundedness`, `rag_relevance`) to their non-prefixed equivalents (`accuracy`, `groundedness`, `relevance`). This aligns with the configuration examples shown earlier in the file (lines 156-168) and the PR objectives. Code entities are properly wrapped in backticks per documentation guidelines.
examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py (outdated, resolved)
Signed-off-by: Will Killian <wkillian@nvidia.com>
Force-pushed from 32c5a17 to e9bc4a8
/merge
Description
This PR contains documentation and example fixes that were mentioned in the forward merge PR from post 1.3.0-rc6 -> develop.
Closes
By Submitting this PR I confirm:
Summary by CodeRabbit
Documentation
Chores
Bug Fixes