Enable running tests for examples #838
Conversation
Signed-off-by: David Gardner <dagardner@nvidia.com>
Walkthrough
Adjusts test discovery and markers, adds session fixtures for API keys, replaces some e2e markers with integration/usefixtures or skip decorators, introduces autouse prerequisites for the profiler tests, and refactors an object-store user-report configuration while skipping its tests. Minor test input/assertion tweaks are included.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Pytest
    participant Fixture_profiler as require_profiler_agent (autouse)
    participant Fixture_phoenix as require_phoenix_server (autouse)
    participant Tests
    Pytest->>Fixture_profiler: setup autouse
    alt PROFILER_AGENT_AVAILABLE == false
        Fixture_profiler-->>Pytest: skip or raise (based on fail_missing)
    else
        Fixture_profiler-->>Pytest: proceed
        Pytest->>Fixture_phoenix: setup autouse
        alt Phoenix reachable (HTTP 200 /v1/traces)
            Fixture_phoenix-->>Pytest: proceed
            Pytest->>Tests: run tests (some tests marked @skip, @integration)
        else
            Fixture_phoenix-->>Pytest: skip or raise (based on fail_missing)
        end
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Suggested labels
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests
… skip to be fixed later Signed-off-by: David Gardner <dagardner@nvidia.com>
… api key and a tavily key Signed-off-by: David Gardner <dagardner@nvidia.com>
…nly passing as the output was '[401] unauthorized' Signed-off-by: David Gardner <dagardner@nvidia.com>
Actionable comments posted: 3
🧹 Nitpick comments (12)
examples/object_store/user_report/tests/test_objext_store_example_user_report_tool.py (3)
50-71: Update function lookups to match the group’s exposed keys.
With the new FunctionGroup, functions are exposed as "get", "put", "update", "delete" (not "get_user_report", etc.). Adjust fixtures accordingly.
Example update:
```python
@pytest.fixture
async def get_fn(builder):
    return builder.get_function("get")


@pytest.fixture
async def put_fn(builder):
    return builder.get_function("put")


@pytest.fixture
async def update_fn(builder):
    return builder.get_function("update")


@pytest.fixture
async def delete_fn(builder):
    return builder.get_function("delete")
```
74-74: Prefer xfail over skip to keep these tests visible in discovery.
Use xfail to surface when the tests start passing after updates, while not failing CI.
```python
@pytest.mark.xfail(reason="Tests need to be updated to match group changes", strict=False)
class TestUserReportTools:
    ...
```
1-1: Filename typo: rename to test_object_store_example_user_report_tool.py.
Improves discoverability and avoids confusion.
examples/advanced_agents/alert_triage_agent/tests/test_alert_triage_agent_workflow.py (1)
32-33: Prefer xfail (tracked) over unconditional skip; add e2e gating.
Unconditional skip hides regressions even when running with --run_e2e. Use xfail to keep visibility and add an e2e marker.
```diff
-@pytest.mark.skip(reason="Test hangs, needs investigation")
+@pytest.mark.e2e
+@pytest.mark.xfail(reason="Test hangs, needs investigation (tracked)", strict=False)
 async def test_full_workflow():
```

examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py (1)
104-106: Use xfail to surface status in CI instead of hard skip.
Keeps the test discoverable; still won’t fail the suite.
```diff
-@pytest.mark.skip(reason="Failing accuracy checks, need to verify/update")
+@pytest.mark.xfail(reason="Failing accuracy checks, need to verify/update (tracked)", strict=False)
 @pytest.mark.e2e
 async def test_eval():
```

examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py (1)
81-84: Switch to xfail to track the TypeError without masking the test.
This keeps CI signal while avoiding suite failures.
```diff
-@pytest.mark.skip(reason="Raises a TypeError")
+@pytest.mark.xfail(reason="Currently raises a TypeError (tracked)", strict=False)
 @pytest.mark.e2e
 async def test_eval():
```

examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py (1)
44-45: Relax assertion to reduce brittleness.
Substring check is less fragile than startswith for minor message changes.
```diff
- assert result.startswith("successfully created line chart")
+ assert "successfully created line chart" in result
```

packages/nvidia_nat_test/src/nat/test/plugin.py (1)
158-189: New API‑key fixtures look consistent and reusable.
Pattern matches existing fixtures; session scope and fail_missing integration are correct.
If desired, dedupe via a small helper factory to generate similar fixtures for env‑backed keys.
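Such a factory could look like the sketch below; the helper names (`get_env_key`, `env_key_fixture`) and the `fail_missing` fixture are assumptions for illustration, not the plugin's actual API.

```python
import os
from collections.abc import Callable

import pytest


def get_env_key(env_var: str, fail_missing: bool) -> str:
    """Return the value of ``env_var``; skip the test (or raise when --fail_missing) if unset."""
    value = os.environ.get(env_var)
    if not value:
        reason = f"{env_var} is not set"
        if fail_missing:
            raise RuntimeError(reason)
        pytest.skip(reason)
    return value


def env_key_fixture(env_var: str) -> Callable:
    """Build a session-scoped fixture exposing an environment-backed API key."""

    @pytest.fixture(scope="session", name=env_var.lower())
    def fixture_env_key(fail_missing: bool) -> str:
        return get_env_key(env_var, fail_missing)

    return fixture_env_key


# Hypothetical registrations that would replace the near-identical hand-written fixtures:
nvidia_api_key = env_key_fixture("NVIDIA_API_KEY")
tavily_api_key = env_key_fixture("TAVILY_API_KEY")
```

Adding another key then becomes a one-line registration instead of a copy-pasted fixture body.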
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py (1)
33-34: DRY the repeated usefixtures via a module-level marker.
Reduce duplication by applying the fixture once for the module.
Apply within the selected ranges:
```diff
-@pytest.mark.usefixtures("nvidia_api_key")
-async def test_inequality_tool_workflow():
+async def test_inequality_tool_workflow():
```

```diff
-@pytest.mark.usefixtures("nvidia_api_key")
-async def test_multiply_tool_workflow():
+async def test_multiply_tool_workflow():
```

```diff
-@pytest.mark.usefixtures("nvidia_api_key")
-async def test_division_tool_workflow():
+async def test_division_tool_workflow():
```

Add at module level (place near imports):
```python
pytestmark = pytest.mark.usefixtures("nvidia_api_key")
```

Also applies to: 51-52, 69-70
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py (3)
63-65: Mark async tests with pytest-asyncio.
Unless asyncio_mode=auto is set in pytest.ini, these will error. Add the marker.
```diff
 @pytest.mark.skip(reason="Raises a ValueError")
 @pytest.mark.integration
+@pytest.mark.asyncio
 async def test_flow_chart_tool():
```

Please confirm if pytest-asyncio is configured with asyncio_mode=auto; if so, we can drop the marker.
76-78: Add asyncio marker to second async test.

```diff
 @pytest.mark.skip(reason="Raises a ValueError")
 @pytest.mark.integration
+@pytest.mark.asyncio
 async def test_token_usage_tool():
```

Also, are these permanent skips? If not, consider linking a tracking issue in the reason.
34-61: Align fixture naming with project convention (fixture_ prefix + name=...).
Matches our retrieved learnings for tests.
The diffs above already rename to fixture_* and set name=...; keep if you adopt them.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
examples/advanced_agents/alert_triage_agent/tests/test_alert_triage_agent_workflow.py (2 hunks)
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py (2 hunks)
examples/agents/rewoo/tests/test_rewoo_agent.py (1 hunks)
examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py (1 hunks)
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py (1 hunks)
examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py (1 hunks)
examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py (1 hunks)
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py (1 hunks)
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py (1 hunks)
examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py (1 hunks)
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py (4 hunks)
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py (1 hunks)
examples/object_store/user_report/tests/test_objext_store_example_user_report_tool.py (2 hunks)
packages/nvidia_nat_test/src/nat/test/plugin.py (1 hunks)
pyproject.toml (1 hunks)
🧰 Additional context used
📓 Path-based instructions (11)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
**/*.py: Follow PEP 8/20 style; format with yapf (column_limit=120) and use 4-space indentation; end files with a single newline
Run ruff (ruff check --fix) per pyproject.toml; fix warnings unless explicitly ignored; ruff is linter-only
Use snake_case for functions/variables, PascalCase for classes, and UPPER_CASE for constants
Treat pyright warnings as errors during development
Exception handling: preserve stack traces and avoid duplicate logging
When re-raising exceptions, use bare raise and log with logger.error(), not logger.exception()
When catching and not re-raising, log with logger.exception() to capture stack trace
Validate and sanitize all user input; prefer httpx with SSL verification and follow OWASP Top‑10
Use async/await for I/O-bound work; profile CPU-heavy paths with cProfile/mprof; cache with functools.lru_cache or external cache; leverage NumPy vectorization when beneficial
**/*.py: Programmatic use: create TestLLMConfig(response_seq=[...], delay_ms=...), add with builder.add_llm("", cfg).
When retrieving the test LLM wrapper, use builder.get_llm(name, wrapper_type=LLMFrameworkEnum.) and call the framework’s method (e.g., ainvoke, achat, call).
Files:
examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
packages/nvidia_nat_test/src/nat/test/plugin.py
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py
examples/agents/rewoo/tests/test_rewoo_agent.py
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
examples/advanced_agents/alert_triage_agent/tests/test_alert_triage_agent_workflow.py
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py
examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py
examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py
examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py
examples/object_store/user_report/tests/test_objext_store_example_user_report_tool.py
**/tests/test_*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Test files must be named test_*.py and placed under a tests/ folder
Files:
examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py
examples/agents/rewoo/tests/test_rewoo_agent.py
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
examples/advanced_agents/alert_triage_agent/tests/test_alert_triage_agent_workflow.py
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py
examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py
examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py
examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py
examples/object_store/user_report/tests/test_objext_store_example_user_report_tool.py
**/tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
**/tests/**/*.py: Test functions must use the test_ prefix and snake_case
Extract repeated test code into pytest fixtures; fixtures should set name=... in @pytest.fixture and functions named with fixture_ prefix
Mark expensive tests with @pytest.mark.slow or @pytest.mark.integration
Use pytest with pytest-asyncio for async code; mock external services with pytest_httpserver or unittest.mock
Files:
examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py
examples/agents/rewoo/tests/test_rewoo_agent.py
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
examples/advanced_agents/alert_triage_agent/tests/test_alert_triage_agent_workflow.py
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py
examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py
examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py
examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py
examples/object_store/user_report/tests/test_objext_store_example_user_report_tool.py
**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}: Every file must start with the standard SPDX Apache-2.0 header; keep copyright years up‑to‑date
All source files must include the SPDX Apache‑2.0 header; do not bypass CI header checks
Files:
examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
pyproject.toml
packages/nvidia_nat_test/src/nat/test/plugin.py
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py
examples/agents/rewoo/tests/test_rewoo_agent.py
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
examples/advanced_agents/alert_triage_agent/tests/test_alert_triage_agent_workflow.py
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py
examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py
examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py
examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py
examples/object_store/user_report/tests/test_objext_store_example_user_report_tool.py
**/*.{py,md}
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Never hard‑code version numbers in code or docs; versions are derived by setuptools‑scm
Files:
examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
packages/nvidia_nat_test/src/nat/test/plugin.py
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py
examples/agents/rewoo/tests/test_rewoo_agent.py
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
examples/advanced_agents/alert_triage_agent/tests/test_alert_triage_agent_workflow.py
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py
examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py
examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py
examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py
examples/object_store/user_report/tests/test_objext_store_example_user_report_tool.py
**/*.{py,yaml,yml}
📄 CodeRabbit inference engine (.cursor/rules/nat-test-llm.mdc)
**/*.{py,yaml,yml}: Configure response_seq as a list of strings; values cycle per call, and [] yields an empty string.
Configure delay_ms to inject per-call artificial latency in milliseconds for nat_test_llm.
Files:
examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
packages/nvidia_nat_test/src/nat/test/plugin.py
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py
examples/agents/rewoo/tests/test_rewoo_agent.py
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
examples/advanced_agents/alert_triage_agent/tests/test_alert_triage_agent_workflow.py
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py
examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py
examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py
examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py
examples/object_store/user_report/tests/test_objext_store_example_user_report_tool.py
**/*
⚙️ CodeRabbit configuration file
**/*: # Code Review Instructions
- Ensure the code follows best practices and coding standards.
- For Python code, follow PEP 20 and PEP 8 for style guidelines.
- Check for security vulnerabilities and potential issues.
- Python methods should use type hints for all parameters and return values. Example: def my_function(param1: int, param2: str) -> bool: pass
- For Python exception handling, ensure proper stack trace preservation:
  - When re-raising exceptions: use bare raise statements to maintain the original stack trace, and use logger.error() (not logger.exception()) to avoid duplicate stack trace output.
  - When catching and logging exceptions without re-raising: always use logger.exception() to capture the full stack trace information.
Documentation Review Instructions
- Verify that documentation and comments are clear and comprehensive.
- Verify that the documentation doesn't contain any TODOs, FIXMEs or placeholder text like "lorem ipsum".
- Verify that the documentation doesn't contain any offensive or outdated terms.
- Verify that documentation and comments are free of spelling mistakes; ensure the documentation doesn't contain any words listed in the ci/vale/styles/config/vocabularies/nat/reject.txt file. Words that might appear to be spelling mistakes but are listed in the ci/vale/styles/config/vocabularies/nat/accept.txt file are OK.
Misc.
- All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0, and should contain an Apache License 2.0 header comment at the top of each file.
- Confirm that copyright years are up-to-date whenever a file is changed.
Files:
examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
pyproject.toml
packages/nvidia_nat_test/src/nat/test/plugin.py
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py
examples/agents/rewoo/tests/test_rewoo_agent.py
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
examples/advanced_agents/alert_triage_agent/tests/test_alert_triage_agent_workflow.py
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py
examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py
examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py
examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py
examples/object_store/user_report/tests/test_objext_store_example_user_report_tool.py
examples/**/*
⚙️ CodeRabbit configuration file
examples/**/*:
- This directory contains example code and usage scenarios for the toolkit; at a minimum an example should contain a README.md or README.ipynb file.
- If an example contains Python code, it should be placed in a subdirectory named src/ and should contain a pyproject.toml file. Optionally, it might also contain scripts in a scripts/ directory.
- If an example contains YAML files, they should be placed in a subdirectory named configs/.
- If an example contains sample data files, they should be placed in a subdirectory named data/, and should be checked into git-lfs.
Files:
examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py
examples/agents/rewoo/tests/test_rewoo_agent.py
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
examples/advanced_agents/alert_triage_agent/tests/test_alert_triage_agent_workflow.py
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py
examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py
examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py
examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py
examples/object_store/user_report/tests/test_objext_store_example_user_report_tool.py
pyproject.toml
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Add new dependencies to pyproject.toml in alphabetical order
Files:
pyproject.toml
packages/*/src/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
packages/*/src/**/*.py: All importable Python code in packages must live under packages/*/src/
All public APIs in packaged code require Python 3.11+ type hints; prefer typing/collections.abc; use typing.Annotated when useful
Provide Google-style docstrings for public APIs in packages; first line concise with a period; use backticks for code entities
Files:
packages/nvidia_nat_test/src/nat/test/plugin.py
packages/**/*
⚙️ CodeRabbit configuration file
packages/**/*:
- This directory contains optional plugin packages for the toolkit; each should contain a pyproject.toml file.
- The pyproject.toml file should declare a dependency on nvidia-nat or another package with a name starting with nvidia-nat-. This dependency should be declared using ~=<version>, and the version should be a two digit version (ex: ~=1.0).
- Not all packages contain Python code; if they do, they should also contain their own set of tests, in a tests/ directory at the same level as the pyproject.toml file.
Files:
packages/nvidia_nat_test/src/nat/test/plugin.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-08-28T23:22:41.742Z
Learning: Applies to **/tests/**/*.py : Extract repeated test code into pytest fixtures; fixtures should set name=... in pytest.fixture and functions named with fixture_ prefix
📚 Learning: 2025-08-28T23:22:41.742Z
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-08-28T23:22:41.742Z
Learning: Applies to **/tests/**/*.py : Extract repeated test code into pytest fixtures; fixtures should set name=... in pytest.fixture and functions named with fixture_ prefix
Applied to files:
packages/nvidia_nat_test/src/nat/test/plugin.py
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py
🧬 Code graph analysis (1)
examples/object_store/user_report/tests/test_objext_store_example_user_report_tool.py (2)
examples/object_store/user_report/src/nat_user_report/user_report_tools.py (5)
UserReportConfig (31-43), user_report_group (47-104), put (68-79), update (81-89), delete (91-97)
src/nat/data_models/component_ref.py (1)
ObjectStoreRef(138-146)
🪛 Ruff (0.13.1)
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
53-53: Probable use of requests call without timeout
(S113)
55-55: Abstract raise to an inner function
(TRY301)
55-55: Avoid specifying long messages outside the exception class
(TRY003)
56-56: Do not catch blind exception: Exception
(BLE001)
59-59: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
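A version of the Phoenix check that would satisfy all five Ruff findings might look like the sketch below; the exception class, URL constant, and `fail_missing` parameter are assumed names for illustration, not the file's actual code.

```python
import pytest
import requests

PHOENIX_TRACES_URL = "http://localhost:6006/v1/traces"  # assumed endpoint


class PhoenixUnavailableError(RuntimeError):
    """Raised when the Phoenix server is unreachable and --fail_missing is set (TRY003: short message)."""


def check_phoenix(fail_missing: bool, timeout_s: float = 5.0) -> None:
    try:
        # S113: always pass an explicit timeout to requests calls
        response = requests.get(PHOENIX_TRACES_URL, timeout=timeout_s)
        reachable = response.status_code == 200
    except requests.RequestException as err:  # BLE001: catch the library type, not bare Exception
        if fail_missing:
            # B904: chain the original error with `from err`
            raise PhoenixUnavailableError("Phoenix server is not reachable") from err
        reachable = False
    if not reachable:
        if fail_missing:
            raise PhoenixUnavailableError("Phoenix server is not reachable")
        pytest.skip("Phoenix server is not reachable")
```

Narrowing the except clause to requests.RequestException and chaining with `from err` keeps the original connection failure visible in the traceback.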
🔇 Additional comments (8)
examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py (1)
36-38: Good fixture gating for credentials.
Using @pytest.mark.usefixtures("nvidia_api_key") with @pytest.mark.e2e aligns with repo test gating.
Please confirm this test actually hits NVIDIA endpoints; if purely local, you can drop the fixture to avoid unnecessary env coupling.
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py (1)
31-32: Appropriate use of shared fixtures.
Injecting serp_api_key and openai_api_key via usefixtures is consistent with the test plugin’s contract.
pyproject.toml (1)
280-281: Incorrect — pytest supports glob patterns in testpaths.
pytest treats testpaths entries as glob patterns (uses glob.iglob with recursive=True), so "examples//tests" and "packages//tests" are valid; no change required unless you want different discovery semantics.
Likely an incorrect or invalid review comment.
examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py (1)
30-32: LGTM; confirm mem0 fixture availability/skip semantics.
If mem0 is optional for this workflow, ensure the fixture skips when env isn’t set.
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py (1)
30-32: LGTM; consistent with fixture-driven setup.
No issues noted.
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py (1)
78-84: LGTM: updated prompt/assert stay consistent.
Change from 8/2→4 to 12/2→6 looks correct.
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py (1)
31-32: LGTM on usefixtures.
Aligned with the project-wide fixture approach.
examples/agents/rewoo/tests/test_rewoo_agent.py (1)
36-38: LGTM — fixtures auto-registered and missing API keys are skipped by default.
packages/nvidia_nat_test/pyproject.toml registers a pytest11 plugin (nat.test.plugin) and packages/nvidia_nat_test/src/nat/test/plugin.py defines the nvidia_api_key/tavily_api_key fixtures which call require_env_variables -> pytest.skip() on missing keys (unless --fail_missing).
…r what the difference was Signed-off-by: David Gardner <dagardner@nvidia.com>
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.cursor/rules/general.mdc (1)
3-4: Fix inert frontmatter: empty globs + alwaysApply=false means this rule never triggers.
Provide explicit globs and/or set alwaysApply=true so the rule actually applies.
Apply:
```diff
-globs:
-alwaysApply: false
+globs:
+  - "**/*"
+alwaysApply: true
```
🧹 Nitpick comments (7)
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py (1)
44-45: Relax assertion to reduce brittleness.
Allow prefixes/suffixes; keep case-insensitive check.
Apply:
```diff
- assert result.startswith("successfully created line chart")
+ assert "successfully created line chart" in result
```

examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py (2)
63-65: Mark async tests with pytest-asyncio (when unskipping).
Ensures async tests run under pytest-asyncio once skips are removed.
```diff
 @pytest.mark.skip(reason="Raises a ValueError")
 @pytest.mark.integration
+@pytest.mark.asyncio
 async def test_flow_chart_tool():
```
76-78: Mark async tests with pytest-asyncio (when unskipping).
Same as above for the token usage test.
```diff
 @pytest.mark.skip(reason="Raises a ValueError")
 @pytest.mark.integration
+@pytest.mark.asyncio
 async def test_token_usage_tool():
```

examples/agents/rewoo/tests/test_rewoo_agent.py (2)
26-26: Add explicit return type for async helper.
Annotate return type to satisfy typing guidelines.
```diff
-async def _test_workflow(config_file: str, question: str, answer: str):
+async def _test_workflow(config_file: str, question: str, answer: str) -> None:
```
38-38: Annotate test function return type.
Explicit -> None keeps tests consistent with typing rules.
```diff
-async def test_full_workflow():
+async def test_full_workflow() -> None:
```

ci/scripts/gitlab/tests.sh (1)
28-46: Restore set -e after pytest to avoid masking later failures.
set +e is kept for the rest of the script; failures in slack-sdk install or reporting could be silently ignored until much later. Re-enable set -e right after capturing pytest’s exit code.

Apply:
```diff
 set +e
 @@
 PYTEST_RESULTS=$?
+set -e
```

examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py (1)
30-31: Nit: add return type for the test function.
Keeps tests consistent with type‑hint guidance.
```diff
-async def test_full_workflow():
+async def test_full_workflow() -> None:
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (17)
.cursor/rules/general.mdc (1 hunks)
ci/scripts/gitlab/tests.sh (1 hunks)
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py (2 hunks)
examples/agents/rewoo/tests/test_rewoo_agent.py (1 hunks)
examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py (1 hunks)
examples/control_flow/sequential_executor/tests/test_example_sequential_executor.py (1 hunks)
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py (2 hunks)
examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py (1 hunks)
examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py (1 hunks)
examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py (1 hunks)
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py (1 hunks)
examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py (1 hunks)
examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py (4 hunks)
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py (1 hunks)
packages/nvidia_nat_test/src/nat/test/plugin.py (1 hunks)
pyproject.toml (1 hunks)
tests/nat/server/test_unified_api_server.py (5 hunks)
✅ Files skipped from review due to trivial changes (2)
- tests/nat/server/test_unified_api_server.py
- examples/control_flow/sequential_executor/tests/test_example_sequential_executor.py
🚧 Files skipped from review as they are similar to previous changes (8)
- examples/getting_started/simple_calculator/tests/test_simple_calculator_workflow.py
- examples/evaluation_and_profiling/swe_bench/tests/test_swe_bench_eval.py
- examples/frameworks/semantic_kernel_demo/tests/test_semantic_kernel_workflow.py
- examples/control_flow/router_agent/tests/test_control_flow_example_router_agent.py
- examples/evaluation_and_profiling/simple_web_query_eval/tests/test_simple_web_query_eval.py
- packages/nvidia_nat_test/src/nat/test/plugin.py
- pyproject.toml
- examples/frameworks/agno_personal_finance/tests/test_agno_personal_finance_workflow.py
🧰 Additional context used
📓 Path-based instructions (10)
ci/scripts/**/*.sh
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
CI shell/utility scripts must live under ci/scripts/
Files:
ci/scripts/gitlab/tests.sh
**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
**/*.{py,sh,md,yml,yaml,toml,ini,json,ipynb,txt,rst}: Every file must start with the standard SPDX Apache-2.0 header; keep copyright years up‑to‑date
All source files must include the SPDX Apache‑2.0 header; do not bypass CI header checks
Files:
- ci/scripts/gitlab/tests.sh
- examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
- examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
- examples/agents/rewoo/tests/test_rewoo_agent.py
- examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
- examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
**/*
⚙️ CodeRabbit configuration file
**/*: # Code Review Instructions
- Ensure the code follows best practices and coding standards.
- For Python code, follow PEP 20 and PEP 8 for style guidelines.
- Check for security vulnerabilities and potential issues.
- Python methods should use type hints for all parameters and return values. Example: def my_function(param1: int, param2: str) -> bool: pass
- For Python exception handling, ensure proper stack trace preservation:
  - When re-raising exceptions: use bare raise statements to maintain the original stack trace, and use logger.error() (not logger.exception()) to avoid duplicate stack trace output.
  - When catching and logging exceptions without re-raising: always use logger.exception() to capture the full stack trace information.

Documentation Review Instructions
- Verify that documentation and comments are clear and comprehensive.
- Verify that the documentation doesn't contain any TODOs, FIXMEs, or placeholder text like "lorem ipsum".
- Verify that the documentation doesn't contain any offensive or outdated terms.
- Verify that documentation and comments are free of spelling mistakes; ensure the documentation doesn't contain any words listed in the ci/vale/styles/config/vocabularies/nat/reject.txt file. Words that might appear to be spelling mistakes but are listed in the ci/vale/styles/config/vocabularies/nat/accept.txt file are OK.

Misc.
- All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0, and should contain an Apache License 2.0 header comment at the top of each file.
- Confirm that copyright years are up-to-date whenever a file is changed.
Files:
- ci/scripts/gitlab/tests.sh
- examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
- examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
- examples/agents/rewoo/tests/test_rewoo_agent.py
- examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
- examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
.cursor/rules/**/*.mdc
📄 CodeRabbit inference engine (.cursor/rules/cursor-rules.mdc)
.cursor/rules/**/*.mdc: Place all Cursor rule files under PROJECT_ROOT/.cursor/rules/
Name rule files in kebab-case, always using the .mdc extension, with descriptive filenames
Rule descriptions must start with the phrase: 'Follow these rules when'
Descriptions should specify clear trigger conditions (e.g., when the user's request meets certain criteria)
Use precise action verbs in descriptions (e.g., creating, modifying, implementing, configuring, adding, installing, evaluating)
Descriptions should be comprehensive but concise
Use consistent project terminology in descriptions (e.g., NAT workflows, NAT CLI commands)
Proofread descriptions for typos and grammar
Avoid overly narrow descriptions when rules cover multiple related scenarios
Prefer the 'user's request involves' phrasing pattern in descriptions
Rule files must include the specified frontmatter structure: description (string), optional globs, and alwaysApply (boolean), followed by markdown content
Files:
.cursor/rules/general.mdc
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
**/*.py: Follow PEP 8/20 style; format with yapf (column_limit=120) and use 4-space indentation; end files with a single newline
Run ruff (ruff check --fix) per pyproject.toml; fix warnings unless explicitly ignored; ruff is linter-only
Use snake_case for functions/variables, PascalCase for classes, and UPPER_CASE for constants
Treat pyright warnings as errors during development
Exception handling: preserve stack traces and avoid duplicate logging
When re-raising exceptions, use a bare raise and log with logger.error(), not logger.exception()
When catching and not re-raising, log with logger.exception() to capture stack trace
Validate and sanitize all user input; prefer httpx with SSL verification and follow OWASP Top‑10
Use async/await for I/O-bound work; profile CPU-heavy paths with cProfile/mprof; cache with functools.lru_cache or external cache; leverage NumPy vectorization when beneficial
**/*.py: Programmatic use: create TestLLMConfig(response_seq=[...], delay_ms=...), add with builder.add_llm("", cfg).
When retrieving the test LLM wrapper, use builder.get_llm(name, wrapper_type=LLMFrameworkEnum.) and call the framework’s method (e.g., ainvoke, achat, call).
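The exception-handling rules above (bare raise plus logger.error() when re-raising; logger.exception() when catching without re-raising) can be sketched as follows. parse_config and the JSON payload are illustrative only, not toolkit code:

```python
import json
import logging

logger = logging.getLogger(__name__)


def parse_config(raw: str) -> dict:
    """Re-raising path: log with logger.error(), then bare `raise`."""
    try:
        return json.loads(raw)
    except ValueError:
        # A bare raise preserves the original traceback; logger.error()
        # avoids printing the stack trace twice.
        logger.error("Failed to parse config")
        raise


def parse_config_or_default(raw: str) -> dict:
    """Swallowing path: logger.exception() records the full stack trace."""
    try:
        return json.loads(raw)
    except ValueError:
        # Nothing downstream will see this error, so capture the trace here.
        logger.exception("Falling back to an empty config")
        return {}
```

The caller of parse_config still sees the original ValueError with its original traceback; parse_config_or_default instead returns a fallback value after recording the failure.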
Files:
- examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
- examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
- examples/agents/rewoo/tests/test_rewoo_agent.py
- examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
- examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
**/tests/test_*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Test files must be named test_*.py and placed under a tests/ folder
Files:
- examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
- examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
- examples/agents/rewoo/tests/test_rewoo_agent.py
- examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
- examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
**/tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
**/tests/**/*.py: Test functions must use the test_ prefix and snake_case
Extract repeated test code into pytest fixtures; fixtures should set name=... in @pytest.fixture and functions named with fixture_ prefix
Mark expensive tests with @pytest.mark.slow or @pytest.mark.integration
Use pytest with pytest-asyncio for async code; mock external services with pytest_httpserver or unittest.mock
Files:
- examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
- examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
- examples/agents/rewoo/tests/test_rewoo_agent.py
- examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
- examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
**/*.{py,md}
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Never hard‑code version numbers in code or docs; versions are derived by setuptools‑scm
Files:
- examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
- examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
- examples/agents/rewoo/tests/test_rewoo_agent.py
- examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
- examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
**/*.{py,yaml,yml}
📄 CodeRabbit inference engine (.cursor/rules/nat-test-llm.mdc)
**/*.{py,yaml,yml}: Configure response_seq as a list of strings; values cycle per call, and [] yields an empty string.
Configure delay_ms to inject per-call artificial latency in milliseconds for nat_test_llm.
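The documented response_seq semantics (values cycle per call; an empty list always yields an empty string; delay_ms injects per-call latency) can be modeled with a minimal stand-in. This is a sketch of the behavior only, not the actual nat_test_llm implementation:

```python
import time
from itertools import cycle


class FakeSeqLLM:
    """Minimal stand-in for the documented nat_test_llm behavior."""

    def __init__(self, response_seq: list[str], delay_ms: int = 0) -> None:
        self._delay_s = delay_ms / 1000.0
        # cycle() repeats the sequence indefinitely; [] means "always empty".
        self._responses = cycle(response_seq) if response_seq else None

    def invoke(self, prompt: str) -> str:
        if self._delay_s:
            time.sleep(self._delay_s)  # delay_ms injects artificial latency
        return "" if self._responses is None else next(self._responses)
```

For example, a two-element response_seq produces "yes", "no", "yes", ... across successive calls regardless of the prompt.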
Files:
- examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
- examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
- examples/agents/rewoo/tests/test_rewoo_agent.py
- examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
- examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
examples/**/*
⚙️ CodeRabbit configuration file
examples/**/*:
- This directory contains example code and usage scenarios for the toolkit; at a minimum an example should contain a README.md or README.ipynb file.
- If an example contains Python code, it should be placed in a subdirectory named src/ and should contain a pyproject.toml file. Optionally, it might also contain scripts in a scripts/ directory.
- If an example contains YAML files, they should be placed in a subdirectory named configs/.
- If an example contains sample data files, they should be placed in a subdirectory named data/, and should be checked into git-lfs.
Files:
- examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py
- examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py
- examples/agents/rewoo/tests/test_rewoo_agent.py
- examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py
- examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
🧠 Learnings (6)
📚 Learning: 2025-08-28T23:22:41.742Z
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-08-28T23:22:41.742Z
Learning: Applies to tests/**/*.py : Unit tests must live under tests/ and use configured markers (e2e, integration, etc.)
Applied to files:
.cursor/rules/general.mdc
📚 Learning: 2025-08-28T23:22:41.742Z
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-08-28T23:22:41.742Z
Learning: Applies to packages/*/pyproject.toml : Package pyproject.toml must depend on nvidia-nat or a package starting with nvidia-nat- using ~=<two-digit> versions (e.g., ~=1.0)
Applied to files:
.cursor/rules/general.mdc
📚 Learning: 2025-08-28T23:22:41.742Z
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-08-28T23:22:41.742Z
Learning: Applies to **/tests/test_*.py : Test files must be named test_*.py and placed under a tests/ folder
Applied to files:
.cursor/rules/general.mdc
📚 Learning: 2025-08-28T23:22:41.742Z
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-08-28T23:22:41.742Z
Learning: Applies to packages/*/tests/**/*.py : Packages containing Python code must have tests under packages/<pkg>/tests
Applied to files:
.cursor/rules/general.mdc
📚 Learning: 2025-08-28T23:22:41.742Z
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-08-28T23:22:41.742Z
Learning: Applies to packages/*/pyproject.toml : Each package under packages/ must include a pyproject.toml
Applied to files:
.cursor/rules/general.mdc
📚 Learning: 2025-08-28T23:22:41.742Z
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-08-28T23:22:41.742Z
Learning: Applies to examples/*/tests/**/*.py : Example-specific tests must live under examples/<name>/tests and use configured markers
Applied to files:
.cursor/rules/general.mdc
🪛 Ruff (0.13.1)
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
55-55: Abstract raise to an inner function
(TRY301)
55-55: Avoid specifying long messages outside the exception class
(TRY003)
56-56: Do not catch blind exception: Exception
(BLE001)
59-59: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🔇 Additional comments (10)
examples/custom_functions/plot_charts/tests/test_plot_charts_workflow.py (1)
30-31: No change required — repo uses pytest-asyncio auto mode. pyproject.toml sets asyncio_mode = "auto" (around line 280), so async tests run without adding @pytest.mark.asyncio.
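The setting referenced here looks like the following fragment (values as reported by the review; check the repository's pyproject.toml for the authoritative configuration):

```toml
[tool.pytest.ini_options]
# "auto" runs every `async def test_*` on the event loop
# without requiring an explicit @pytest.mark.asyncio marker.
asyncio_mode = "auto"
```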
examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py (2)
34-41: Autouse fixture arg breaks collection; remove the param, gate via env var, add scope/name and a return type.

Passing fail_missing as a fixture parameter makes pytest look for a fixture named fail_missing and fail at collection.

```diff
-@pytest.fixture(autouse=True)
-def require_profiler_agent(fail_missing: bool = False):
-    if not PROFILER_AGENT_AVAILABLE:
-        reason = "nat_profiler_agent is not installed"
-        if fail_missing:
-            raise RuntimeError(reason)
-        pytest.skip(reason=reason)
+@pytest.fixture(autouse=True, scope="module", name="require_profiler_agent")
+def fixture_require_profiler_agent() -> None:
+    import os
+    if not PROFILER_AGENT_AVAILABLE:
+        reason = "nat_profiler_agent is not installed"
+        fail_missing = os.getenv("NAT_FAIL_MISSING_DEPS", "").lower() in {"1", "true", "yes", "y"}
+        if fail_missing:
+            raise RuntimeError(reason)
+        pytest.skip(reason=reason)
```
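The suggested fixtures gate hard-failure behavior on a NAT_FAIL_MISSING_DEPS environment variable; that parsing could be factored into a small helper. The helper name is an assumption for illustration, not existing toolkit code:

```python
import os


def env_flag(name: str, default: bool = False) -> bool:
    """Interpret an environment variable as a boolean flag.

    Treats "1", "true", "yes", and "y" (case-insensitive) as True,
    anything else as False; a missing variable falls back to `default`.
    """
    value = os.getenv(name)
    if value is None:
        return default
    return value.strip().lower() in {"1", "true", "yes", "y"}
```

A fixture could then call env_flag("NAT_FAIL_MISSING_DEPS") once instead of repeating the set-membership check.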
43-61: Harden Phoenix prerequisite: no-param fixture, handle missing requests, narrow exceptions, shorter timeout, cause chaining.

Prevents collection errors, avoids a blind except Exception, and handles a missing requests package. Adds module scope, a fixture name, and a return type.

```diff
-@pytest.fixture(autouse=True)
-def require_phoenix_server(fail_missing: bool = False):
+@pytest.fixture(autouse=True, scope="module", name="require_phoenix_server")
+def fixture_require_phoenix_server() -> None:
@@
-    import requests
-    try:
-        response = requests.get("http://localhost:6006/v1/traces", timeout=5)
-        if response.status_code != 200:
-            raise ConnectionError(f"Unexpected status code: {response.status_code}")
-    except Exception as e:
-        reason = f"Unable to connect to Phoenix server at http://localhost:6006/v1/traces: {e}"
-        if fail_missing:
-            raise RuntimeError(reason)
-        pytest.skip(reason=reason)
+    import os
+    fail_missing = os.getenv("NAT_FAIL_MISSING_DEPS", "").lower() in {"1", "true", "yes", "y"}
+    try:
+        import requests
+        from requests.exceptions import RequestException
+    except ImportError as e:
+        reason = "requests is not installed; required for the Phoenix connectivity check"
+        if fail_missing:
+            raise RuntimeError(reason) from e
+        pytest.skip(reason=reason)
+        return
+
+    try:
+        response = requests.get("http://localhost:6006/v1/traces", timeout=2)
+    except RequestException as e:
+        reason = f"Unable to connect to Phoenix server at http://localhost:6006/v1/traces: {e}"
+        if fail_missing:
+            raise RuntimeError(reason) from e
+        pytest.skip(reason=reason)
+        return
+
+    if response.status_code != 200:
+        reason = f"Unexpected status code from Phoenix: {response.status_code}"
+        if fail_missing:
+            raise RuntimeError(reason)
+        pytest.skip(reason=reason)
```

examples/agents/rewoo/tests/test_rewoo_agent.py (1)
36-37: Integration marker + API-key fixtures: LGTM — verified. pyproject.toml registers the "integration" marker and sets asyncio_mode="auto" with pytest-asyncio==0.24.* listed; fixtures nvidia_api_key and tavily_api_key are defined in packages/nvidia_nat_test/src/nat/test/plugin.py; examples/agents/rewoo/configs/config.yml exists.
examples/frameworks/multi_frameworks/tests/test_multi_frameworks_workflow.py (1)
30-31: LGTM: replace e2e with integration and require API key. Confirmed: fixture "nvidia_api_key" exists (packages/nvidia_nat_test/src/nat/test/plugin.py) and the "integration" marker is declared in pyproject.toml.
ci/scripts/gitlab/tests.sh (1)
34-34: LGTM — migration to --run_integration verified; no remaining functional e2e flags/markers
parser.addoption for --run_integration and --run_slow exists at packages/nvidia_nat_test/src/nat/test/plugin.py and the "integration"/"slow" markers are declared in pyproject.toml; remaining "e2e" occurrences are only in docstrings, the README, and the CHANGELOG.

.cursor/rules/general.mdc (1)
46-46: LGTM: marker guidance updated to integration — residual e2e/E2E text found. Pytest markers/flags migrated; remaining textual occurrences to review:
- packages/nvidia_nat_test/tests/test_test_llm.py — docstrings containing "YAML e2e"
- examples/getting_started/simple_calculator/README.md — "end-to-end (E2E)"
- CHANGELOG.md — "Add e2e test..."
- Additional "end-to-end" mentions across docs/tests (tests/nat/, docs/source/)
Migrate or confirm these textual references.
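For context, the --run_integration gating verified above typically follows the standard pytest plugin pattern. The sketch below models only the selection rule; the hook wiring is shown in comments and the names are assumptions, not the toolkit's actual plugin code:

```python
def should_skip(item_keywords: set, run_integration: bool) -> bool:
    """An integration-marked test runs only when --run_integration is passed."""
    return "integration" in item_keywords and not run_integration


# In a real conftest.py / plugin module the rule is applied via hooks,
# roughly like this (sketch only):
#
# def pytest_addoption(parser):
#     parser.addoption("--run_integration", action="store_true", default=False)
#
# def pytest_collection_modifyitems(config, items):
#     run_integration = config.getoption("--run_integration")
#     skip = pytest.mark.skip(reason="pass --run_integration to run")
#     for item in items:
#         if should_skip(set(item.keywords), run_integration):
#             item.add_marker(skip)
```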
examples/getting_started/simple_web_query/tests/test_simple_web_query_workflow.py (3)
31-31: No additional web-search fixture required. The example's config (examples/getting_started/simple_web_query/src/nat_simple_web_query/configs/config.yml) defines a static webpage_query (webpage_url + embedder) and contains no tavily/serpapi/bing/google provider keys — @pytest.mark.usefixtures("nvidia_api_key") is sufficient.
Likely an incorrect or invalid review comment.
30-31: No change required — pytest-asyncio auto mode is enabled. pyproject.toml sets [tool.pytest.ini_options] asyncio_mode = "auto" and pytest-asyncio is present in dev deps, so async def test_full_workflow() is valid without @pytest.mark.asyncio.
30-31: Marker change + API-key fixture usage: LGTM — 'integration' marker registered and nvidia_api_key fixture present.
integration marker is declared in pyproject.toml and the nvidia_api_key fixture / skip logic is implemented in packages/nvidia_nat_test/src/nat/test/plugin.py
/merge
* Fixes `testpaths` to run tests located at any level under the `examples/` dir
* Add new fixtures for the following API keys: `SERP_API_KEY`, `TAVILY_API_KEY`, `MEM0_API_KEY`.
* Update existing tests to properly depend on the API key fixtures that they require.
* Drop the `e2e` marker along with the associated `--run_e2e` flag, opting to use the `integration` marker instead.
* Skip broken tests which couldn't be trivially fixed (to be fixed in another PR).

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
- Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.

## Summary by CodeRabbit
- Tests
  - Reclassified end-to-end tests to integration across examples and server tests.
  - Added fixtures for required API keys and environment checks; tests auto-skip with clear reasons when prerequisites are missing.
  - Updated some example assertions and inputs for consistency.
  - Introduced recursive test discovery for examples and packages.
- Chores
  - Removed deprecated e2e marker and flag from configuration and CI scripts; standardized integration runs.
- Documentation
  - Updated internal examples to reflect integration markers (e2e references removed).

Authors:
- David Gardner (https://github.com/dagardner-nv)

Approvers:
- Will Killian (https://github.com/willkill07)
- https://github.com/Salonijain27

URL: NVIDIA#838
Signed-off-by: Yuchen Zhang <yuchenz@nvidia.com>
Description
- Fixes `testpaths` to run tests located at any level under the `examples/` dir
- Add new fixtures for the following API keys: `SERP_API_KEY`, `TAVILY_API_KEY`, `MEM0_API_KEY`
- Update existing tests to properly depend on the API key fixtures that they require
- Drop the `e2e` marker along with the associated `--run_e2e` flag, opting to use the `integration` marker instead
- Skip broken tests which couldn't be trivially fixed (to be fixed in another PR)

By Submitting this PR I confirm:
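The `testpaths` fix could look roughly like the fragment below. pytest scans each listed directory recursively, so tests under any depth of `examples/<name>/tests/` are collected; the exact values are illustrative — see the repository's pyproject.toml for the actual configuration:

```toml
[tool.pytest.ini_options]
# Each entry is searched recursively for test_*.py files.
testpaths = ["tests", "packages", "examples"]
```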
Summary by CodeRabbit