Fix observability_evaluation_and_profiling example notebook #1874

rapids-bot[bot] merged 12 commits into NVIDIA:develop

Conversation
Signed-off-by: David Gardner <dagardner@nvidia.com>

This reverts commit e881173. Signed-off-by: David Gardner <dagardner@nvidia.com>
Walkthrough: Documentation and notebooks updated. NeMo Customizer links now target the customizer landing page; the NeMo Microservices setup link was adjusted; and the profiling extra/subpackage was renamed from `profiling` to `profiler`.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Important: Pre-merge checks failed. Please resolve all errors before merging; addressing warnings is optional.
❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)

Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it is a critical failure.
🔧 Ruff (0.15.10) on examples/notebooks/observability_evaluation_and_profiling.ipynb: Unexpected end of JSON input
Actionable comments posted: 1
🧹 Nitpick comments (2)
examples/notebooks/observability_evaluation_and_profiling.ipynb (2)
Lines 1883-1887: Use a portable kernelspec display name. Line 1884 uses a local-environment label (`.venv (3.13.2)`), which can cause noisy notebook diffs and confusion on other machines. Prefer a generic display name (for example, `Python 3`).

Proposed metadata tweak:

     "kernelspec": {
    -  "display_name": ".venv (3.13.2)",
    +  "display_name": "Python 3",
       "language": "python",
       "name": "python3"
     },

Also applies to line 1898.
Lines 1104-1106: Lower the `max_tokens` default for this evaluation workflow configuration. The `max_tokens: 16384` setting is significantly higher than recommended defaults (512-1024) for evaluation and interactive workloads. The evaluation dataset's longest response (Ark S12 Ultra tablet specifications) requires far fewer tokens. Reduce to 1024 or 2048 to align with best practices and minimize unnecessary token overhead during evaluation runs, then add an inline comment next to the `max_tokens` setting (alongside the existing `model_name` and `temperature` keys) documenting when to increase this value for longer outputs.
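The reviewer's sizing logic can be captured in a small helper. This is an illustrative sketch only (the function `suggest_max_tokens` and its default bounds are assumptions, not part of the toolkit config schema): budget from the longest expected response, add headroom, and clamp to sane limits:

```python
def suggest_max_tokens(longest_expected_output: int,
                       headroom: float = 1.5,
                       floor: int = 512,
                       ceiling: int = 16384) -> int:
    """Size a max_tokens budget from the longest expected response
    (in tokens), with headroom for variance, clamped to sane bounds.
    The floor/ceiling defaults here are illustrative assumptions."""
    return max(floor, min(ceiling, int(longest_expected_output * headroom)))
```

For an evaluation set whose longest answer is a few hundred tokens, this lands in the 512-1024 range the review recommends, rather than the 16384 currently configured.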
Actionable comment: In docs/source/improve-workflows/finetuning/dpo_with_nemo_customizer.md, line 22: replace the incorrect NeMo Customizer URL "embedding-customization-job.html" with the DPO-specific tutorial URL "fine-tune/tutorials/dpo-customization-job.html" wherever it appears in the document (for example, the sentence starting "This guide covers Direct Preference Optimization (DPO) training..." currently links to embedding-customization-job.html). Also update the other occurrence noted at line 950 so all links point to the DPO customization job tutorial.
📒 Files selected for processing (3):
- docs/source/improve-workflows/finetuning/dpo_with_nemo_customizer.md
- examples/finetuning/dpo_tic_tac_toe/README.md
- examples/notebooks/observability_evaluation_and_profiling.ipynb
… group/extra for nvidia-nat-eval Signed-off-by: David Gardner <dagardner@nvidia.com>
Signed-off-by: David Gardner <dagardner@nvidia.com>
mnajafian-nv left a comment:

Great work, conditional approval upon reviewing the inline suggestions :)
🧹 Nitpick comments (1)
examples/notebooks/optimize_model_selection.ipynb (1)
Lines 179-184: Fix stale setup text to match the new `profiler` extra. The install cell now uses `nvidia-nat[langchain,profiler]`, but the setup bullet still says `nvidia-nat[profiling]` (line 162). Please align the prose with the command to avoid copy/paste confusion. As per coding guidelines: "Verify that documentation and comments are clear and comprehensive" and "Keep documentation in sync with code."

✏️ Proposed doc update:

    -* The `nvidia-nat[profiling]` subpackage contains components for profiling and performance analysis.
    +* The `nvidia-nat[profiler]` subpackage contains components for profiling and performance analysis.
📒 Files selected for processing (2):
- docs/source/resources/migration-guide.md
- examples/notebooks/optimize_model_selection.ipynb
Co-authored-by: mnajafian-nv <mnajafian@nvidia.com> Signed-off-by: David Gardner <96306125+dagardner-nv@users.noreply.github.com>
Actionable comments posted: 1
Inline comment: In examples/notebooks/observability_evaluation_and_profiling.ipynb, around lines 600-603: the install line currently installs `nvidia-nat[langchain,llama-index,phoenix,profiler]`, but the notebook runs `nat eval`. Update the pip install invocation in the notebook cell to include the `eval` extra for evaluation workflows: use `nvidia-nat[eval,profiler]`, or add `eval` alongside the existing extras if langchain/llama-index/phoenix are also needed. Also update the accompanying echo message to reflect the chosen extras, so the notebook installs the runtime dependency required for `nat eval`.
📒 Files selected for processing (1):
- examples/notebooks/observability_evaluation_and_profiling.ipynb
Signed-off-by: David Gardner <dagardner@nvidia.com>
…/NeMo-Agent-Toolkit into david-observe-eval-notebook Signed-off-by: David Gardner <dagardner@nvidia.com>
Signed-off-by: David Gardner <dagardner@nvidia.com>
…om profiler, but still good to declare it explicitly Signed-off-by: David Gardner <dagardner@nvidia.com>
/merge
…#1874)

* This notebook was installing the `nvidia-nat-profiling` package which was dropped in v1.5, causing the notebook to install nat v1.3
* Replace the model with a nano model to avoid being rate limited during the eval steps.
* Update `migration-guide.md` to fix profiler installation instructions.
* Replace broken documentation links (unrelated link check errors found in CI)

## By Submitting this PR I confirm:

- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
- Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.

## Summary by CodeRabbit

* **Documentation**: Updated NeMo Customizer links in the finetuning guide; revised packaging/install guidance to replace the profiling extra with profiler and document the eval/profiler split.
* **Examples**: Updated Microservices setup link and cleaned README whitespace/formatting.
* **Notebooks**: Switched profiling extra name to profiler and added eval where applicable; updated generated model/config defaults (model selection and token limits) and bumped notebook kernel/Python metadata.

Authors:
- David Gardner (https://github.com/dagardner-nv)

Approvers:
- https://github.com/mnajafian-nv
- Bryan Bednarski (https://github.com/bbednarski9)

URL: NVIDIA#1874

Signed-off-by: Yuchen Zhang <yuchenz@nvidia.com>
Description
* This notebook was installing the `nvidia-nat-profiling` package which was dropped in v1.5, causing the notebook to install nat v1.3
* Replace the model with a nano model to avoid being rate limited during the eval steps.
* Update `migration-guide.md` to fix profiler installation instructions.
* Replace broken documentation links (unrelated link check errors found in CI)

By Submitting this PR I confirm:
Summary by CodeRabbit

Documentation
* Updated NeMo Customizer links in the finetuning guide; revised packaging/install guidance to replace the profiling extra with profiler and document the eval/profiler split.

Examples
* Updated Microservices setup link and cleaned README whitespace/formatting.

Notebooks
* Switched profiling extra name to profiler and added eval where applicable; updated generated model/config defaults (model selection and token limits) and bumped notebook kernel/Python metadata.