test: add unit tests for manage_packages tool and CLI#973
Scriptwonder merged 3 commits into CoplayDev:beta from
Conversation
Reviewer's Guide

Adds comprehensive unit tests for the manage_packages tool and its CLI commands, covering action list integrity, tool-level action validation, and parameter construction for all package and registry subcommands, without requiring a Unity connection.

File-Level Changes
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)

📝 Walkthrough
Added a new test module.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Hey - I've found 1 issue, and left some high level feedback:
- In `_get_params`, consider accessing the `params` argument by name (e.g., `mock_run.call_args.kwargs['params']`) instead of positional index `[1]` to make the tests more robust against changes in the `run_command` call signature.
- The repeated nested `patch('cli.commands.packages.get_config', ...)` / `patch('cli.commands.packages.run_command', ...)` blocks in the CLI tests could be refactored into a reusable helper or fixture to reduce duplication and make the tests easier to maintain.
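The second point could be sketched as a single reusable context manager. This is a minimal sketch: the `patched_cli` helper and the stand-in `fake_packages` module are illustrative, not part of the project, and the real tests would patch the two `cli.commands.packages` targets named above.

```python
import sys
import types
from contextlib import ExitStack, contextmanager
from unittest.mock import patch

# Stand-in module so the sketch is self-contained; the real tests would patch
# 'cli.commands.packages.get_config' and 'cli.commands.packages.run_command'.
fake = types.ModuleType("fake_packages")
fake.get_config = lambda: {}
fake.run_command = lambda name, params=None: None
sys.modules["fake_packages"] = fake

@contextmanager
def patched_cli(config, response):
    """Apply both patches in one place and yield the run_command mock."""
    with ExitStack() as stack:
        stack.enter_context(patch("fake_packages.get_config", return_value=config))
        mock_run = stack.enter_context(
            patch("fake_packages.run_command", return_value=response)
        )
        yield mock_run

# Usage mirroring a CLI test: one helper instead of two nested `with patch(...)`.
with patched_cli({"host": "localhost"}, {"success": True}) as mock_run:
    fake.run_command("manage_packages", params={"action": "list_packages"})
    mock_run.assert_called_once()
    params = mock_run.call_args.kwargs["params"]
```

The same helper could equally be exposed as a pytest fixture that yields the context manager.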
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `_get_params`, consider accessing the `params` argument by name (e.g., `mock_run.call_args.kwargs['params']`) instead of positional index `[1]` to make the tests more robust against changes in the `run_command` call signature.
- The repeated nested `patch('cli.commands.packages.get_config', ...)` / `patch('cli.commands.packages.run_command', ...)` blocks in the CLI tests could be refactored into a reusable helper or fixture to reduce duplication and make the tests easier to maintain.
## Individual Comments
### Comment 1
<location path="Server/tests/test_manage_packages.py" line_range="71-79" />
<code_context>
+class TestManagePackagesToolValidation:
+ """Test action validation in the manage_packages tool function."""
+
+ def test_unknown_action_returns_error(self):
+ from services.tools.manage_packages import manage_packages
+
+ ctx = MagicMock()
+ ctx.get_state = AsyncMock(return_value=None)
+
+ result = asyncio.run(manage_packages(ctx, action="invalid_action"))
+ assert result["success"] is False
+ assert "Unknown action" in result["message"]
+
+ def test_unknown_action_lists_valid_actions(self):
</code_context>
<issue_to_address>
**suggestion (testing):** Consider asserting that no Unity-related calls are made for an unknown action.
This test only checks the error payload. Since a key requirement is rejecting invalid actions *without* contacting Unity, add an assertion that no Unity calls are made. For example, patch `services.tools.manage_packages._send_packages_command` (and/or `ctx.get_state`) and assert they are never called/awaited, so the test also validates the fail-fast behavior at the transport boundary.
```suggestion
    def test_unknown_action_returns_error(self):
        from services.tools.manage_packages import manage_packages
        from unittest.mock import patch

        ctx = MagicMock()
        ctx.get_state = AsyncMock(return_value=None)

        with patch("services.tools.manage_packages._send_packages_command") as send_packages_command:
            result = asyncio.run(manage_packages(ctx, action="invalid_action"))
            assert result["success"] is False
            assert "Unknown action" in result["message"]
            # Fail-fast: do not talk to Unity or even attempt to fetch state
            send_packages_command.assert_not_called()
            ctx.get_state.assert_not_awaited()
```
</issue_to_address>
```python
    def test_unknown_action_returns_error(self):
        from services.tools.manage_packages import manage_packages

        ctx = MagicMock()
        ctx.get_state = AsyncMock(return_value=None)

        result = asyncio.run(manage_packages(ctx, action="invalid_action"))
        assert result["success"] is False
        assert "Unknown action" in result["message"]
```
suggestion (testing): Consider asserting that no Unity-related calls are made for an unknown action.
This test only checks the error payload. Since a key requirement is rejecting invalid actions *without* contacting Unity, add an assertion that no Unity calls are made. For example, patch `services.tools.manage_packages._send_packages_command` (and/or `ctx.get_state`) and assert they are never called/awaited, so the test also validates the fail-fast behavior at the transport boundary.
```suggestion
    def test_unknown_action_returns_error(self):
        from services.tools.manage_packages import manage_packages
        from unittest.mock import patch

        ctx = MagicMock()
        ctx.get_state = AsyncMock(return_value=None)

        with patch("services.tools.manage_packages._send_packages_command") as send_packages_command:
            result = asyncio.run(manage_packages(ctx, action="invalid_action"))
            assert result["success"] is False
            assert "Unknown action" in result["message"]
            # Fail-fast: do not talk to Unity or even attempt to fetch state
            send_packages_command.assert_not_called()
            ctx.get_state.assert_not_awaited()
```
Actionable comments posted: 1
🧹 Nitpick comments (1)
Server/tests/test_manage_packages.py (1)

118-126: Consider verifying CLI exit code for test robustness.

The tests verify that `run_command` is called with correct params, but don't check the CLI's exit code. If the command fails silently or raises an exception (caught by CliRunner by default), tests could still pass.

Proposed enhancement with exit code verification

```diff
 def test_list_builds_correct_params(self, runner, mock_config, mock_success):
     with patch("cli.commands.packages.get_config", return_value=mock_config):
         with patch("cli.commands.packages.run_command", return_value=mock_success) as mock_run:
-            runner.invoke(packages, ["list"])
+            result = runner.invoke(packages, ["list"], catch_exceptions=False)
+            assert result.exit_code == 0
             mock_run.assert_called_once()
             params = _get_params(mock_run)
             assert params["action"] == "list_packages"
```

Consider applying a similar pattern to other CLI tests for consistent robustness.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Server/tests/test_manage_packages.py` around lines 118 - 126, Update the test_list_builds_correct_params to capture and assert the CLI exit code: call runner.invoke(packages, ["list"]) and assign its return to a variable (e.g., result), then assert result.exit_code == 0 (or the expected code) in addition to the existing mock_run assertions; make this change in the test_list_builds_correct_params function and keep the existing checks using _get_params(mock_run) and params["action"] == "list_packages".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Server/tests/test_manage_packages.py`:
- Around line 87-90: The test's assertion is too permissive using an OR; update
the assertion in tests/test_manage_packages.py so it verifies the error message
contains both the "Valid actions" text and a known action (e.g., "add_package")
or, even better, compare against the exact expected string using ALL_ACTIONS and
the manage_packages(action=...) format; locate the manage_packages call and
ALL_ACTIONS reference in this test and replace the current or-check with either
an and-check (assert "Valid actions" in result["message"] and "add_package" in
result["message"]) or an exact equality using f"Unknown action 'bogus'. Valid
actions: {', '.join(ALL_ACTIONS)}".
---
Nitpick comments:
In `@Server/tests/test_manage_packages.py`:
- Around line 118-126: Update the test_list_builds_correct_params to capture and
assert the CLI exit code: call runner.invoke(packages, ["list"]) and assign its
return to a variable (e.g., result), then assert result.exit_code == 0 (or the
expected code) in addition to the existing mock_run assertions; make this change
in the test_list_builds_correct_params function and keep the existing checks
using _get_params(mock_run) and params["action"] == "list_packages".
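The or-check versus and-check distinction in the prompt above can be shown with a tiny sketch. The message format and the `ALL_ACTIONS` contents below are assumptions taken from the review text, not verified against the real tool.

```python
# Hypothetical stand-ins mirroring the format the review describes.
ALL_ACTIONS = ["list_packages", "add_package", "remove_package"]
message = f"Unknown action 'bogus'. Valid actions: {', '.join(ALL_ACTIONS)}"

# Too permissive: passes if *either* token appears.
assert "Valid actions" in message or "add_package" in message

# Tighter: requires the header text *and* a known action token.
assert "Valid actions" in message and "add_package" in message

# Strictest: exact equality fails if the action list is incomplete or reordered.
expected = f"Unknown action 'bogus'. Valid actions: {', '.join(ALL_ACTIONS)}"
assert message == expected
```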
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 00018e3c-46c2-45fe-9ff6-fd3f62df901b
📒 Files selected for processing (1)
Server/tests/test_manage_packages.py
Actionable comments posted: 1
♻️ Duplicate comments (1)
Server/tests/test_manage_packages.py (1)
110-120: ⚠️ Potential issue | 🟡 Minor

Strengthen the unknown-action message assertion.

Line 119 only checks "Valid actions"; this can pass even if the valid-action list is incomplete or missing known entries. Assert the full expected string (or at least also assert a known action token).

Suggested test tightening

```diff
 result = asyncio.run(manage_packages(ctx, action="bogus"))
 assert result["success"] is False
-assert "Valid actions" in result["message"]
+expected = f"Unknown action 'bogus'. Valid actions: {', '.join(ALL_ACTIONS)}"
+assert result["message"] == expected
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Server/tests/test_manage_packages.py` around lines 110 - 120, Update the test_unknown_action_lists_valid_actions assertion to ensure the unknown-action error message includes a known valid action token (instead of only checking for the generic "Valid actions"); in the test function (test_unknown_action_lists_valid_actions) call manage_packages as before but add an assertion that result["message"] contains a specific expected action name from manage_packages (e.g., "install" or whichever action string is defined in the manage_packages implementation) so the test fails if the valid-action list is incomplete or missing entries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Server/tests/test_manage_packages.py`:
- Around line 142-149: The test should verify that manage_packages lowercases
the action before dispatching to _send_packages_command: after calling
manage_packages(ctx, action="LIST_PACKAGES") inspect the AsyncMock mock_send's
awaited call (mock_send.assert_awaited() and inspect mock_send.call_args) and
assert that the sent payload or argument includes the normalized string
"list_packages" (or that the action arg equals "list_packages"); update the
assertion to check mock_send.call_args/kwargs rather than only result["success"]
to prove case-insensitive normalization in the manage_packages ->
_send_packages_command flow.
---
Duplicate comments:
In `@Server/tests/test_manage_packages.py`:
- Around line 110-120: Update the test_unknown_action_lists_valid_actions
assertion to ensure the unknown-action error message includes a known valid
action token (instead of only checking for the generic "Valid actions"); in the
test function (test_unknown_action_lists_valid_actions) call manage_packages as
before but add an assertion that result["message"] contains a specific expected
action name from manage_packages (e.g., "install" or whichever action string is
defined in the manage_packages implementation) so the test fails if the
valid-action list is incomplete or missing entries.
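The case-normalization check requested in the inline comment above can be sketched against a stand-in coroutine. The body of the real `manage_packages` is not shown in this PR, so the lowercase-then-dispatch implementation here is an assumption made for illustration.

```python
import asyncio
from unittest.mock import AsyncMock

# Stand-in for the tool: models only the case-normalization behaviour under test.
async def manage_packages(ctx, action, send):
    action = action.lower()
    await send({"action": action})
    return {"success": True}

mock_send = AsyncMock(return_value={"success": True})
result = asyncio.run(manage_packages(ctx=None, action="LIST_PACKAGES", send=mock_send))

# Inspect the awaited call instead of only result["success"].
mock_send.assert_awaited_once()
sent_payload = mock_send.call_args.args[0]
```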
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0d217b18-182b-4016-b345-e0c6ac329104
📒 Files selected for processing (1)
Server/tests/test_manage_packages.py
Description

What: Added unit tests for the `manage_packages` tool and CLI commands.

Why: The tool had no test coverage despite being a recently added core tool listed as Tier 1 in the roadmap.

Tests cover:

Type of Change

Changes Made

`Server/tests/test_manage_packages.py` with 24 unit tests across 5 test classes:

- `TestActionLists` — verifies the `ALL_ACTIONS` list is complete and has no duplicates
- `TestManagePackagesToolValidation` — verifies the Python tool rejects invalid actions without connecting to Unity
- `TestPackagesQueryCLICommands` — verifies query commands (list, search, info, ping, status) build correct params
- `TestPackagesInstallRemoveCLICommands` — verifies install/remove commands (add, remove, embed, resolve) build correct params
- `TestRegistryCLICommands` — verifies registry commands (list-registries, add-registry, remove-registry) build correct params

Testing/Screenshots/Recordings
All 24 tests pass locally: 24 passed in 4.58s
No Unity connection required — tests run entirely in Python by mocking the transport layer.
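That transport-level mocking can be sketched as follows. The `unity_transport` module and its `send` function are hypothetical stand-ins; the PR description does not name the project's real transport layer.

```python
import sys
import types
from unittest.mock import patch

# Hypothetical transport module; calling it for real would need a live editor.
transport = types.ModuleType("unity_transport")
def _send(payload):
    raise ConnectionError("would require a live Unity editor")
transport.send = _send
sys.modules["unity_transport"] = transport

def list_packages():
    # Code under test: everything Unity-facing goes through the transport boundary.
    return transport.send({"action": "list_packages"})

# With the boundary patched, the test never attempts a Unity connection.
with patch("unity_transport.send", return_value={"success": True, "packages": []}) as mock_send:
    result = list_packages()

mock_send.assert_called_once_with({"action": "list_packages"})
```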
Documentation Updates

- `tools/UPDATE_DOCS_PROMPT.md` (recommended)
- `tools/UPDATE_DOCS.md`

Related Issues

N/A
Additional Notes
Tests follow the same structure and conventions as the existing `test_manage_animation.py`.

Summary by Sourcery
Add comprehensive unit test coverage for the manage_packages tool and its CLI commands to validate supported actions and parameter construction.
Tests:
Summary by CodeRabbit