LCORE-636: finished split unit tests for config models #516
Conversation
Walkthrough

The test suite under tests/unit/models/config is restructured: one prior test file is reduced to a placeholder focused on configuration dumping, and three new targeted unit test modules are added for InferenceConfiguration, ModelContextProtocolServer (and related configuration wiring), and UserDataCollection validation.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
Pre-merge checks: ✅ 3 passed
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/unit/models/config/test_user_data_collection.py (1)
34-42: Test function name doesn't match its behavior.

The test function `test_user_data_collection_transcripts_disabled()` actually tests the validation error when transcripts are enabled (True) with None storage.

```diff
-def test_user_data_collection_transcripts_disabled() -> None:
+def test_user_data_collection_transcripts_enabled_without_storage() -> None:
     """Test the UserDataCollection constructor for transcripts."""
-    # incorrect configuration
+    # transcripts enabled without storage - should fail
     with pytest.raises(
         ValueError,
         match="transcripts_storage is required when transcripts is enabled",
     ):
         UserDataCollection(transcripts_enabled=True, transcripts_storage=None)
```
🧹 Nitpick comments (4)
tests/unit/models/config/test_dump_configuration.py (1)
1-1: Update docstring to match the narrowed test scope.

The docstring mentions "checking ability to dump configuration" but could be more specific, since the file now focuses on configuration serialization and JSON dumping.

```diff
-"""Unit tests checking ability to dump configuration."""
+"""Unit tests for configuration dumping functionality."""
```

tests/unit/models/config/test_inference_configuration.py (3)
8-27: Parametrize constructor cases and drop redundant assert
- The two “happy-path” checks can be table-driven to reduce duplication.
- The object “is not None” asserts are redundant after successful construction.
Apply:
```diff
-def test_inference_constructor() -> None:
-    """
-    Test the InferenceConfiguration constructor with valid
-    parameters.
-    """
-    # Test with no default provider or model, as they are optional
-    inference_config = InferenceConfiguration()
-    assert inference_config is not None
-    assert inference_config.default_provider is None
-    assert inference_config.default_model is None
-
-    # Test with default provider and model
-    inference_config = InferenceConfiguration(
-        default_provider="default_provider",
-        default_model="default_model",
-    )
-    assert inference_config is not None
-    assert inference_config.default_provider == "default_provider"
-    assert inference_config.default_model == "default_model"
+@pytest.mark.parametrize(
+    "kwargs,expected_provider,expected_model",
+    [
+        ({}, None, None),
+        (
+            {"default_provider": "default_provider", "default_model": "default_model"},
+            "default_provider",
+            "default_model",
+        ),
+    ],
+)
+def test_inference_constructor(kwargs, expected_provider, expected_model) -> None:
+    cfg = InferenceConfiguration(**kwargs)
+    assert cfg.default_provider == expected_provider
+    assert cfg.default_model == expected_model
```
29-53: Deduplicate negative paths with parametrize

Both exception tests share the same structure; parametrize them to stay DRY and keep the error messages together.
Apply:
```diff
-def test_inference_default_model_missing() -> None:
-    """
-    Test case where only default provider is set, should fail
-    """
-    with pytest.raises(
-        ValueError,
-        match="Default model must be specified when default provider is set",
-    ):
-        InferenceConfiguration(
-            default_provider="default_provider",
-        )
-
-
-def test_inference_default_provider_missing() -> None:
-    """
-    Test case where only default model is set, should fail
-    """
-    with pytest.raises(
-        ValueError,
-        match="Default provider must be specified when default model is set",
-    ):
-        InferenceConfiguration(
-            default_model="default_model",
-        )
+@pytest.mark.parametrize(
+    "kwargs,err_msg",
+    [
+        (
+            {"default_provider": "default_provider"},
+            "Default model must be specified when default provider is set",
+        ),
+        (
+            {"default_model": "default_model"},
+            "Default provider must be specified when default model is set",
+        ),
+    ],
+)
+def test_inference_defaults_must_be_paired(kwargs, err_msg) -> None:
+    with pytest.raises(ValueError, match=err_msg):
+        InferenceConfiguration(**kwargs)
```
1-53: Clarify intended behavior for empty strings and explicit None

Do we consider empty strings valid "set" values? The current validator treats only None specially; empty strings pass when both are provided. If empty strings should be rejected, the model (and tests) need to enforce non-empty values.
Option A (accept empty strings): add

```python
def test_inference_empty_strings_are_accepted() -> None:
    cfg = InferenceConfiguration(default_provider="", default_model="")
    assert cfg.default_provider == ""
    assert cfg.default_model == ""
```

Option B (reject empty strings): add the following and then tighten the model accordingly:
```python
import re


def test_inference_empty_strings_are_rejected() -> None:
    with pytest.raises(
        ValueError, match=re.escape("Default provider/model cannot be empty")
    ):
        InferenceConfiguration(default_provider="", default_model="")
```

Would you like me to open a follow-up to align the model constraints and tests?
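For illustration, the combined validation Option B implies (paired defaults plus non-empty strings) can be sketched with stdlib-only code. This stand-in class is hypothetical; the real InferenceConfiguration lives in src/models/config.py and its field definitions are an assumption here:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class InferenceConfigSketch:
    """Hypothetical stand-in for InferenceConfiguration, illustration only."""

    default_provider: Optional[str] = None
    default_model: Optional[str] = None

    def __post_init__(self) -> None:
        # Option B behavior: reject empty strings outright.
        if self.default_provider == "" or self.default_model == "":
            raise ValueError("Default provider/model cannot be empty")
        # Enforce that provider and model are set (or unset) together.
        if self.default_provider is not None and self.default_model is None:
            raise ValueError(
                "Default model must be specified when default provider is set"
            )
        if self.default_model is not None and self.default_provider is None:
            raise ValueError(
                "Default provider must be specified when default model is set"
            )
```

A `__post_init__` hook plays the role a model validator would in the real configuration class.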
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- tests/unit/models/config/test_dump_configuration.py (1 hunks)
- tests/unit/models/config/test_inference_configuration.py (1 hunks)
- tests/unit/models/config/test_model_context_protocol_server.py (1 hunks)
- tests/unit/models/config/test_user_data_collection.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
tests/unit/models/config/test_inference_configuration.py (1)
src/models/config.py (2)
- config (130-136)
- InferenceConfiguration (434-451)
tests/unit/models/config/test_model_context_protocol_server.py (2)
src/models/config.py (6)
- config (130-136)
- ModelContextProtocolServer (159-164)
- LlamaStackConfiguration (167-210)
- UserDataCollection (213-230)
- Configuration (454-471)
- ServiceConfiguration (139-156)

src/configuration.py (2)

- mcp_servers (88-92)
- customization (117-121)
tests/unit/models/config/test_user_data_collection.py (1)
src/models/config.py (2)
- config (130-136)
- UserDataCollection (213-230)
tests/unit/models/config/test_dump_configuration.py (1)
src/models/config.py (8)
- config (130-136)
- ModelContextProtocolServer (159-164)
- LlamaStackConfiguration (167-210)
- UserDataCollection (213-230)
- DatabaseConfiguration (98-136)
- Configuration (454-471)
- ServiceConfiguration (139-156)
- InferenceConfiguration (434-451)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
🔇 Additional comments (4)
tests/unit/models/config/test_dump_configuration.py (1)
22-282: Excellent test coverage for configuration dumping with MCP servers.

The test functions comprehensively validate JSON serialization of configuration objects with different MCP server scenarios (empty, single, multiple). The test structure follows good practices with descriptive docstrings and clear assertions. The JSON comparison at the end of each test ensures complete validation of the serialized output.
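The comparison pattern these tests use can be sketched stdlib-only: dump the config to JSON, then compare parsed structures rather than raw strings so key ordering cannot cause spurious failures. The field names and the provider_id default below are assumptions for illustration, not the project's actual schema:

```python
import json
from dataclasses import asdict, dataclass, field
from typing import List


@dataclass
class McpServerSketch:
    """Hypothetical MCP server entry (field names assumed)."""

    name: str
    url: str
    provider_id: str = "model-context-protocol"  # assumed default


@dataclass
class ConfigSketch:
    """Hypothetical top-level configuration fragment."""

    name: str
    mcp_servers: List[McpServerSketch] = field(default_factory=list)


cfg = ConfigSketch(
    name="test",
    mcp_servers=[McpServerSketch(name="srv1", url="http://localhost:8080")],
)
dumped = json.dumps(asdict(cfg))

expected = {
    "name": "test",
    "mcp_servers": [
        {
            "name": "srv1",
            "url": "http://localhost:8080",
            "provider_id": "model-context-protocol",
        }
    ],
}
# Compare parsed structures, not raw strings, so key order doesn't matter.
assert json.loads(dumped) == expected
```

Asserting on the parsed dict also produces a readable diff from pytest when a nested field diverges.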
tests/unit/models/config/test_model_context_protocol_server.py (2)
16-49: Thorough validation of ModelContextProtocolServer construction.

The test functions provide comprehensive coverage of the ModelContextProtocolServer model, including default behavior, custom provider IDs, and proper validation of required fields. The test structure is well-organized with clear assertions and proper use of pytest.raises for validation testing.
51-135: Comprehensive integration testing for Configuration with MCP servers.

These test functions effectively validate the integration between Configuration and ModelContextProtocolServer objects across different scenarios. The tests properly verify both the structural correctness and the specific values of nested configuration objects, ensuring robust validation of the configuration wiring.
tests/unit/models/config/test_inference_configuration.py (1)
1-6: LGTM: clear focus and imports

Docstring and the direct import from models.config read well; pytest is correctly used.
```python
def test_user_data_collection_feedback_enabled() -> None:
    """Test the UserDataCollection constructor for feedback."""
    # correct configuration
    cfg = UserDataCollection(feedback_enabled=False, feedback_storage=None)
    assert cfg is not None
    assert cfg.feedback_enabled is False
    assert cfg.feedback_storage is None
```

Test function name doesn't match its behavior.

The test function `test_user_data_collection_feedback_enabled()` actually tests the case where feedback is disabled (False) with None storage. This is misleading.

```diff
-def test_user_data_collection_feedback_enabled() -> None:
+def test_user_data_collection_feedback_disabled() -> None:
     """Test the UserDataCollection constructor for feedback."""
-    # correct configuration
+    # feedback disabled with no storage - should succeed
     cfg = UserDataCollection(feedback_enabled=False, feedback_storage=None)
     assert cfg is not None
     assert cfg.feedback_enabled is False
     assert cfg.feedback_storage is None
```
```python
def test_user_data_collection_feedback_disabled() -> None:
    """Test the UserDataCollection constructor for feedback."""
    # incorrect configuration
    with pytest.raises(
        ValueError,
        match="feedback_storage is required when feedback is enabled",
    ):
        UserDataCollection(feedback_enabled=True, feedback_storage=None)
```

Test function name doesn't match its behavior.

The test function `test_user_data_collection_feedback_disabled()` actually tests the validation error when feedback is enabled (True) with None storage.

```diff
-def test_user_data_collection_feedback_disabled() -> None:
+def test_user_data_collection_feedback_enabled_without_storage() -> None:
     """Test the UserDataCollection constructor for feedback."""
-    # incorrect configuration
+    # feedback enabled without storage - should fail
     with pytest.raises(
         ValueError,
         match="feedback_storage is required when feedback is enabled",
     ):
         UserDataCollection(feedback_enabled=True, feedback_storage=None)
```
```python
def test_user_data_collection_transcripts_enabled() -> None:
    """Test the UserDataCollection constructor for transcripts."""
    # correct configuration
    cfg = UserDataCollection(transcripts_enabled=False, transcripts_storage=None)
    assert cfg is not None
```

Test function name doesn't match its behavior.

The test function `test_user_data_collection_transcripts_enabled()` actually tests the case where transcripts are disabled (False) with None storage.

```diff
-def test_user_data_collection_transcripts_enabled() -> None:
+def test_user_data_collection_transcripts_disabled() -> None:
     """Test the UserDataCollection constructor for transcripts."""
-    # correct configuration
+    # transcripts disabled with no storage - should succeed
     cfg = UserDataCollection(transcripts_enabled=False, transcripts_storage=None)
     assert cfg is not None
+    assert cfg.transcripts_enabled is False
+    assert cfg.transcripts_storage is None
```
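The validation all three review threads exercise (storage required whenever the corresponding flag is enabled) can be sketched stdlib-only. This stand-in is hypothetical; the real UserDataCollection is defined in src/models/config.py, and only the quoted error messages are taken from the source:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class UserDataCollectionSketch:
    """Hypothetical stand-in for UserDataCollection, illustration only."""

    feedback_enabled: bool = False
    feedback_storage: Optional[str] = None
    transcripts_enabled: bool = False
    transcripts_storage: Optional[str] = None

    def __post_init__(self) -> None:
        # Each flag may only be enabled when its storage path is provided.
        if self.feedback_enabled and self.feedback_storage is None:
            raise ValueError(
                "feedback_storage is required when feedback is enabled"
            )
        if self.transcripts_enabled and self.transcripts_storage is None:
            raise ValueError(
                "transcripts_storage is required when transcripts is enabled"
            )
```

Under this shape, the renamed tests map one-to-one onto the two branches: disabled flags accept None storage, enabled flags without storage raise.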
…mping configuration
19d70ea to e0e8024
Description
LCORE-636: finished split unit tests for config models
Type of change
Related Tickets & Documents
Summary by CodeRabbit