@tisnik tisnik commented Sep 9, 2025

Description

LCORE-636: finished splitting unit tests for config models

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #LCORE-636

Summary by CodeRabbit

  • Tests
    • Reoriented configuration tests to focus on dumping behavior.
    • Added unit tests for inference defaults and provider/model validation.
    • Added tests for protocol server setup, including default/custom identifiers and integration within overall configuration.
    • Added tests for user data collection toggles, ensuring required storage validation.
    • Removed obsolete configuration tests and streamlined test structure to emphasize critical scenarios.

coderabbitai bot commented Sep 9, 2025

Walkthrough

Test suite under tests/unit/models/config is restructured: one prior test file is reduced to a placeholder focused on configuration dumping, and three new targeted unit test modules are added for InferenceConfiguration, ModelContextProtocolServer (and related configuration wiring), and UserDataCollection validation.

Changes

Cohort / File(s) Summary of Changes
Refocus dump configuration tests
tests/unit/models/config/test_dump_configuration.py
Rewritten to narrow scope to dumping configuration; removes extensive prior config validation tests, leaving a skeleton placeholder for test_dump_configuration. Imports adjusted.
Add InferenceConfiguration tests
tests/unit/models/config/test_inference_configuration.py
New tests validate default construction, explicit defaults, and error handling when default provider/model are inconsistently specified.
Add MCP server and configuration wiring tests
tests/unit/models/config/test_model_context_protocol_server.py
New tests for ModelContextProtocolServer defaults/custom provider_id, validation errors for missing required fields, and wiring checks for Configuration/ServiceConfiguration/LlamaStackConfiguration with 0/1/multiple MCP servers.
Add UserDataCollection validation tests
tests/unit/models/config/test_user_data_collection.py
New tests ensure errors when feedback/transcripts are enabled without storage and success when features are disabled without storage.
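The validation rules these new tests exercise can be sketched roughly as follows. This is a simplified, hypothetical stand-in for the Pydantic models in src/models/config.py (field names and error messages are taken from the test descriptions above; the real models may differ):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class InferenceConfiguration:
    """Sketch: default provider and model must be specified together."""

    default_provider: Optional[str] = None
    default_model: Optional[str] = None

    def __post_init__(self) -> None:
        # Reject a provider without a model, and vice versa.
        if self.default_provider is not None and self.default_model is None:
            raise ValueError(
                "Default model must be specified when default provider is set"
            )
        if self.default_model is not None and self.default_provider is None:
            raise ValueError(
                "Default provider must be specified when default model is set"
            )


@dataclass
class UserDataCollection:
    """Sketch: a storage location is mandatory once the feature is enabled."""

    feedback_enabled: bool = False
    feedback_storage: Optional[str] = None
    transcripts_enabled: bool = False
    transcripts_storage: Optional[str] = None

    def __post_init__(self) -> None:
        if self.feedback_enabled and self.feedback_storage is None:
            raise ValueError("feedback_storage is required when feedback is enabled")
        if self.transcripts_enabled and self.transcripts_storage is None:
            raise ValueError(
                "transcripts_storage is required when transcripts is enabled"
            )
```

With this shape, the happy-path tests construct the models with both fields unset (or both set), and the negative tests assert that the mismatched combinations raise ValueError.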

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks (3 passed)

✅ Passed checks (3 passed)
Check name Status Explanation
Title Check ✅ Passed The title clearly summarizes the primary change by indicating the completion of splitting unit tests for configuration models and includes the relevant ticket identifier, making it concise, specific, and directly related to the changeset.
Description Check ✅ Passed The description accurately states the completion of splitting unit tests for config models, correctly categorizes the change as a unit test improvement, and directly aligns with the modifications introduced in the pull request.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

Poem

I hop through tests with whiskers twitching,
New specs tucked in, old ones unglitching.
MCPs march tidy, defaults in tune,
Storage rules checked beneath the moon.
A carrot-coded cheer — tests passing soon! 🥕

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Flag potential breaking changes that are not documented:
          1. Identify changes to public APIs/exports, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints (including removed/renamed items and changes to types, required params, return values, defaults, or behavior).
          2. Ignore purely internal/private changes (e.g., code not exported from package entry points or marked internal).
          3. Verify documentation exists: a "Breaking Change" section in the PR description and updates to CHANGELOG.md.

Please share your feedback with us on this Discord post.


📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 19d70ea and e0e8024.

📒 Files selected for processing (2)
  • tests/unit/models/config/test_dump_configuration.py (1 hunks)
  • tests/unit/models/config/test_model_context_protocol_server.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/unit/models/config/test_model_context_protocol_server.py
  • tests/unit/models/config/test_dump_configuration.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build-pr
  • GitHub Check: unit_tests (3.12)
  • GitHub Check: e2e_tests

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/unit/models/config/test_user_data_collection.py (1)

34-42: Test function name doesn't match its behavior.

The test function test_user_data_collection_transcripts_disabled() actually tests the validation error when transcripts are enabled (True) with None storage.

-def test_user_data_collection_transcripts_disabled() -> None:
+def test_user_data_collection_transcripts_enabled_without_storage() -> None:
     """Test the UserDataCollection constructor for transcripts."""
-    # incorrect configuration
+    # transcripts enabled without storage - should fail
     with pytest.raises(
         ValueError,
         match="transcripts_storage is required when transcripts is enabled",
     ):
         UserDataCollection(transcripts_enabled=True, transcripts_storage=None)
🧹 Nitpick comments (4)
tests/unit/models/config/test_dump_configuration.py (1)

1-1: Update docstring to match the narrowed test scope.

The docstring mentions "checking ability to dump configuration" but could be more specific since the file now focuses on configuration serialization and JSON dumping functionality.

-"""Unit tests checking ability to dump configuration."""
+"""Unit tests for configuration dumping functionality."""
tests/unit/models/config/test_inference_configuration.py (3)

8-27: Parametrize constructor cases and drop redundant assert

  • The two “happy-path” checks can be table-driven to reduce duplication.
  • The object “is not None” asserts are redundant after successful construction.

Apply:

-def test_inference_constructor() -> None:
-    """
-    Test the InferenceConfiguration constructor with valid
-    parameters.
-    """
-    # Test with no default provider or model, as they are optional
-    inference_config = InferenceConfiguration()
-    assert inference_config is not None
-    assert inference_config.default_provider is None
-    assert inference_config.default_model is None
-
-    # Test with default provider and model
-    inference_config = InferenceConfiguration(
-        default_provider="default_provider",
-        default_model="default_model",
-    )
-    assert inference_config is not None
-    assert inference_config.default_provider == "default_provider"
-    assert inference_config.default_model == "default_model"
+@pytest.mark.parametrize(
+    "kwargs,expected_provider,expected_model",
+    [
+        ({}, None, None),
+        ({"default_provider": "default_provider", "default_model": "default_model"},
+         "default_provider", "default_model"),
+    ],
+)
+def test_inference_constructor(kwargs, expected_provider, expected_model) -> None:
+    cfg = InferenceConfiguration(**kwargs)
+    assert cfg.default_provider == expected_provider
+    assert cfg.default_model == expected_model

29-53: Deduplicate negative paths with parametrize

Both exception tests share the same structure; parametrize them to stay DRY and keep messages together.

Apply:

-def test_inference_default_model_missing() -> None:
-    """
-    Test case where only default provider is set, should fail
-    """
-    with pytest.raises(
-        ValueError,
-        match="Default model must be specified when default provider is set",
-    ):
-        InferenceConfiguration(
-            default_provider="default_provider",
-        )
-
-
-def test_inference_default_provider_missing() -> None:
-    """
-    Test case where only default model is set, should fail
-    """
-    with pytest.raises(
-        ValueError,
-        match="Default provider must be specified when default model is set",
-    ):
-        InferenceConfiguration(
-            default_model="default_model",
-        )
+@pytest.mark.parametrize(
+    "kwargs,err_msg",
+    [
+        ({"default_provider": "default_provider"},
+         "Default model must be specified when default provider is set"),
+        ({"default_model": "default_model"},
+         "Default provider must be specified when default model is set"),
+    ],
+)
+def test_inference_defaults_must_be_paired(kwargs, err_msg) -> None:
+    with pytest.raises(ValueError, match=err_msg):
+        InferenceConfiguration(**kwargs)

1-53: Clarify intended behavior for empty strings and explicit None

Do we consider empty strings valid “set” values? Current validator treats only None specially; empty strings pass when both provided. If empty strings should be rejected, the model (and tests) need to enforce non-empty.

Option A (accept empty strings) — add:

def test_inference_empty_strings_are_accepted() -> None:
    cfg = InferenceConfiguration(default_provider="", default_model="")
    assert cfg.default_provider == ""
    assert cfg.default_model == ""

Option B (reject empty strings) — add and then tighten the model accordingly:

import re

def test_inference_empty_strings_are_rejected() -> None:
    with pytest.raises(ValueError, match=re.escape("Default provider/model cannot be empty")):
        InferenceConfiguration(default_provider="", default_model="")

Would you like me to open a follow-up to align the model constraints and tests?

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 16d12e0 and 19d70ea.

📒 Files selected for processing (4)
  • tests/unit/models/config/test_dump_configuration.py (1 hunks)
  • tests/unit/models/config/test_inference_configuration.py (1 hunks)
  • tests/unit/models/config/test_model_context_protocol_server.py (1 hunks)
  • tests/unit/models/config/test_user_data_collection.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
tests/unit/models/config/test_inference_configuration.py (1)
src/models/config.py (2)
  • config (130-136)
  • InferenceConfiguration (434-451)
tests/unit/models/config/test_model_context_protocol_server.py (2)
src/models/config.py (6)
  • config (130-136)
  • ModelContextProtocolServer (159-164)
  • LlamaStackConfiguration (167-210)
  • UserDataCollection (213-230)
  • Configuration (454-471)
  • ServiceConfiguration (139-156)
src/configuration.py (2)
  • mcp_servers (88-92)
  • customization (117-121)
tests/unit/models/config/test_user_data_collection.py (1)
src/models/config.py (2)
  • config (130-136)
  • UserDataCollection (213-230)
tests/unit/models/config/test_dump_configuration.py (1)
src/models/config.py (8)
  • config (130-136)
  • ModelContextProtocolServer (159-164)
  • LlamaStackConfiguration (167-210)
  • UserDataCollection (213-230)
  • DatabaseConfiguration (98-136)
  • Configuration (454-471)
  • ServiceConfiguration (139-156)
  • InferenceConfiguration (434-451)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests
🔇 Additional comments (4)
tests/unit/models/config/test_dump_configuration.py (1)

22-282: Excellent test coverage for configuration dumping with MCP servers.

The test functions comprehensively validate JSON serialization of configuration objects with different MCP server scenarios (empty, single, multiple). The test structure follows good practices with descriptive docstrings and clear assertions. The JSON comparison at the end of each test ensures complete validation of the serialized output.

tests/unit/models/config/test_model_context_protocol_server.py (2)

16-49: Thorough validation of ModelContextProtocolServer construction.

The test functions provide comprehensive coverage of the ModelContextProtocolServer model, including default behavior, custom provider IDs, and proper validation of required fields. The test structure is well-organized with clear assertions and proper use of pytest.raises for validation testing.


51-135: Comprehensive integration testing for Configuration with MCP servers.

These test functions effectively validate the integration between Configuration and ModelContextProtocolServer objects across different scenarios. The tests properly verify both the structural correctness and the specific values of nested configuration objects, ensuring robust validation of the configuration wiring.

tests/unit/models/config/test_inference_configuration.py (1)

1-6: LGTM: clear focus and imports

Docstring + direct import from models.config read well; pytest is correctly used.

Comment on lines +8 to +15
def test_user_data_collection_feedback_enabled() -> None:
    """Test the UserDataCollection constructor for feedback."""
    # correct configuration
    cfg = UserDataCollection(feedback_enabled=False, feedback_storage=None)
    assert cfg is not None
    assert cfg.feedback_enabled is False
    assert cfg.feedback_storage is None


⚠️ Potential issue

Test function name doesn't match its behavior.

The test function test_user_data_collection_feedback_enabled() actually tests the case where feedback is disabled (False) with None storage. This is misleading.

-def test_user_data_collection_feedback_enabled() -> None:
+def test_user_data_collection_feedback_disabled() -> None:
     """Test the UserDataCollection constructor for feedback."""
-    # correct configuration
+    # feedback disabled with no storage - should succeed
     cfg = UserDataCollection(feedback_enabled=False, feedback_storage=None)
     assert cfg is not None
     assert cfg.feedback_enabled is False
     assert cfg.feedback_storage is None
🤖 Prompt for AI Agents
In tests/unit/models/config/test_user_data_collection.py around lines 8 to 15,
the test function name and docstring claim to test "feedback enabled" but
actually constructs UserDataCollection with feedback_enabled=False and
feedback_storage=None; rename the function (for example to
test_user_data_collection_feedback_disabled_with_none_storage) and update the
docstring to reflect that it verifies the constructor when feedback is disabled
and storage is None so the test name and description match the behavior.

Comment on lines +17 to +25
def test_user_data_collection_feedback_disabled() -> None:
    """Test the UserDataCollection constructor for feedback."""
    # incorrect configuration
    with pytest.raises(
        ValueError,
        match="feedback_storage is required when feedback is enabled",
    ):
        UserDataCollection(feedback_enabled=True, feedback_storage=None)


⚠️ Potential issue

Test function name doesn't match its behavior.

The test function test_user_data_collection_feedback_disabled() actually tests the validation error when feedback is enabled (True) with None storage.

-def test_user_data_collection_feedback_disabled() -> None:
+def test_user_data_collection_feedback_enabled_without_storage() -> None:
     """Test the UserDataCollection constructor for feedback."""
-    # incorrect configuration
+    # feedback enabled without storage - should fail
     with pytest.raises(
         ValueError,
         match="feedback_storage is required when feedback is enabled",
     ):
         UserDataCollection(feedback_enabled=True, feedback_storage=None)
🤖 Prompt for AI Agents
In tests/unit/models/config/test_user_data_collection.py around lines 17 to 25,
the test function name and docstring are misleading: it is named
test_user_data_collection_feedback_disabled() but actually asserts that enabling
feedback without storage raises a ValueError. Rename the function to reflect the
behavior (e.g.,
test_user_data_collection_feedback_enabled_without_storage_raises) and update
the docstring to describe that the constructor should raise when
feedback_enabled=True and feedback_storage=None; keep the assertion and
exception match unchanged.

Comment on lines +27 to +32
def test_user_data_collection_transcripts_enabled() -> None:
    """Test the UserDataCollection constructor for transcripts."""
    # correct configuration
    cfg = UserDataCollection(transcripts_enabled=False, transcripts_storage=None)
    assert cfg is not None


⚠️ Potential issue

Test function name doesn't match its behavior.

The test function test_user_data_collection_transcripts_enabled() actually tests the case where transcripts are disabled (False) with None storage.

-def test_user_data_collection_transcripts_enabled() -> None:
+def test_user_data_collection_transcripts_disabled() -> None:
     """Test the UserDataCollection constructor for transcripts."""
-    # correct configuration
+    # transcripts disabled with no storage - should succeed
     cfg = UserDataCollection(transcripts_enabled=False, transcripts_storage=None)
     assert cfg is not None
+    assert cfg.transcripts_enabled is False
+    assert cfg.transcripts_storage is None
🤖 Prompt for AI Agents
In tests/unit/models/config/test_user_data_collection.py around lines 27 to 32,
the test function name says "transcripts_enabled" but the body constructs
UserDataCollection with transcripts_enabled=False; either rename the test to
reflect the disabled case (e.g., test_user_data_collection_transcripts_disabled)
or change the constructed value to True to match the current name; update the
test name (or the test input) and any related docstring accordingly so the
function name matches the behavior.

@tisnik tisnik force-pushed the lcore-636-finished-split branch from 19d70ea to e0e8024 Compare September 9, 2025 06:22
@tisnik tisnik merged commit 528e0b9 into lightspeed-core:main Sep 9, 2025
19 checks passed