LCORE-491: E2E tests for Info, Models and Metrics endpoints #546
Conversation
Walkthrough

Updates the e2e workflow's generated service name to "Lightspeed Core Service (LCS)" and overhauls the e2e Info feature tests and step implementations. Tests now actively validate openapi.json, info (including the llama-stack version), the models structure for gpt-4o-mini, metrics content, and error handling when llama-stack is disrupted.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Tester as E2E Tests
    participant Svc as Lightspeed Core Service (LCS)
    participant Llama as llama-stack
    rect rgba(230,245,255,0.6)
        note over Tester,Svc: Success path
        Tester->>Svc: GET /info
        Svc->>Llama: Fetch stack/version
        Llama-->>Svc: Version, status
        Svc-->>Tester: 200 JSON {name, service_version, llama_stack_version}
        Tester->>Svc: GET /models
        Svc->>Llama: List models
        Llama-->>Svc: Models array
        Svc-->>Tester: 200 JSON models incl. gpt-4o-mini
    end
    rect rgba(255,235,230,0.6)
        note over Tester,Svc: Error path (disrupted llama-stack)
        Tester->>Svc: GET /info
        Svc->>Llama: Request fails
        Llama--x Svc: Error
        Svc-->>Tester: 500 JSON error payload
    end
```
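The two paths in the diagram can be condensed into a single assertion helper. This is an illustrative sketch, not code from the PR — the function name `check_info_response` and the `detail` error key are assumptions about the payload shape.

```python
def check_info_response(status_code: int, body: dict) -> None:
    """Validate /info for both the success and disrupted-llama-stack paths."""
    if status_code == 200:
        # Success path: all three fields from the diagram must be present.
        for key in ("name", "service_version", "llama_stack_version"):
            assert key in body, f"missing '{key}' in /info response"
    elif status_code == 500:
        # Error path: the service surfaces a JSON error payload
        # (the 'detail' key is an assumption here).
        assert "detail" in body, "expected an error payload on HTTP 500"
    else:
        raise AssertionError(f"unexpected status code {status_code}")
```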
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
tisnik
left a comment
LGTM
Actionable comments posted: 0
🧹 Nitpick comments (1)
tests/e2e/features/info.feature (1)
40-49: Duplicate scenario title needs correction.

Both models endpoint scenarios, on lines 33 and 40, have the same title "Check if models endpoint is working". The second scenario should have a more descriptive title indicating that it tests the error case.
Apply this diff to fix the duplicate scenario title:

```diff
-  Scenario: Check if models endpoint is working
+  Scenario: Check if models endpoint reports error when llama-stack connection is not working
```
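Nitpicks like this one are easy to catch mechanically. As a hedged sketch (the helper name `duplicate_scenario_titles` is made up, not part of the PR), a few lines of Python can flag repeated `Scenario:` titles in a Gherkin feature file:

```python
from collections import Counter

def duplicate_scenario_titles(feature_text: str) -> list[str]:
    """Return scenario titles that appear more than once in a feature file."""
    titles = [
        line.strip().removeprefix("Scenario:").strip()
        for line in feature_text.splitlines()
        if line.strip().startswith("Scenario:")
    ]
    return [title for title, count in Counter(titles).items() if count > 1]
```

A check like this could run in CI or a pre-commit hook before a reviewer ever sees the duplicate.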
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- .github/workflows/e2e_tests.yaml (1 hunks)
- tests/e2e/features/info.feature (1 hunks)
- tests/e2e/features/steps/info.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
🔇 Additional comments (10)
.github/workflows/e2e_tests.yaml (1)
68-68: Updated service name aligns with test expectations.

The service name change from "foo bar baz" to "Lightspeed Core Service (LCS)" is consistent with the test expectations defined in the feature file.
tests/e2e/features/steps/info.py (3)
7-17: Good refactor to JSON-based validation.

The function now properly validates the JSON response structure and provides clear error messages when assertions fail. The parameter name change from `system_prompt` to `service_name` is more accurate and descriptive.
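A minimal sketch of the JSON-based validation pattern described here; the helper name and exact assertion messages are assumptions, not copied from `tests/e2e/features/steps/info.py`.

```python
def check_service_name(response_json: dict, service_name: str) -> None:
    """Assert that the /info payload reports the expected service name."""
    assert "name" in response_json, "no 'name' field in /info response"
    actual = response_json["name"]
    assert actual == service_name, (
        f"expected service name '{service_name}', got '{actual}'"
    )
```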
19-28: Clean implementation for llama-stack version validation.

The function follows the same pattern as the other validation functions and provides appropriate error messaging.
30-57: Add descriptive None-check message; identifier lookup is already safe
- Replace the bare check with a message: assert gpt_model is not None, f"Model '{model}' not found in models list" — file: tests/e2e/features/steps/info.py.
- No change required for the identifier lookup: the code uses .get("identifier", "") so a missing key is already handled.
- Update the docstring to reflect the function validates the passed-in model parameter (e.g., "gpt-4o-mini"); optional: rename model_id → model_entry for clarity.
Likely an incorrect or invalid review comment.
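Putting the suggested fix together, a lookup along these lines would carry the descriptive message while keeping the already-safe `.get("identifier", "")` access; the surrounding function shape is a sketch, not the PR's actual step implementation.

```python
def find_model(models: list[dict], model: str) -> dict:
    """Locate a model entry by identifier, failing with a clear message."""
    gpt_model = next(
        # Missing "identifier" keys fall back to "" and simply don't match.
        (m for m in models if m.get("identifier", "") == model),
        None,
    )
    assert gpt_model is not None, f"Model '{model}' not found in models list"
    return gpt_model
```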
tests/e2e/features/info.feature (6)
1-1: Clean feature name update.

The feature name "Info tests" is more concise and appropriate than the previous "Info endpoint API tests".

10-14: Good OpenAPI endpoint validation.

The scenario properly validates the OpenAPI endpoint functionality with appropriate status code and content checks.

16-22: Service name and version validation looks correct.

The test validates the updated service name "Lightspeed Core Service (LCS)" and checks both the service version (0.2.0) and the llama-stack version (0.2.19).

23-32: Good negative test coverage.

The scenario properly tests error handling when the llama-stack connection is disrupted, expecting a 500 status code with a specific error JSON structure.

33-38: Models endpoint test structure is appropriate.

The test validates the models endpoint and checks for a proper gpt-4o-mini model structure.
51-56: Metrics endpoint validation is clear.

The scenario properly validates the metrics endpoint and checks for the presence of `ls_provider_model_configuration` in the response.
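Since /metrics responses are typically Prometheus exposition text (one sample per line, `#`-prefixed comment lines), a presence check like the scenario's can be sketched as below; the sample payload and helper name are illustrative assumptions, not taken from the PR.

```python
def metric_present(metrics_text: str, metric_name: str) -> bool:
    """Check whether a metric name appears as a sample in exposition text."""
    return any(
        line.startswith(metric_name)
        for line in metrics_text.splitlines()
        # Skip blank lines and HELP/TYPE comment lines.
        if line and not line.startswith("#")
    )
```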
Description
E2E tests for Info, Models and Metrics endpoints