Conversation


@radofuchs radofuchs commented Sep 16, 2025

Description

E2E tests for Info, Models and Metrics endpoints

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #LCORE-491
  • Closes #LCORE-491

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • New Features
    • Info endpoint now includes the llama-stack version.
  • Style
    • Updated service branding to “Lightspeed Core Service (LCS)” in Info responses.
  • Tests
    • Expanded end-to-end coverage for openapi.json, info, models, and metrics endpoints.
    • Added negative tests for disrupted llama-stack connectivity with clear error expectations.
    • Tightened model response structure validation (e.g., gpt-4o-mini).
  • Chores
    • Updated workflow-generated configuration to reflect new service name.


coderabbitai bot commented Sep 16, 2025

Walkthrough

Updates the e2e workflow’s generated service name to “Lightspeed Core Service (LCS)” and overhauls e2e Info feature tests and step implementations. Tests now actively validate openapi.json, info (including llama-stack version), models structure for gpt-4o-mini, metrics content, and error handling when llama-stack is disrupted.

Changes

  • E2E Workflow Generation (.github/workflows/e2e_tests.yaml): Adjusts generated lightspeed-stack.yaml content to set the service name to “Lightspeed Core Service (LCS)”; no other workflow logic changed.
  • E2E Feature Scenarios (tests/e2e/features/info.feature): Replaces commented placeholders with active scenarios for openapi.json, info, models, and metrics. Adds explicit checks for service name/version, llama-stack version, model structure for gpt-4o-mini, and 500 error cases when llama-stack is disrupted.
  • E2E Step Implementations (tests/e2e/features/steps/info.py): Refactors validations to JSON-based checks; renames the check_name_version parameter to service_name; adds check_llama_version and check_model_structure; removes an unused metrics step.
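
As an illustration, the activated scenarios in tests/e2e/features/info.feature might follow this shape; the step wording here is hypothetical, and only the service name, versions, and status codes come from the summary above:

```gherkin
Feature: Info tests

  Scenario: Check if info endpoint is working
    Given the service is started locally
    When I access the info endpoint
    Then the status code of the response is 200
    And the response reports name "Lightspeed Core Service (LCS)" and service version "0.2.0"
    And the response reports llama-stack version "0.2.19"

  Scenario: Check if info endpoint reports error when llama-stack connection is not working
    Given the service is started locally
    And the llama-stack connection is disrupted
    When I access the info endpoint
    Then the status code of the response is 500
```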

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor Tester as E2E Tests
  participant Svc as Lightspeed Core Service (LCS)
  participant Llama as llama-stack

  rect rgba(230,245,255,0.6)
  note over Tester,Svc: Success path
  Tester->>Svc: GET /info
  Svc->>Llama: Fetch stack/version
  Llama-->>Svc: Version, status
  Svc-->>Tester: 200 JSON {name, service_version, llama_stack_version}
  Tester->>Svc: GET /models
  Svc->>Llama: List models
  Llama-->>Svc: Models array
  Svc-->>Tester: 200 JSON models incl. gpt-4o-mini
  end

  rect rgba(255,235,230,0.6)
  note over Tester,Svc: Error path (disrupted llama-stack)
  Tester->>Svc: GET /info
  Svc->>Llama: Request fails
  Llama--x Svc: Error
  Svc-->>Tester: 500 JSON error payload
  end
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

Suggested reviewers

  • tisnik

Poem

I twitch my whiskers at the name refresh,
LCS now shines—succinct and fresh.
I hop through tests, both green and red,
Models counted, metrics read.
If llama-stack naps and fails to reply,
I file a 500 with a knowing sigh.
Carrots for coverage, oh my! 🥕

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check (Passed): check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check (Passed): the title "LCORE-491: E2E tests for Info, Models and Metrics endpoints" accurately summarizes the primary change, the addition of end-to-end tests for the Info, Models, and Metrics endpoints, and includes the ticket reference for traceability.
  • Docstring Coverage (Passed): docstring coverage is 100.00%, above the required 80.00% threshold.

@radofuchs radofuchs requested a review from tisnik September 16, 2025 08:44

@tisnik tisnik left a comment


LGTM


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/e2e/features/info.feature (1)

40-49: Duplicate scenario title needs correction.

Both the models endpoint scenarios on lines 33 and 40 have the same title "Check if models endpoint is working". The second scenario should have a more descriptive title indicating it tests the error case.

Apply this diff to fix the duplicate scenario title:

```diff
-  Scenario: Check if models endpoint is working
+  Scenario: Check if models endpoint reports error when llama-stack connection is not working
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d8025ac and caf23d9.

📒 Files selected for processing (3)
  • .github/workflows/e2e_tests.yaml (1 hunks)
  • tests/e2e/features/info.feature (1 hunks)
  • tests/e2e/features/steps/info.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests
🔇 Additional comments (10)
.github/workflows/e2e_tests.yaml (1)

68-68: Updated service name aligns with test expectations.

The service name change from "foo bar baz" to "Lightspeed Core Service (LCS)" is consistent with the test expectations defined in the feature file.

tests/e2e/features/steps/info.py (3)

7-17: Good refactor to JSON-based validation.

The function now properly validates JSON response structure and provides clear error messages when assertions fail. The parameter name change from system_prompt to service_name is more accurate and descriptive.


19-28: Clean implementation for llama-stack version validation.

The function follows the same pattern as the other validation functions and provides appropriate error messaging.


30-57: Add descriptive None-check message; identifier lookup is already safe

  • Replace the bare check with a message: assert gpt_model is not None, f"Model '{model}' not found in models list" — file: tests/e2e/features/steps/info.py.
  • No change required for the identifier lookup: the code uses .get("identifier", "") so a missing key is already handled.
  • Update the docstring to reflect the function validates the passed-in model parameter (e.g., "gpt-4o-mini"); optional: rename model_id → model_entry for clarity.

Likely an incorrect or invalid review comment.
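
Taken together, the comments above suggest checks along these lines. This is an illustrative sketch only: the real steps in tests/e2e/features/steps/info.py take behave's context object, and the payload shapes assumed here come from the walkthrough, not the actual code.

```python
# Illustrative sketch of the added step checks; the real implementations in
# tests/e2e/features/steps/info.py use behave's context and may differ.


def check_llama_version(info_json: dict, expected: str) -> None:
    """Assert the /info payload reports the expected llama-stack version."""
    found = info_json.get("llama_stack_version")
    assert found == expected, (
        f"Expected llama-stack version {expected!r}, got {found!r}"
    )


def check_model_structure(models_json: dict, model: str) -> None:
    """Assert the /models payload lists an entry for the given model."""
    entries = models_json.get("models", [])
    # .get with a default keeps the lookup safe when "identifier" is missing.
    match = next(
        (m for m in entries if m.get("identifier", "").startswith(model)), None
    )
    assert match is not None, f"Model {model!r} not found in models list"
```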

tests/e2e/features/info.feature (6)

1-1: Clean feature name update.

The feature name "Info tests" is more concise and appropriate than the previous "Info endpoint API tests".


10-14: Good OpenAPI endpoint validation.

The scenario properly validates the OpenAPI endpoint functionality with appropriate status code and content checks.


16-22: Service name and version validation looks correct.

The test validates the updated service name "Lightspeed Core Service (LCS)" and checks both service version (0.2.0) and llama-stack version (0.2.19).


23-32: Good negative test coverage.

The scenario properly tests error handling when llama-stack connection is disrupted, expecting a 500 status code with specific error JSON structure.


33-38: Models endpoint test structure is appropriate.

The test validates the models endpoint and checks for proper gpt-4o-mini model structure.


51-56: Metrics endpoint validation is clear.

The scenario properly validates the metrics endpoint and checks for the presence of ls_provider_model_configuration in the response.
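
A check like the one described could be sketched as a scan of the Prometheus text output for the metric name. This is a hypothetical helper, not the actual step code:

```python
def check_metric_present(metrics_text: str, metric_name: str) -> None:
    """Assert a metric with the given name appears in Prometheus text output."""
    samples = [
        line for line in metrics_text.splitlines()
        if line and not line.startswith("#")  # skip HELP/TYPE comment lines
    ]
    assert any(line.startswith(metric_name) for line in samples), (
        f"Metric {metric_name!r} not found in /metrics output"
    )
```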

@tisnik tisnik merged commit 5e2bc4e into lightspeed-core:main Sep 16, 2025
18 of 19 checks passed
