RSPEED-2817: add configuration guard to rlsapi_v1 /infer endpoint #1468
major wants to merge 1 commit into
Conversation
Walkthrough
An additional validation call to `check_configuration_loaded` was added to the rlsapi_v1 `/infer` endpoint.
Changes
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/app/endpoints/rlsapi_v1.py (1)
268-305: 🧹 Nitpick | 🔵 Trivial
Consider making `model_id` required or removing internal resolution.
The `model_id` fallback at line 294 is never exercised: `retrieve_simple_response` (line 262) and `infer_endpoint` (line 620) both resolve the model before calling `_call_llm`. This redundancy contradicts the "transport-only" claim in the docstring.
♻️ Suggested refactor: make `model_id` required
```diff
 async def _call_llm(
     question: str,
     instructions: str,
+    model_id: str,
     tools: Optional[list[Any]] = None,
-    model_id: Optional[str] = None,
 ) -> OpenAIResponseObject:
     """Call the LLM via the Responses API and return the full response object.

     This is a transport-only function: it calls the LLM and returns the raw
     response. Callers are responsible for token usage extraction and metrics.

     Args:
         question: The combined user input (question + context).
         instructions: System instructions for the LLM.
         tools: Optional list of MCP tool definitions for the LLM.
-        model_id: Fully qualified model identifier in provider/model format.
-            When omitted, the configured default model is used.
+        model_id: Fully qualified model identifier in provider/model format.

     Returns:
         The full OpenAIResponseObject from the LLM.

     Raises:
-        APIConnectionError: If the Llama Stack service is unreachable.
-        HTTPException: 503 if no default model is configured.
+        APIConnectionError: If the Llama Stack service is unreachable.
     """
     client = AsyncLlamaStackClientHolder().get_client()
-    resolved_model_id = model_id or await _get_default_model_id()
-    logger.debug("Using model %s for rlsapi v1 inference", resolved_model_id)
+    logger.debug("Using model %s for rlsapi v1 inference", model_id)
     response = await client.responses.create(
         input=question,
-        model=resolved_model_id,
+        model=model_id,
         instructions=instructions,
         tools=tools or [],
         stream=False,
         store=False,
     )
     return cast(OpenAIResponseObject, response)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/endpoints/rlsapi_v1.py` around lines 268-305: The `_call_llm` function currently resolves a default `model_id` internally (via `_get_default_model_id`), which is redundant because callers like `retrieve_simple_response` and `infer_endpoint` already resolve the model. Make `model_id` required by changing the signature of `_call_llm` to accept a non-optional `model_id`, remove the internal resolution (and the `await _get_default_model_id` call) so the function becomes a true transport-only wrapper, and update the docstring to reflect that `model_id` must be provided. Ensure callers (`retrieve_simple_response`, `infer_endpoint`) still pass the resolved `model_id` and remove any code or comments relying on the internal fallback.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/app/endpoints/rlsapi_v1.py`:
- Around lines 268-305: The `_call_llm` function currently resolves a default `model_id` internally (via `_get_default_model_id`), which is redundant because callers like `retrieve_simple_response` and `infer_endpoint` already resolve the model. Make `model_id` required by changing the signature of `_call_llm` to accept a non-optional `model_id`, remove the internal resolution (and the `await _get_default_model_id` call) so the function becomes a true transport-only wrapper, and update the docstring to reflect that `model_id` must be provided. Ensure callers (`retrieve_simple_response`, `infer_endpoint`) still pass the resolved `model_id` and remove any code or comments relying on the internal fallback.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 74ac17e7-3b12-4bc9-955a-baf8d874f038
📒 Files selected for processing (2)
- src/app/endpoints/rlsapi_v1.py
- tests/unit/app/endpoints/test_rlsapi_v1.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: E2E: server mode / ci
- GitHub Check: E2E: library mode / ci
- GitHub Check: E2E Tests for Lightspeed Evaluation job
- GitHub Check: build-pr
🧰 Additional context used
📓 Path-based instructions (4)
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
tests/**/*.py: Use pytest for all unit and integration tests; do not use unittest
Use the `pytest.mark.asyncio` marker for async unit tests
Files:
tests/unit/app/endpoints/test_rlsapi_v1.py
tests/unit/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Use `pytest-mock` for AsyncMock objects in unit tests
Files:
tests/unit/app/endpoints/test_rlsapi_v1.py
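For illustration, here is a minimal test in the style these guidelines describe. This is a hedged sketch assuming `pytest-asyncio` and `pytest-mock` are installed; the mocked coroutine and its return value are invented for the example:

```python
import pytest


@pytest.mark.asyncio
async def test_fake_llm_call(mocker):
    """Async unit test using the pytest.mark.asyncio marker and pytest-mock."""
    # mocker.AsyncMock is pytest-mock's alias for unittest.mock.AsyncMock.
    fake_call = mocker.AsyncMock(return_value="ok")
    assert await fake_call("question") == "ok"
    fake_call.assert_awaited_once_with("question")
```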
src/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.py: Use absolute imports for internal modules (e.g., `from authentication import get_auth_dependency`)
Use `from llama_stack_client import AsyncLlamaStackClient` for Llama Stack imports
Check `constants.py` for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use `logger = get_logger(__name__)` from `log.py` for module logging
Type aliases defined at module level for clarity
All functions require docstrings with brief descriptions
Use complete type annotations for function parameters and return types
Use union types with modern syntax: `str | int` instead of `Union[str, int]`
Use `Optional[Type]` for optional types in type annotations
Use snake_case with descriptive, action-oriented names for functions (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns: return new data structures instead of modifying parameters
Use `async def` for I/O operations and external API calls
Handle `APIConnectionError` from Llama Stack in error handling
Use `logger.debug()` for detailed diagnostic information
Use `logger.info()` for general information about program execution
Use `logger.warning()` for unexpected events or potential problems
Use `logger.error()` for serious problems that prevented function execution
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with descriptive names and standard suffixes: `Configuration`, `Error`/`Exception`, `Resolver`, `Interface`
Use complete type annotations for all class attributes; avoid using `Any`
Follow Google Python docstring conventions for all modules, classes, and functions
Include `Parameters:`, `Returns:`, `Raises:` sections in function docstrings as needed
Files:
src/app/endpoints/rlsapi_v1.py
src/app/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
src/app/**/*.py: Use `from fastapi import APIRouter, HTTPException, Request, status, Depends` for FastAPI dependencies
Use FastAPI `HTTPException` with appropriate status codes for API endpoint error handling
Files:
src/app/endpoints/rlsapi_v1.py
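To make several of these conventions concrete, here is a hypothetical helper written in that style. The module and `get_model_name` are invented for illustration; only the `log.get_logger` factory comes from the guidelines above:

```python
"""Hypothetical helper module illustrating the coding conventions above."""

from typing import Optional

from log import get_logger  # project logger factory named in the guidelines

logger = get_logger(__name__)


def get_model_name(model_id: Optional[str] = None) -> str | None:
    """Return the model name portion of a provider/model identifier.

    Parameters:
        model_id: Optional identifier in provider/model format.

    Returns:
        The model name, or None when no identifier was given.
    """
    if model_id is None:
        logger.debug("No model identifier supplied")
        return None
    return model_id.split("/", 1)[-1]
```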
🧠 Learnings (1)
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.
Applied to files:
src/app/endpoints/rlsapi_v1.py
🔇 Additional comments (8)
src/app/endpoints/rlsapi_v1.py (7)
41-41: LGTM! Import for `check_configuration_loaded` is correctly added from `utils.endpoints`.
257-265: LGTM! The refactoring of `retrieve_simple_response` to delegate to `_call_llm` and unconditionally extract token usage is clean. The `model_id` resolution is handled consistently for both the LLM call and metrics extraction.
445-462: LGTM! Clean extraction of the verbose-mode decision logic. The dual opt-in requirement (server config + client request) is clearly documented and correctly implemented.
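For readers outside the diff, the dual opt-in reduces to a single conjunction. A hedged sketch, assuming the server flag is `allow_verbose_infer` (per the learnings recorded below) and the client sends a boolean request field:

```python
def is_verbose_mode_enabled(allow_verbose_infer: bool, client_requested_verbose: bool) -> bool:
    """Verbose output requires both the server config flag and the client's request."""
    return allow_verbose_infer and client_requested_verbose
```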
465-515: LGTM! Well-structured helper that cleanly separates verbose and minimal response construction. The consistent field handling across both branches ensures predictable API responses.
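The shape of such a builder might look like the following sketch; the field names and dict return type are assumptions, not the actual `_build_infer_response` signature:

```python
from typing import Any, Optional


def build_infer_response(answer: str, raw_response: Optional[Any]) -> dict[str, Any]:
    """Assemble the endpoint response, adding raw LLM output only in verbose mode."""
    body: dict[str, Any] = {"answer": answer}
    if raw_response is not None:
        # Callers pass raw_response only when verbose mode is enabled.
        body["raw_response"] = raw_response
    return body
```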
599-600: Configuration guard correctly placed. Adding `check_configuration_loaded(configuration)` at the start of the endpoint ensures a clean HTTP 500 response instead of an opaque `LogicError` crash when configuration is missing. This aligns with the pattern used by other endpoints.
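A minimal sketch of what such a guard can look like; the real helper lives in `utils.endpoints`, and its exact message and checks may differ:

```python
from typing import Any, Optional

from fastapi import HTTPException, status


def check_configuration_loaded(configuration: Optional[Any]) -> None:
    """Raise HTTP 500 when the application configuration was never loaded."""
    if configuration is None:
        raise HTTPException(
            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
            detail={"response": "Configuration is not loaded"},
        )
```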
625-641: LGTM on the refactored inference flow. The changes correctly:
- Check verbose mode before the try block
- Extract token usage unconditionally on success (line 637)
- Guard token extraction in the failure path with `if response is not None` (lines 640-641), which handles cases where exceptions occur after `_call_llm` returns but before the try block completes (see the sketch below)
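Schematically, the guarded flow looks like the sketch below; the helpers are injected stand-ins, since the real endpoint calls `_call_llm` and the project's token-usage extractor directly:

```python
from typing import Any, Awaitable, Callable, Optional


async def inference_flow(
    call_llm: Callable[[], Awaitable[Any]],
    extract_usage: Callable[[Any], Any],
) -> Any:
    """Illustrate the success/failure token-usage handling described above."""
    response: Optional[Any] = None
    try:
        response = await call_llm()
        return extract_usage(response)  # success path: extracted unconditionally
    except Exception:
        if response is not None:
            # The LLM call completed before a later step raised, so token
            # usage is still recoverable for metrics.
            extract_usage(response)
        raise
```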
677-682: LGTM! Clean delegation to `_build_infer_response` with the verbose-mode conditional correctly passing `response` or `None`.

tests/unit/app/endpoints/test_rlsapi_v1.py (1)
496-518: LGTM! Test correctly validates the configuration guard. The test properly simulates the unloaded configuration state by patching `_configuration = None` and verifies the expected HTTP 500 response. The `# pylint: disable=protected-access` comment appropriately acknowledges the private attribute access required for this test scenario.
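For context, a guard-level test of the same behavior could be as small as the sketch below; the import path follows the review comment above, while the helper's exact signature and acceptance of `None` are assumptions:

```python
import pytest
from fastapi import HTTPException

from utils.endpoints import check_configuration_loaded


def test_missing_configuration_maps_to_http_500() -> None:
    """A missing configuration should surface as a clean HTTP 500."""
    with pytest.raises(HTTPException) as exc_info:
        check_configuration_loaded(None)
    assert exc_info.value.status_code == 500
```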
Call check_configuration_loaded() before processing requests so a missing config returns a clean HTTP 500 instead of an opaque crash. Matches the pattern used by every other endpoint.

Ref: RSPEED-2817
Signed-off-by: Major Hayden <major@redhat.com>
367e0e6 to 23814cb (Compare)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/endpoints/rlsapi_v1.py`:
- Around lines 652-653: Remove the redundant configuration guard call: delete the second invocation of `check_configuration_loaded(configuration)` that appears after the initial call in the same endpoint flow (the duplicate call referenced in the diff). Keep the first call (the one at the start of the handler) and ensure there is only a single call to `check_configuration_loaded(configuration)` in this function to avoid unnecessary repetition.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: f03f0183-ae31-46f9-b6c8-fdfe42747c63
📒 Files selected for processing (1)
- src/app/endpoints/rlsapi_v1.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: E2E: server mode / ci
- GitHub Check: E2E: library mode / ci
- GitHub Check: E2E Tests for Lightspeed Evaluation job
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.py: Use absolute imports for internal modules (e.g., `from authentication import get_auth_dependency`)
Use `from llama_stack_client import AsyncLlamaStackClient` for Llama Stack imports
Check `constants.py` for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use `logger = get_logger(__name__)` from `log.py` for module logging
Type aliases defined at module level for clarity
All functions require docstrings with brief descriptions
Use complete type annotations for function parameters and return types
Use union types with modern syntax: `str | int` instead of `Union[str, int]`
Use `Optional[Type]` for optional types in type annotations
Use snake_case with descriptive, action-oriented names for functions (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns: return new data structures instead of modifying parameters
Use `async def` for I/O operations and external API calls
Handle `APIConnectionError` from Llama Stack in error handling
Use `logger.debug()` for detailed diagnostic information
Use `logger.info()` for general information about program execution
Use `logger.warning()` for unexpected events or potential problems
Use `logger.error()` for serious problems that prevented function execution
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with descriptive names and standard suffixes: `Configuration`, `Error`/`Exception`, `Resolver`, `Interface`
Use complete type annotations for all class attributes; avoid using `Any`
Follow Google Python docstring conventions for all modules, classes, and functions
Include `Parameters:`, `Returns:`, `Raises:` sections in function docstrings as needed
Files:
src/app/endpoints/rlsapi_v1.py
src/app/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
src/app/**/*.py: Use `from fastapi import APIRouter, HTTPException, Request, status, Depends` for FastAPI dependencies
Use FastAPI `HTTPException` with appropriate status codes for API endpoint error handling
Files:
src/app/endpoints/rlsapi_v1.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1469
File: src/models/config.py:1928-1933
Timestamp: 2026-04-07T14:44:42.022Z
Learning: In lightspeed-core/lightspeed-stack, `allow_verbose_infer` (previously `customization.allow_verbose_infer`, now `rlsapi_v1.allow_verbose_infer`) is only used internally by the `rlsapi_v1` `/infer` endpoint and has a single known consumer (the PR author). Backward compatibility for this config field relocation is intentionally not required and should not be flagged in future reviews.
📚 Learning: 2026-04-07T14:44:42.022Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1469
File: src/models/config.py:1928-1933
Timestamp: 2026-04-07T14:44:42.022Z
Learning: In lightspeed-core/lightspeed-stack, `allow_verbose_infer` (previously `customization.allow_verbose_infer`, now `rlsapi_v1.allow_verbose_infer`) is only used internally by the `rlsapi_v1` `/infer` endpoint and has a single known consumer (the PR author). Backward compatibility for this config field relocation is intentionally not required and should not be flagged in future reviews.
Applied to files:
src/app/endpoints/rlsapi_v1.py
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.
Applied to files:
src/app/endpoints/rlsapi_v1.py
```python
check_configuration_loaded(configuration)
```
🧹 Nitpick | 🔵 Trivial
Remove duplicate configuration guard call.
Line 652 repeats the same `check_configuration_loaded(configuration)` already executed at line 644. This check is idempotent and has no side effects, so the second call is unnecessary and should be removed to keep the endpoint flow clean.
Proposed diff

```diff
-    check_configuration_loaded(configuration)
-
     request_id = get_suid()
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
request_id = get_suid()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/app/endpoints/rlsapi_v1.py` around lines 652-653: Remove the redundant configuration guard call: delete the second invocation of `check_configuration_loaded(configuration)` that appears after the initial call in the same endpoint flow (the duplicate call referenced in the diff). Keep the first call (the one at the start of the handler) and ensure there is only a single call to `check_configuration_loaded(configuration)` in this function to avoid unnecessary repetition.
Ah, the other PR fixed this.
Description
Add `check_configuration_loaded()` call to the rlsapi_v1 `/infer` endpoint so a missing config returns a clean HTTP 500 instead of an opaque `LogicError` crash. Matches the pattern used by every other endpoint.

Type of change
Tools used to create PR
Related Tickets & Documents
Checklist before requesting a review
Testing
- `uv run make verify` - all linters pass
- `uv run pytest tests/unit/app/endpoints/test_rlsapi_v1.py` - 49/49 pass
- `test_infer_endpoint_configuration_not_loaded` verifies HTTP 500 when config is unloaded

Summary by CodeRabbit
No user-visible changes detected in this update. This appears to be an internal code improvement with no impact on end-user features or functionality.