
RSPEED-2820: add rlsapi_v1 config section with quota enforcement #1469

Merged: tisnik merged 2 commits into lightspeed-core:main from major:rlsapi-v1-quota-enforcement on Apr 7, 2026
Conversation

@major
Contributor

@major major commented Apr 7, 2026

Stacked on #1468 - review/merge that first.

Description

Add a dedicated rlsapi_v1 config section and configurable token quota enforcement for /v1/infer.

Moves allow_verbose_infer from the shared Customization section into a new rlsapi_v1 section. Adds a quota_subject field that selects which identity field (user_id, org_id, or system_id) to use for quota tracking; quota enforcement is disabled by default. Startup validation rejects org_id/system_id when rh-identity auth is not in use and warns when no quota limiters are configured.

No changes to core quota system.
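The subject-selection behavior described above can be sketched as follows. This is an illustrative stand-in, not the actual lightspeed-stack code: the `Identity` dataclass and `resolve_quota_subject` function are hypothetical names, though the fallback-to-user_id behavior mirrors what the PR describes.

```python
# Hypothetical sketch of quota_subject resolution; names are
# illustrative, not the real lightspeed-stack implementation.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Identity:
    """Minimal stand-in for an authenticated identity."""

    user_id: str
    org_id: Optional[str] = None
    system_id: Optional[str] = None


def resolve_quota_subject(
    identity: Identity, quota_subject: Optional[str]
) -> Optional[str]:
    """Return the identifier to track quota against, or None when disabled.

    Falls back to user_id when the requested field is missing on a given
    request, mirroring the runtime fallback described in the PR.
    """
    if quota_subject is None:
        return None  # quota enforcement disabled (the default)
    value = getattr(identity, quota_subject, None)
    return value if value is not None else identity.user_id


ident = Identity(user_id="u-1", org_id="org-42")
print(resolve_quota_subject(ident, None))         # disabled -> None
print(resolve_quota_subject(ident, "org_id"))     # -> "org-42"
print(resolve_quota_subject(ident, "system_id"))  # missing -> falls back to "u-1"
```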

Type of change

  • New feature
  • Configuration Update
  • Unit tests improvement

Tools used to create PR

  • Assisted-by: Claude (opencode)
  • Generated by: N/A

Related Tickets & Documents

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  1. uv run make verify passes all linters
  2. uv run pytest tests/unit/ -x passes 1915 tests
  3. Config validation: set quota_subject: org_id with authentication.module: noop, service rejects at startup
  4. Default behavior: omit rlsapi_v1 section entirely, quotas disabled, no behavioral change

Summary by CodeRabbit

  • New Features

    • Optional quota enforcement for /v1/infer with selectable subject scope (user, organization, system) and pre-checks plus token consumption.
    • Optional verbose metadata for /v1/infer when clients request include_metadata.
  • Configuration Updates

    • Verbose inference and quota subject moved into a dedicated rlsapi_v1 configuration section; warnings raised when quota is incompletely configured.
  • Tests

    • Added comprehensive unit tests covering quota resolution, enforcement behavior, and rlsapi_v1 schema/validation.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 7, 2026

Walkthrough

Introduces a new top-level rlsapi_v1 configuration section and wires quota enforcement and verbose-metadata control into /v1/infer, including RH-identity-aware quota subject resolution, pre-inference token checks, and post-inference token consumption, plus the supporting model, config accessor, example, and tests.

Changes

  • Configuration Models (src/models/config.py): Add RlsapiV1Configuration (allow_verbose_infer, quota_subject); move allow_verbose_infer out of Customization; add rlsapi_v1 to Configuration; add a @model_validator to validate quota_subject against auth and warn when quota components are missing.
  • Configuration Accessor (src/configuration.py): Add an AppConfig.rlsapi_v1 property exposing a typed RlsapiV1Configuration with a loaded-configuration guard.
  • Endpoint Implementation (src/app/endpoints/rlsapi_v1.py): Add a configuration readiness check; introduce _resolve_quota_subject(request, auth) using the configured quota_subject and RH identity context; perform a check_tokens_available() pre-check when quota is enabled; capture token_usage and call consume_query_tokens() after successful inference; switch the verbose check to configuration.rlsapi_v1.allow_verbose_infer.
  • Examples (examples/lightspeed-stack-rlsapi-cla.yaml): Add a top-level rlsapi_v1 YAML block (commented) with quota_subject and allow_verbose_infer options.
  • Endpoint Tests (tests/unit/app/endpoints/test_rlsapi_v1.py): Extend mocks to include rlsapi_v1, add a test for the missing-loaded-configuration error, add parameterized tests for _resolve_quota_subject, and add quota-enforcement tests (pre-check, token consumption, 429 propagation, interactions with RH identity and ShieldModeration blocking).
  • Config Dump Tests (tests/unit/models/config/test_dump_configuration.py): Update expected dumps to include rlsapi_v1 with default values.
  • Config Model Tests (tests/unit/models/config/test_rlsapi_v1_configuration.py): New tests for RlsapiV1Configuration defaults, allowed literals, extra-field rejection, and Configuration startup validation/warnings around quota_subject vs auth and quota component presence.
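The loaded-configuration guard mentioned for the accessor can be sketched like this. The class shapes are assumptions based on the change summary (using SimpleNamespace as a stand-in for the real Pydantic models), not the actual src/configuration.py code:

```python
# Illustrative sketch of a typed accessor with a loaded-configuration
# guard, as described for AppConfig.rlsapi_v1; shapes are assumptions.
from types import SimpleNamespace
from typing import Optional


class AppConfig:
    """Minimal stand-in for the application configuration holder."""

    def __init__(self) -> None:
        self._configuration: Optional[SimpleNamespace] = None

    def load(self, configuration: SimpleNamespace) -> None:
        """Attach a loaded configuration object."""
        self._configuration = configuration

    @property
    def rlsapi_v1(self) -> SimpleNamespace:
        """Return the rlsapi_v1 section, guarding against early access."""
        if self._configuration is None:
            raise RuntimeError("configuration is not loaded")
        return self._configuration.rlsapi_v1


cfg = AppConfig()
try:
    _ = cfg.rlsapi_v1  # accessing before load() should fail loudly
except RuntimeError as err:
    print(err)

cfg.load(SimpleNamespace(
    rlsapi_v1=SimpleNamespace(allow_verbose_infer=False, quota_subject=None)
))
print(cfg.rlsapi_v1.quota_subject)  # None (quota disabled by default)
```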

Sequence Diagram

sequenceDiagram
    participant Client
    participant Endpoint as RlsapiV1 Endpoint
    participant RHId as RH Identity<br/>(from Request)
    participant QuotaLimiter as Quota Limiter / Storage
    participant Inference as Inference Engine

    Client->>Endpoint: POST /v1/infer
    activate Endpoint

    Endpoint->>Endpoint: Ensure configuration loaded
    Endpoint->>Endpoint: _resolve_quota_subject(request, auth)
    Endpoint->>RHId: read identity context (if needed)

    alt quota_subject configured
        Endpoint->>QuotaLimiter: check_tokens_available(quota_id)
        QuotaLimiter-->>Endpoint: OK or HTTPException(429)
    end

    rect rgba(100, 200, 100, 0.5)
    Endpoint->>Inference: perform inference request
    Inference-->>Endpoint: inference response
    Endpoint->>Endpoint: extract_token_usage(response.usage, model_id)
    end

    alt inference successful and quota_subject configured
        Endpoint->>QuotaLimiter: consume_query_tokens(user_id=quota_id, model_id, token_usage)
        QuotaLimiter-->>Endpoint: OK
    end

    Endpoint-->>Client: return JSON response (with metadata if allowed)
    deactivate Endpoint

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 3 of 3 passed
  • Description Check — Passed (check skipped: CodeRabbit's high-level summary is enabled)
  • Title Check — Passed: the title clearly and specifically summarizes the main change, adding a rlsapi_v1 config section with quota enforcement.
  • Docstring Coverage — Passed: 95.24%, above the required 80.00% threshold.



@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/models/config.py`:
- Around line 1377-1385: The schema description for allow_verbose_infer
references a non-existent RBAC action Action.RLSAPI_V1_INFER_VERBOSE; update the
code so the doc and enforcement align by either (A) changing the description to
reference the existing Action.RLSAPI_V1_INFER enum value, or (B) add a new enum
member (e.g., RLSAPI_V1_INFER_VERBOSE) to the Action enum with the correct value
and any associated permissions, and then reference that new member in the
allow_verbose_infer description; locate references to allow_verbose_infer and
the Action enum to make the change so docs and RBAC names match.
- Around line 1928-1933: The config change dropped support for the legacy
customization.allow_verbose_infer key; add a backward-compat migration in the
model so old configs don't fail: in the Customization model (or whichever class
previously accepted allow_verbose_infer) add an optional field named
allow_verbose_infer (or an alias for it) and a root_validator or post-init hook
that, when allow_verbose_infer is present, moves/sets that value into the new
rlsapi_v1.allow_verbose_infer location (or equivalent field on
RlsapiV1Configuration) and emits a deprecation warning; keep
ConfigurationBase.extra="forbid" unchanged so unknown keys still error, but
ensure the legacy key is accepted and mapped before validation rejects extras.
- Around line 2017-2055: The validator validate_rlsapi_v1_quota_configuration
currently only checks quota_handlers.limiters and can miss the case where
limiters are empty because no storage backend is configured
(quota_handlers.sqlite and quota_handlers.postgres both unset); update the
validator to also check quota storage backends by verifying that at least one of
quota_handlers.sqlite or quota_handlers.postgres is configured (in addition to
limiters) before treating rlsapi_v1.quota_subject as effective, and if neither
backend is present and rlsapi_v1.quota_subject is set, emit a warning (or raise
ValueError) that quota_subject is enabled but no quota storage backend is
configured so quota enforcement will be a silent no-op.
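The storage-backend gap flagged above can be sketched as a model-level check. Field names (limiters, sqlite, postgres) follow the review comment, but the function shape is illustrative rather than the actual Pydantic validator in src/models/config.py:

```python
# Sketch of the suggested check: quota_subject is effective only when
# at least one limiter AND a storage backend are configured.
# Shapes are illustrative, not the real configuration models.
import warnings
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class QuotaHandlers:
    """Stand-in for the quota_handlers config section."""

    limiters: list = field(default_factory=list)
    sqlite: Optional[dict] = None
    postgres: Optional[dict] = None


def validate_quota_configuration(
    quota_subject: Optional[str], handlers: Optional[QuotaHandlers]
) -> None:
    """Warn when quota_subject is set but enforcement would be a no-op."""
    if quota_subject is None:
        return
    has_limiters = bool(handlers and handlers.limiters)
    has_backend = bool(handlers and (handlers.sqlite or handlers.postgres))
    if not has_limiters:
        warnings.warn("quota_subject is set but no quota limiters are configured")
    if not has_backend:
        warnings.warn(
            "quota_subject is set but no quota storage backend is configured"
        )
```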

In `@tests/unit/app/endpoints/test_rlsapi_v1.py`:
- Around line 510-512: The test currently mutates the process-wide AppConfig
singleton (via AppConfig() assigned to mock_config) which clears shared state
and leaks to other tests; instead create a separate, non-mutating fake config by
shallow-copying the singleton (e.g., copy.copy(AppConfig())), set
fake_config._configuration = None on that copy, and patch
app.endpoints.rlsapi_v1.configuration with the copied fake_config (referencing
AppConfig, mock_config/fake_config, and rlsapi_v1.configuration) so the real
singleton is not mutated and no teardown restore is required.
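The non-mutating fake-config approach from that comment can be sketched as below. `get_app_config` and the `_configuration` attribute are stand-ins for the real AppConfig singleton, but the copy.copy pattern is the point: patch the copy, never the shared instance.

```python
# Sketch: shallow-copy a shared singleton so test patches never leak.
import copy


class AppConfig:
    """Stand-in for the real configuration holder."""

    def __init__(self) -> None:
        self._configuration = "loaded"


_singleton = AppConfig()


def get_app_config() -> AppConfig:
    """Stand-in for AppConfig() returning the process-wide singleton."""
    return _singleton


real = get_app_config()
fake_config = copy.copy(real)      # separate object, same attribute values
fake_config._configuration = None  # mutate only the copy
assert real._configuration == "loaded"   # shared singleton untouched
assert fake_config._configuration is None
```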

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: b7d12f38-83cc-4f17-b69f-4a4f63deb82d

📥 Commits

Reviewing files that changed from the base of the PR and between 8b0938e and 3c2c22a.

📒 Files selected for processing (7)
  • examples/lightspeed-stack-rlsapi-cla.yaml
  • src/app/endpoints/rlsapi_v1.py
  • src/configuration.py
  • src/models/config.py
  • tests/unit/app/endpoints/test_rlsapi_v1.py
  • tests/unit/models/config/test_dump_configuration.py
  • tests/unit/models/config/test_rlsapi_v1_configuration.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: build-pr
  • GitHub Check: E2E: server mode / ci
  • GitHub Check: E2E: library mode / ci
  • GitHub Check: E2E Tests for Lightspeed Evaluation job
🧰 Additional context used
📓 Path-based instructions (6)
src/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.py: Use absolute imports for internal modules (e.g., from authentication import get_auth_dependency)
Use from llama_stack_client import AsyncLlamaStackClient for Llama Stack imports
Check constants.py for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use logger = get_logger(__name__) from log.py for module logging
Type aliases defined at module level for clarity
All functions require docstrings with brief descriptions
Use complete type annotations for function parameters and return types
Use union types with modern syntax: str | int instead of Union[str, int]
Use Optional[Type] for optional types in type annotations
Use snake_case with descriptive, action-oriented names for functions (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns: return new data structures instead of modifying parameters
Use async def for I/O operations and external API calls
Handle APIConnectionError from Llama Stack in error handling
Use logger.debug() for detailed diagnostic information
Use logger.info() for general information about program execution
Use logger.warning() for unexpected events or potential problems
Use logger.error() for serious problems that prevented function execution
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with descriptive names and standard suffixes: Configuration, Error/Exception, Resolver, Interface
Use complete type annotations for all class attributes; avoid using Any
Follow Google Python docstring conventions for all modules, classes, and functions
Include Parameters:, Returns:, Raises: sections in function docstrings as needed

Files:

  • src/configuration.py
  • src/app/endpoints/rlsapi_v1.py
  • src/models/config.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

tests/**/*.py: Use pytest for all unit and integration tests; do not use unittest
Use pytest.mark.asyncio marker for async unit tests

Files:

  • tests/unit/models/config/test_dump_configuration.py
  • tests/unit/models/config/test_rlsapi_v1_configuration.py
  • tests/unit/app/endpoints/test_rlsapi_v1.py
tests/unit/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Use pytest-mock for AsyncMock objects in unit tests

Files:

  • tests/unit/models/config/test_dump_configuration.py
  • tests/unit/models/config/test_rlsapi_v1_configuration.py
  • tests/unit/app/endpoints/test_rlsapi_v1.py
src/app/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

src/app/**/*.py: Use from fastapi import APIRouter, HTTPException, Request, status, Depends for FastAPI dependencies
Use FastAPI HTTPException with appropriate status codes for API endpoint error handling

Files:

  • src/app/endpoints/rlsapi_v1.py
src/models/config.py

📄 CodeRabbit inference engine (AGENTS.md)

src/models/config.py: All config uses Pydantic models extending ConfigurationBase
Base class sets extra="forbid" to reject unknown fields in Pydantic configuration models
Use type hints: Optional[FilePath], PositiveInt, SecretStr for configuration fields
Pydantic configuration models should extend ConfigurationBase

Files:

  • src/models/config.py
src/models/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

src/models/**/*.py: Use @field_validator and @model_validator for custom validation in Pydantic models
Use typing_extensions.Self for model validators in type annotations
Pydantic data models should extend BaseModel
Include Attributes: section in Pydantic model docstrings

Files:

  • src/models/config.py
🧠 Learnings (4)
📚 Learning: 2026-04-05T12:19:36.009Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-05T12:19:36.009Z
Learning: Applies to src/models/config.py : All config uses Pydantic models extending `ConfigurationBase`

Applied to files:

  • src/configuration.py
  • src/models/config.py
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.

Applied to files:

  • src/app/endpoints/rlsapi_v1.py
📚 Learning: 2026-01-12T10:58:40.230Z
Learnt from: blublinsky
Repo: lightspeed-core/lightspeed-stack PR: 972
File: src/models/config.py:459-513
Timestamp: 2026-01-12T10:58:40.230Z
Learning: In lightspeed-core/lightspeed-stack, for Python files under src/models, when a user claims a fix is done but the issue persists, verify the current code state before accepting the fix. Steps: review the diff, fetch the latest changes, run relevant tests, reproduce the issue, search the codebase for lingering references to the original problem, confirm the fix is applied and not undone by subsequent commits, and validate with local checks to ensure the issue is resolved.

Applied to files:

  • src/models/config.py
📚 Learning: 2026-02-25T07:46:33.545Z
Learnt from: asimurka
Repo: lightspeed-core/lightspeed-stack PR: 1211
File: src/models/responses.py:8-16
Timestamp: 2026-02-25T07:46:33.545Z
Learning: In the Python codebase, requests.py should use OpenAIResponseInputTool as Tool while responses.py uses OpenAIResponseTool as Tool. This difference is intentional due to differing schemas for input vs output tools in llama-stack-api. Apply this distinction consistently to other models under src/models (e.g., ensure request-related tools use the InputTool variant and response-related tools use the ResponseTool variant). If adding new tools, choose the corresponding InputTool or Tool class based on whether the tool represents input or output, and document the rationale in code comments.

Applied to files:

  • src/models/config.py

major added 2 commits April 7, 2026 09:51
Call check_configuration_loaded() before processing requests so a
missing config returns a clean HTTP 500 instead of an opaque crash.
Matches the pattern used by every other endpoint.

Ref: RSPEED-2817
Signed-off-by: Major Hayden <major@redhat.com>
Move allow_verbose_infer from the shared Customization config into a
new dedicated rlsapi_v1 config section so CLA-specific settings are
consolidated in one place and don't clutter shared configuration.

Add configurable quota enforcement for /v1/infer via quota_subject,
which selects the identity field (user_id, org_id, or system_id)
used as the quota subject. Disabled by default (quota_subject: null).

Startup validation rejects org_id/system_id quota_subject when the
authentication module is not rh-identity, and warns when quota_subject
is set but no quota limiters are configured. Falls back to user_id at
runtime if a specific request lacks rh-identity data.

No changes to the core quota system (utils/quota.py, quota limiters,
factory, or scheduler). All existing endpoints are unaffected.

Signed-off-by: Major Hayden <major@redhat.com>
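The startup rule in this commit message can be sketched as a small check. Function and constant names here are illustrative, not the exact code in src/models/config.py:

```python
# Hedged sketch of the startup rule: org_id/system_id quota subjects
# require the rh-identity authentication module. Names are illustrative.
from typing import Optional

RH_IDENTITY_MODULE = "rh-identity"


def validate_quota_subject_vs_auth(
    quota_subject: Optional[str], auth_module: str
) -> None:
    """Reject identity-scoped quota subjects without rh-identity auth."""
    if quota_subject in ("org_id", "system_id") and auth_module != RH_IDENTITY_MODULE:
        raise ValueError(
            f"quota_subject={quota_subject!r} requires the "
            f"{RH_IDENTITY_MODULE} authentication module, got {auth_module!r}"
        )


validate_quota_subject_vs_auth(None, "noop")             # disabled: always fine
validate_quota_subject_vs_auth("user_id", "noop")        # user_id needs no RH identity
validate_quota_subject_vs_auth("org_id", "rh-identity")  # allowed
```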
Contributor

@tisnik tisnik left a comment


LGTM

@tisnik
Contributor

tisnik commented Apr 7, 2026

/ok-to-test

@major major force-pushed the rlsapi-v1-quota-enforcement branch from 3c2c22a to cb2db15 on April 7, 2026 14:56
@major
Contributor Author

major commented Apr 7, 2026

@tisnik Coderabbit had some legitimate complaints. Those should be fixed now.

Also, this one is stacked on #1468

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/app/endpoints/rlsapi_v1.py`:
- Around line 486-509: The quota_subject "system_id" currently resolves to the
same value as "user_id" because auth[0] and the system_id returned by
_get_rh_identity_context() both come from RHIdentityData.get_user_id(); update
the source of system_id so it is distinct: change _get_rh_identity_context() (or
its callers) to obtain system_id via RHIdentityData.get_system_id() (or auth[1]
if that tuple holds the system id) and return that value, then keep the existing
logic in rlsapi_v1.py to use the distinct system_id for quota_subject ==
"system_id".

In `@src/models/config.py`:
- Around line 1369-1395: Add a missing "Attributes:" section to the
RlsapiV1Configuration model docstring describing the model fields (e.g.,
allow_verbose_infer, quota_subject and any other fields nearby) so the Pydantic
model docs follow the project guideline, and update the quota_subject Field
description to state that quota enforcement requires not only configured
quota_handlers but also at least one limiter plus a sqlite/postgres quota
backend (so it’s not a warning-only no-op); locate the class
RlsapiV1Configuration and the quota_subject Field to make these changes and
ensure the docstring format matches other src/models/*.py models.

In `@tests/unit/models/config/test_rlsapi_v1_configuration.py`:
- Around line 134-142: The test mutates the global "models.config" logger
propagate flag and always resets it to False; change it to save the original
state and restore it in the finally block — e.g., capture original_propagate =
config_logger.propagate after config_logger =
logging.getLogger("models.config"), then in finally set config_logger.propagate
= original_propagate; apply the same change to the other similar test (the one
around lines 164-172) that constructs Configuration(**config_dict) under caplog.
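The save/restore pattern suggested for the logger tests can be sketched like this. The "models.config" logger name comes from the review comment; the helper function is illustrative:

```python
# Sketch: temporarily enable a logger's propagate flag, restoring the
# original value afterwards instead of hard-coding False.
import logging


def run_with_propagation(logger_name: str, fn):
    """Run fn with propagation enabled, restoring the original flag."""
    config_logger = logging.getLogger(logger_name)
    original_propagate = config_logger.propagate
    config_logger.propagate = True
    try:
        return fn()
    finally:
        config_logger.propagate = original_propagate


logging.getLogger("models.config").propagate = False
run_with_propagation("models.config", lambda: None)
assert logging.getLogger("models.config").propagate is False  # restored
```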

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 19c7233c-d5cc-4cc1-a7c9-80e28e32a134

📥 Commits

Reviewing files that changed from the base of the PR and between 3c2c22a and cb2db15.

📒 Files selected for processing (7)
  • examples/lightspeed-stack-rlsapi-cla.yaml
  • src/app/endpoints/rlsapi_v1.py
  • src/configuration.py
  • src/models/config.py
  • tests/unit/app/endpoints/test_rlsapi_v1.py
  • tests/unit/models/config/test_dump_configuration.py
  • tests/unit/models/config/test_rlsapi_v1_configuration.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build-pr
  • GitHub Check: E2E: server mode / ci
  • GitHub Check: E2E: library mode / ci
🧠 Learnings (7)
📓 Common learnings
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1469
File: src/models/config.py:1928-1933
Timestamp: 2026-04-07T14:44:36.929Z
Learning: In lightspeed-core/lightspeed-stack, `allow_verbose_infer` (previously `customization.allow_verbose_infer`, now `rlsapi_v1.allow_verbose_infer`) is only used internally by the `rlsapi_v1` `/infer` endpoint and has a single known consumer (the PR author). Backward compatibility for this config field relocation is intentionally not required and should not be flagged in future reviews.
📚 Learning: 2026-04-05T12:19:36.009Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-05T12:19:36.009Z
Learning: Applies to src/models/config.py : All config uses Pydantic models extending `ConfigurationBase`

Applied to files:

  • src/configuration.py
  • tests/unit/models/config/test_rlsapi_v1_configuration.py
  • src/models/config.py
📚 Learning: 2026-04-07T14:44:36.929Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1469
File: src/models/config.py:1928-1933
Timestamp: 2026-04-07T14:44:36.929Z
Learning: In lightspeed-core/lightspeed-stack, `allow_verbose_infer` (previously `customization.allow_verbose_infer`, now `rlsapi_v1.allow_verbose_infer`) is only used internally by the `rlsapi_v1` `/infer` endpoint and has a single known consumer (the PR author). Backward compatibility for this config field relocation is intentionally not required and should not be flagged in future reviews.

Applied to files:

  • tests/unit/models/config/test_dump_configuration.py
  • src/app/endpoints/rlsapi_v1.py
  • tests/unit/app/endpoints/test_rlsapi_v1.py
  • examples/lightspeed-stack-rlsapi-cla.yaml
  • src/models/config.py
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.

Applied to files:

  • src/app/endpoints/rlsapi_v1.py
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase (src/app/endpoints/), `tools: Optional[list[Any]]` for MCP tool definitions is an intentional, consistent pattern used across all inference endpoints (query, responses, streaming_query, rlsapi_v1). Do not flag this as a typing issue — changing it in isolation would break consistency.

Applied to files:

  • src/models/config.py
📚 Learning: 2026-01-12T10:58:40.230Z
Learnt from: blublinsky
Repo: lightspeed-core/lightspeed-stack PR: 972
File: src/models/config.py:459-513
Timestamp: 2026-01-12T10:58:40.230Z
Learning: In lightspeed-core/lightspeed-stack, for Python files under src/models, when a user claims a fix is done but the issue persists, verify the current code state before accepting the fix. Steps: review the diff, fetch the latest changes, run relevant tests, reproduce the issue, search the codebase for lingering references to the original problem, confirm the fix is applied and not undone by subsequent commits, and validate with local checks to ensure the issue is resolved.

Applied to files:

  • src/models/config.py
📚 Learning: 2026-02-25T07:46:33.545Z
Learnt from: asimurka
Repo: lightspeed-core/lightspeed-stack PR: 1211
File: src/models/responses.py:8-16
Timestamp: 2026-02-25T07:46:33.545Z
Learning: In the Python codebase, requests.py should use OpenAIResponseInputTool as Tool while responses.py uses OpenAIResponseTool as Tool. This difference is intentional due to differing schemas for input vs output tools in llama-stack-api. Apply this distinction consistently to other models under src/models (e.g., ensure request-related tools use the InputTool variant and response-related tools use the ResponseTool variant). If adding new tools, choose the corresponding InputTool or Tool class based on whether the tool represents input or output, and document the rationale in code comments.

Applied to files:

  • src/models/config.py

@tisnik tisnik merged commit 6767256 into lightspeed-core:main Apr 7, 2026
23 of 25 checks passed
@major major deleted the rlsapi-v1-quota-enforcement branch April 7, 2026 16:30