
Conversation

@yangcao77
Contributor

@yangcao77 yangcao77 commented Oct 6, 2025

Description

For issue https://issues.redhat.com/browse/RHDHPAI-1150

Adds a /v1/shields endpoint to list Llama Stack-enabled shields. Also adds tests for the new endpoint.
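
A hedged sketch of the JSON payload shape such an endpoint returns, using the shield identifiers from the OpenAPI examples in this PR (the exact fields come from Llama Stack's shield registration):

```python
import json

# Illustrative payload only; real entries come from whatever shields are
# registered in the running Llama Stack instance.
response_body = {
    "shields": [
        {
            "identifier": "lightspeed_question_validity-shield",
            "provider_resource_id": "lightspeed_question_validity-shield",
            "provider_id": "lightspeed_question_validity",
            "type": "shield",
            "params": {},
        }
    ]
}

print(json.dumps(response_body, indent=2))
```

With no shield registered, the same endpoint returns an empty `shields` list, which is what the second screenshot shows.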

Screenshot 2025-10-06 at 11 42 32 AM

Without a shield registered:

Screenshot 2025-10-06 at 11 53 45 AM

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • New Features
    • Added a new GET /v1/shields endpoint to list available shields.
    • Responses now support a shields list via a new ShieldsResponse payload.
  • Documentation
    • OpenAPI spec updated with the /v1/shields path, ShieldsResponse schema, and example payloads.
    • Action list extended to include get_shields; /v1/models examples optionally reference shields.
  • Tests
    • Added unit tests covering successful retrieval, configuration errors, and connection failures for the shields endpoint.
    • Updated router tests to include and validate the new shields route and prefix.

Signed-off-by: Stephanie <yangcao@redhat.com>
@coderabbitai
Contributor

coderabbitai bot commented Oct 6, 2025

Walkthrough

Adds a new GET /v1/shields API: OpenAPI spec, FastAPI endpoint, router inclusion, response model, and enum action value. Includes unit tests for the endpoint and router updates.

Changes

Cohort / File(s) — Summary

  • OpenAPI spec updates (docs/openapi.json): Added ShieldsResponse schema, new GET /v1/shields path/operation, extended Action enum with get_shields, and example shields payloads.
  • API endpoint and routing (src/app/endpoints/shields.py, src/app/routers.py): Implemented authenticated /v1/shields endpoint to list shields via Llama Stack client; added router inclusion with /v1 prefix.
  • Models (src/models/responses.py, src/models/config.py): Introduced ShieldsResponse model; extended Action enum with GET_SHIELDS = "get_shields".
  • Tests (tests/unit/app/endpoints/test_shields.py, tests/unit/app/test_routers.py): Added endpoint tests covering config states, success, and error paths; updated router tests to include shields router and counts (12→13).

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant C as Client
    participant R as FastAPI Router (/v1/shields)
    participant H as shields_endpoint_handler
    participant A as Auth
    participant CFG as AppConfig
    participant LSC as AsyncLlamaStackClient
    participant LS as Llama Stack API

    C->>R: GET /v1/shields
    R->>H: Route dispatch
    H->>A: Validate auth
    H->>CFG: Load/validate config
    alt Config invalid
        H-->>C: 500 Configuration error
    else Config valid
        H->>LSC: Construct client
        H->>LS: shields.list()
        alt Success
            LS-->>H: Shields data
            H-->>C: 200 ShieldsResponse {shields: [...]}
        else API connection error
            H-->>C: 500 Unable to connect to Llama Stack
        else Other exception
            H-->>C: 500 Unable to retrieve list of shields
        end
    end
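The error mapping in the diagram can be sketched as a plain-Python stand-in. Note that `APIConnectionError`, `HTTPException`, and `list_shields` below are simplified stand-ins for the real llama_stack_client/FastAPI types and the actual handler, not the implementation itself:

```python
class APIConnectionError(Exception):
    """Stand-in for llama_stack_client.APIConnectionError."""


class HTTPException(Exception):
    """Stand-in for fastapi.HTTPException."""

    def __init__(self, status_code: int, detail: dict):
        self.status_code = status_code
        self.detail = detail


def list_shields(config, fetch_shields):
    """Map the failure modes from the diagram to 500 responses.

    `fetch_shields` stands in for client.shields.list().
    """
    if config is None:
        raise HTTPException(500, {"response": "Configuration is not loaded"})
    try:
        return {"shields": fetch_shields()}
    except APIConnectionError:
        raise HTTPException(500, {"response": "Unable to connect to Llama Stack"})
    except Exception:
        raise HTTPException(500, {"response": "Unable to retrieve list of shields"})
```

Each branch of the `alt` blocks above corresponds to one of the three `raise` paths; the happy path returns the `ShieldsResponse`-shaped dict.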

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • manstis
  • tisnik

Poem

A hop to the shields, I bound with cheer,
New paths in the map now crystal-clear.
I nibble the routes, responses aligned,
Auth checked, configs read—errors confined.
With twitching nose I ship these yields—
Thump! A happy hare of shining shields. 🐇🛡️

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

  • Title Check ✅ Passed — The pull request title includes the relevant ticket identifier and succinctly describes the core change of creating the new /v1/shields endpoint, making it clear and directly aligned with the implemented feature without unnecessary detail.
  • Docstring Coverage ✅ Passed — Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.
  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.

Contributor

@tisnik tisnik left a comment


LGTM, nice work

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (5)
tests/unit/app/test_routers.py (1)

15-16: Router updates LGTM; consider avoiding brittle count checks.

Assertions for presence/prefix are good. The hard-coded len(app.routers) == 13 will break on any future router change. Prefer asserting membership and exact expected prefixes without relying on total count.

Example approach:

  • Assert the set of expected routers is a subset of app.get_routers()
  • Assert a mapping of {router: prefix} equals expected for those routers

Also applies to: 65-70, 86-91
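
A count-free version of those assertions might look like this sketch (the router names and prefixes here are illustrative assumptions, not the project's actual values):

```python
# Hypothetical (router, prefix) pairs; in the real test these would come
# from inspecting app.get_routers() / the FastAPI app's include_router calls.
actual = {
    ("models", "/v1"),
    ("shields", "/v1"),
    ("health", ""),
}

expected = {
    ("models", "/v1"),
    ("shields", "/v1"),
}

# Subset membership stays green when unrelated routers are added later,
# unlike a hard-coded len(...) == 13 assertion.
assert expected <= actual
```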

src/models/responses.py (1)

39-55: Consider a typed Shield model instead of raw dicts.

Improves validation and OpenAPI schema clarity.

You could define a Shield model and reference it here:

+from typing import Literal, Optional
+
+class Shield(BaseModel):
+    """Shield descriptor."""
+    identifier: str
+    provider_resource_id: str
+    provider_id: str
+    type: Literal["shield"] = "shield"
+    params: dict[str, Any] = Field(default_factory=dict)
+
 class ShieldsResponse(BaseModel):
     """Model representing a response to shields request."""
-
-    shields: list[dict[str, Any]] = Field(
+    shields: list[Shield] = Field(
         ...,
         description="List of shields available",
         examples=[
             {
                 "identifier": "lightspeed_question_validity-shield",
                 "provider_resource_id": "lightspeed_question_validity-shield",
                 "provider_id": "lightspeed_question_validity",
                 "type": "shield",
                 "params": {},
             }
         ],
     )
docs/openapi.json (1)

126-159: Augment /v1/shields responses with auth errors.

Since the route is authenticated/authorized, add 401 and 403 responses referencing UnauthorizedResponse and ForbiddenResponse for parity with other secured endpoints.

Also applies to: 3028-3055

src/app/endpoints/shields.py (1)

66-69: Avoid logging full config at info level.

Log only safe/high-level fields (e.g., mode, url present) and at debug level to reduce risk of leaking sensitive details.

-    logger.info("Llama stack config: %s", llama_stack_configuration)
+    logger.debug(
+        "Llama stack config loaded (use_as_library_client=%s, url_present=%s)",
+        getattr(llama_stack_configuration, "use_as_library_client", None),
+        bool(getattr(llama_stack_configuration, "url", None)),
+    )

Based on coding guidelines

tests/unit/app/endpoints/test_shields.py (1)

166-183: Other tests look fine; minor polish.

  • Good use of AsyncMock to simulate shields data and error paths.
  • Prefer a shared MOCK_AUTH tuple to keep consistency across tests.

As per coding guidelines

Also applies to: 241-310, 312-365

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f74a8be and 5a98473.

📒 Files selected for processing (7)
  • docs/openapi.json (3 hunks)
  • src/app/endpoints/shields.py (1 hunks)
  • src/app/routers.py (2 hunks)
  • src/models/config.py (1 hunks)
  • src/models/responses.py (1 hunks)
  • tests/unit/app/endpoints/test_shields.py (1 hunks)
  • tests/unit/app/test_routers.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (10)
src/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use absolute imports for internal modules (e.g., from auth import get_auth_dependency)

Files:

  • src/models/responses.py
  • src/app/routers.py
  • src/app/endpoints/shields.py
  • src/models/config.py
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: All modules start with descriptive module-level docstrings explaining purpose
Use logger = logging.getLogger(name) for module logging after import logging
Define type aliases at module level for clarity
All functions require docstrings with brief descriptions
Provide complete type annotations for all function parameters and return types
Use typing_extensions.Self in model validators where appropriate
Use modern union syntax (str | int) and Optional[T] or T | None consistently
Function names use snake_case with descriptive, action-oriented prefixes (get_, validate_, check_)
Avoid in-place parameter modification; return new data structures instead of mutating arguments
Use appropriate logging levels: debug, info, warning, error with clear messages
All classes require descriptive docstrings explaining purpose
Class names use PascalCase with conventional suffixes (Configuration, Error/Exception, Resolver, Interface)
Abstract base classes should use abc.ABC and @AbstractMethod for interfaces
Provide complete type annotations for all class attributes
Follow Google Python docstring style for modules, classes, and functions, including Args, Returns, Raises, Attributes sections as needed

Files:

  • src/models/responses.py
  • src/app/routers.py
  • tests/unit/app/test_routers.py
  • src/app/endpoints/shields.py
  • tests/unit/app/endpoints/test_shields.py
  • src/models/config.py
src/{models/**/*.py,configuration.py}

📄 CodeRabbit inference engine (CLAUDE.md)

src/{models/**/*.py,configuration.py}: Use @field_validator and @model_validator for custom validation in Pydantic models
Use precise type hints in configuration (e.g., Optional[FilePath], PositiveInt, SecretStr)

Files:

  • src/models/responses.py
  • src/models/config.py
src/models/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

src/models/**/*.py: Pydantic models: use BaseModel for data models and extend ConfigurationBase for configuration
Use @model_validator and @field_validator for Pydantic model validation

Files:

  • src/models/responses.py
  • src/models/config.py
src/app/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use standard FastAPI imports (from fastapi import APIRouter, HTTPException, Request, status, Depends) in FastAPI app code

Files:

  • src/app/routers.py
  • src/app/endpoints/shields.py
src/{app/**/*.py,client.py}

📄 CodeRabbit inference engine (CLAUDE.md)

Use async def for I/O-bound operations and external API calls

Files:

  • src/app/routers.py
  • src/app/endpoints/shields.py
tests/{unit,integration}/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

tests/{unit,integration}/**/*.py: Use pytest for all unit and integration tests
Do not use unittest in tests; pytest is the standard

Files:

  • tests/unit/app/test_routers.py
  • tests/unit/app/endpoints/test_shields.py
tests/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

tests/**/*.py: Use pytest-mock to create AsyncMock objects for async interactions in tests
Use the shared auth mock constant: MOCK_AUTH = ("mock_user_id", "mock_username", False, "mock_token") in tests

Files:

  • tests/unit/app/test_routers.py
  • tests/unit/app/endpoints/test_shields.py
src/app/endpoints/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

In API endpoints, raise FastAPI HTTPException with appropriate status codes for error handling

Files:

  • src/app/endpoints/shields.py
src/{models/config.py,configuration.py}

📄 CodeRabbit inference engine (CLAUDE.md)

src/{models/config.py,configuration.py}: All configuration uses Pydantic models extending ConfigurationBase
Configuration base models must set model_config with extra="forbid" to reject unknown fields

Files:

  • src/models/config.py
🧬 Code graph analysis (3)
src/app/routers.py (1)
tests/unit/app/test_routers.py (1)
  • include_router (33-48)
src/app/endpoints/shields.py (7)
src/authentication/__init__.py (1)
  • get_auth_dependency (14-52)
src/authorization/middleware.py (1)
  • authorize (111-122)
src/client.py (2)
  • AsyncLlamaStackClientHolder (18-55)
  • get_client (49-55)
src/configuration.py (2)
  • configuration (65-69)
  • llama_stack_configuration (79-83)
src/models/config.py (2)
  • config (138-144)
  • Action (327-367)
src/models/responses.py (1)
  • ShieldsResponse (39-54)
src/utils/endpoints.py (1)
  • check_configuration_loaded (68-80)
tests/unit/app/endpoints/test_shields.py (4)
src/app/endpoints/shields.py (1)
  • shields_endpoint_handler (41-96)
src/configuration.py (3)
  • configuration (65-69)
  • AppConfig (36-153)
  • init_from_dict (60-62)
tests/unit/utils/auth_helpers.py (1)
  • mock_authorization_resolvers (9-27)
src/client.py (1)
  • get_client (49-55)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests (ci)
🔇 Additional comments (2)
src/models/config.py (1)

361-362: Enum extension looks good; ensure policy wired up.

GET_SHIELDS addition is fine. Verify authorization rules/configs include this action for intended roles in environments (tests bypass auth).

src/app/routers.py (1)

8-9: Inclusion of shields router is correct.

Import and app.include_router(shields.router, prefix="/v1") align with tests and OpenAPI.

Also applies to: 31-31

Comment on lines +6 to +8
from fastapi import APIRouter, HTTPException, Request, status
from fastapi.params import Depends
from llama_stack_client import APIConnectionError

🛠️ Refactor suggestion | 🟠 Major

Import Depends from fastapi, not fastapi.params.

Aligns with project guidelines for FastAPI imports.

Apply this diff:

-from fastapi import APIRouter, HTTPException, Request, status
-from fastapi.params import Depends
+from fastapi import APIRouter, HTTPException, Request, status, Depends

As per coding guidelines

🤖 Prompt for AI Agents
In src/app/endpoints/shields.py around lines 6 to 8, the file currently imports
Depends from fastapi.params; update the import to bring Depends from fastapi
instead (i.e., remove fastapi.params and add Depends to the existing fastapi
import list) so all FastAPI symbols are imported consistently from fastapi per
project guidelines.

Comment on lines +15 to +25
async def test_shields_endpoint_handler_configuration_not_loaded(mocker):
    """Test the shields endpoint handler if configuration is not loaded."""
    mock_authorization_resolvers(mocker)

    # simulate state when no configuration is loaded
    mocker.patch(
        "app.endpoints.shields.configuration",
        return_value=mocker.Mock(),
    )
    mocker.patch("app.endpoints.shields.configuration", None)


⚠️ Potential issue | 🟠 Major

Fix configuration-not-loaded test patching.

Patch the module attribute directly; the first patch is redundant and incorrect.

Apply this diff:

@@
-    # simulate state when no configuration is loaded
-    mocker.patch(
-        "app.endpoints.shields.configuration",
-        return_value=mocker.Mock(),
-    )
-    mocker.patch("app.endpoints.shields.configuration", None)
+    # simulate state when no configuration is loaded
+    mocker.patch("app.endpoints.shields.configuration", None)
@@
-    with pytest.raises(HTTPException) as e:
-        await shields_endpoint_handler(request=request, auth=auth)
-        assert e.value.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
-        assert e.detail["response"] == "Configuration is not loaded"
+    with pytest.raises(HTTPException) as e:
+        await shields_endpoint_handler(request=request, auth=auth)
+    assert e.value.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
+    assert e.value.detail["response"] == "Configuration is not loaded"

Also consider using the shared MOCK_AUTH constant in place of ad-hoc tuples. As per coding guidelines

Also applies to: 34-38

🤖 Prompt for AI Agents
In tests/unit/app/endpoints/test_shields.py around lines 15-25 (and similarly
lines 34-38), the test incorrectly attempts to patch configuration by calling
mocker.patch with return_value and then patching with None; remove the
redundant/incorrect call and patch the module attribute directly to None
(mocker.patch.object or mocker.patch with target string and new=None) so the
shields handler sees configuration as not loaded, and replace ad-hoc auth tuples
with the shared MOCK_AUTH constant per guidelines.

Comment on lines +41 to +88
async def test_shields_endpoint_handler_improper_llama_stack_configuration(mocker):
    """Test the shields endpoint handler if Llama Stack configuration is not proper."""
    mock_authorization_resolvers(mocker)

    # configuration for tests
    config_dict = {
        "name": "test",
        "service": {
            "host": "localhost",
            "port": 8080,
            "auth_enabled": False,
            "workers": 1,
            "color_log": True,
            "access_log": True,
        },
        "llama_stack": {
            "api_key": "test-key",
            "url": "http://test.com:1234",
            "use_as_library_client": False,
        },
        "user_data_collection": {
            "transcripts_enabled": False,
        },
        "mcp_servers": [],
        "customization": None,
        "authorization": {"access_rules": []},
        "authentication": {"module": "noop"},
    }
    cfg = AppConfig()
    cfg.init_from_dict(config_dict)

    mocker.patch(
        "app.endpoints.shields.configuration",
        return_value=None,
    )

    request = Request(
        scope={
            "type": "http",
            "headers": [(b"authorization", b"Bearer invalid-token")],
        }
    )
    auth = ("test_user", "token", {})
    with pytest.raises(HTTPException) as e:
        await shields_endpoint_handler(request=request, auth=auth)
        assert e.value.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
        assert e.detail["response"] == "Llama stack is not configured"


⚠️ Potential issue | 🟠 Major

Test is inconsistent with endpoint behavior; choose: adjust or remove.

The endpoint does not emit "Llama stack is not configured", and the patch uses return_value incorrectly. Either:

  • Implement a pre-check in the endpoint to raise that specific error, or
  • Update/remove this test to reflect actual behavior.

I can provide a patch either way; which path do you prefer?

Comment on lines +91 to +133
async def test_shields_endpoint_handler_configuration_loaded(mocker):
    """Test the shields endpoint handler if configuration is loaded."""
    mock_authorization_resolvers(mocker)

    # configuration for tests
    config_dict = {
        "name": "foo",
        "service": {
            "host": "localhost",
            "port": 8080,
            "auth_enabled": False,
            "workers": 1,
            "color_log": True,
            "access_log": True,
        },
        "llama_stack": {
            "api_key": "xyzzy",
            "url": "http://x.y.com:1234",
            "use_as_library_client": False,
        },
        "user_data_collection": {
            "feedback_enabled": False,
        },
        "customization": None,
        "authorization": {"access_rules": []},
        "authentication": {"module": "noop"},
    }
    cfg = AppConfig()
    cfg.init_from_dict(config_dict)

    request = Request(
        scope={
            "type": "http",
            "headers": [(b"authorization", b"Bearer invalid-token")],
        }
    )
    auth = ("test_user", "token", {})

    with pytest.raises(HTTPException) as e:
        await shields_endpoint_handler(request=request, auth=auth)
        assert e.value.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
        assert e.detail["response"] == "Unable to connect to Llama Stack"


⚠️ Potential issue | 🟠 Major

Mock the client to trigger the intended connection error path.

Without mocking, get_client() likely raises RuntimeError (not initialized), not APIConnectionError.

Apply this diff to hit the APIConnectionError branch deterministically:

@@
-    with pytest.raises(HTTPException) as e:
-        await shields_endpoint_handler(request=request, auth=auth)
-        assert e.value.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
-        assert e.detail["response"] == "Unable to connect to Llama Stack"
+    mock_client = mocker.AsyncMock()
+    mock_client.shields.list.side_effect = APIConnectionError(request=None)
+    mocker.patch("app.endpoints.shields.AsyncLlamaStackClientHolder").return_value.get_client.return_value = mock_client
+
+    with pytest.raises(HTTPException) as e:
+        await shields_endpoint_handler(request=request, auth=auth)
+    assert e.value.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
+    assert e.value.detail["response"] == "Unable to connect to Llama Stack"
🤖 Prompt for AI Agents
In tests/unit/app/endpoints/test_shields.py around lines 91–133, the test
currently does not mock the Llama Stack client and therefore triggers a
RuntimeError path instead of the APIConnectionError branch; also the assertions
are placed inside the pytest.raises context after the await so they never run.
Fix by mocking/patching the module function that returns the client (e.g.,
get_client or the client factory used by shields_endpoint_handler) to raise the
specific APIConnectionError when called, call shields_endpoint_handler inside
pytest.raises(HTTPException) to capture the exception, and move the assertions
about e.value.status_code and e.value.detail to after the context manager so
they execute. Ensure the mock targets the correct import path used by the
handler and raise the exact APIConnectionError type with the expected message.

@yangcao77
Copy link
Contributor Author

@tisnik thanks for the quick review!
if you don't mind, can you also help merge the PR? I do not have permission to do so.

@tisnik tisnik merged commit da80226 into lightspeed-core:main Oct 6, 2025
18 of 19 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Oct 7, 2025
15 tasks
@coderabbitai coderabbitai bot mentioned this pull request Nov 14, 2025
18 tasks
