5 changes: 4 additions & 1 deletion run.yaml
@@ -131,7 +131,10 @@ server:
   tls_cafile: null
   tls_certfile: null
   tls_keyfile: null
-shields: []
+shields:
+- shield_id: llama-guard-shield
+  provider_id: llama-guard
+  provider_shield_id: "gpt-3.5-turbo" # Model to use for safety checks
 vector_dbs:
 - vector_db_id: my_knowledge_base
   embedding_model: sentence-transformers/all-mpnet-base-v2
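Every config in this PR registers the shield with the same three keys. A minimal pre-flight check of that shape can be sketched as follows (the `validate_shields` helper and the sample list are hypothetical illustrations, not part of this PR):

```python
def validate_shields(shields: list[dict]) -> bool:
    """Check each shield entry carries the keys the run.yaml configs use."""
    required = {"shield_id", "provider_id", "provider_shield_id"}
    for entry in shields:
        missing = required - entry.keys()
        if missing:
            raise ValueError(
                f"shield {entry.get('shield_id')!r} is missing {sorted(missing)}"
            )
    return True

# Mirrors the shields block added to run.yaml above.
shields = [
    {
        "shield_id": "llama-guard-shield",
        "provider_id": "llama-guard",
        "provider_shield_id": "gpt-3.5-turbo",
    }
]
print(validate_shields(shields))  # True
```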
5 changes: 4 additions & 1 deletion tests/e2e/configs/run-azure.yaml
@@ -120,7 +120,10 @@ server:
   tls_cafile: null
   tls_certfile: null
   tls_keyfile: null
-shields: []
+shields:
+- shield_id: llama-guard-shield
+  provider_id: llama-guard
+  provider_shield_id: "gpt-4o-mini"
 models:
 - model_id: gpt-4o-mini
   model_type: llm
5 changes: 4 additions & 1 deletion tests/e2e/configs/run-ci.yaml
@@ -131,7 +131,10 @@ server:
   tls_cafile: null
   tls_certfile: null
   tls_keyfile: null
-shields: []
+shields:
+- shield_id: llama-guard-shield
+  provider_id: llama-guard
+  provider_shield_id: "gpt-4-turbo"
 vector_dbs:
 - vector_db_id: my_knowledge_base
   embedding_model: sentence-transformers/all-mpnet-base-v2
5 changes: 4 additions & 1 deletion tests/e2e/configs/run-rhaiis.yaml
@@ -123,7 +123,10 @@ server:
   tls_cafile: null
   tls_certfile: null
   tls_keyfile: null
-shields: []
+shields:
+- shield_id: llama-guard-shield
+  provider_id: llama-guard
+  provider_shield_id: "meta-llama/Llama-3.1-8B-Instruct"
 models:
 - metadata:
     embedding_dimension: 768 # Depends on chosen model
18 changes: 17 additions & 1 deletion tests/e2e/features/info.feature
@@ -37,7 +37,7 @@ Feature: Info tests
     And The body of the response has proper model structure


-  Scenario: Check if models endpoint is working
+  Scenario: Check if models endpoint reports error when llama-stack in unreachable
⚠️ Potential issue | 🟡 Minor

Fix grammar error in scenario title.

The scenario title contains a grammar error: "llama-stack in unreachable" should be "llama-stack is unreachable".

Apply this diff:

-  Scenario: Check if models endpoint reports error when llama-stack in unreachable
+  Scenario: Check if models endpoint reports error when llama-stack is unreachable

     Given The system is in default state
     And The llama-stack connection is disrupted
     When I access REST API endpoint "models" using HTTP GET method
@@ -47,6 +47,22 @@ Feature: Info tests
       {"detail": {"response": "Unable to connect to Llama Stack", "cause": "Connection error."}}
       """

+  Scenario: Check if shields endpoint is working
+    Given The system is in default state
+    When I access REST API endpoint "shields" using HTTP GET method
+    Then The status code of the response is 200
+    And The body of the response has proper shield structure
+
+
+  Scenario: Check if shields endpoint reports error when llama-stack in unreachable
⚠️ Potential issue | 🟡 Minor

Fix grammar error in scenario title.

The scenario title contains the same grammar error: "llama-stack in unreachable" should be "llama-stack is unreachable".

Apply this diff:

-  Scenario: Check if shields endpoint reports error when llama-stack in unreachable
+  Scenario: Check if shields endpoint reports error when llama-stack is unreachable

+    Given The system is in default state
+    And The llama-stack connection is disrupted
+    When I access REST API endpoint "shields" using HTTP GET method
+    Then The status code of the response is 500
+    And The body of the response is the following
+      """
+      {"detail": {"response": "Unable to connect to Llama Stack", "cause": "Connection error."}}
+      """
+
   Scenario: Check if metrics endpoint is working
     Given The system is in default state
34 changes: 34 additions & 0 deletions tests/e2e/features/steps/info.py
@@ -63,3 +63,37 @@ def check_model_structure(context: Context) -> None:
     assert (
         llm_model["identifier"] == f"{expected_provider}/{expected_model}"
     ), f"identifier should be '{expected_provider}/{expected_model}'"


+@then("The body of the response has proper shield structure")
+def check_shield_structure(context: Context) -> None:
+    """Check that the first shield has the correct structure and required fields."""
+    response_json = context.response.json()
+    assert response_json is not None, "Response is not valid JSON"
+
+    assert "shields" in response_json, "Response missing 'shields' field"
+    shields = response_json["shields"]
+    assert len(shields) > 0, "Response has empty list of shields"
+
+    # Find first shield
+    found_shield = None
+    for shield in shields:
+        if shield.get("type") == "shield":
+            found_shield = shield
+            break
+
+    assert found_shield is not None, "No shield found in response"
+
+    expected_model = context.default_model
+
+    # Validate structure and values
+    assert found_shield["type"] == "shield", "type should be 'shield'"
+    assert (
+        found_shield["provider_id"] == "llama-guard"
+    ), "provider_id should be 'llama-guard'"
+    assert (
+        found_shield["provider_resource_id"] == expected_model
+    ), f"provider_resource_id should be '{expected_model}'"
+    assert (
+        found_shield["identifier"] == "llama-guard-shield"
+    ), "identifier should be 'llama-guard-shield'"
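The checks in `check_shield_structure` can be exercised without a running server by feeding them a hand-built response. A minimal sketch, where the dict below is a hypothetical stand-in for `context.response.json()` and `"gpt-4o-mini"` stands in for `context.default_model`:

```python
# Hypothetical stand-in for the JSON body returned by GET /shields.
response_json = {
    "shields": [
        {
            "type": "shield",
            "identifier": "llama-guard-shield",
            "provider_id": "llama-guard",
            "provider_resource_id": "gpt-4o-mini",
        }
    ]
}
expected_model = "gpt-4o-mini"  # stands in for context.default_model

shields = response_json["shields"]
assert len(shields) > 0, "Response has empty list of shields"

# Same selection logic as the step: first entry whose type is "shield".
found_shield = next((s for s in shields if s.get("type") == "shield"), None)
assert found_shield is not None, "No shield found in response"
assert found_shield["provider_id"] == "llama-guard"
assert found_shield["provider_resource_id"] == expected_model
assert found_shield["identifier"] == "llama-guard-shield"
print("shield structure OK")
```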