
Conversation

@steven10a
Collaborator

@steven10a steven10a commented Oct 30, 2025

Fixing safety_identifier implementation

  • Previously we were incorrectly adding safety_identifier as an HTTP header
  • Changed to use the supported safety_identifier parameter instead

Note: safety_identifier is only supported by the standard OpenAI Responses and Chat Completions endpoints; it is not implemented for Azure or third-party providers.
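As a minimal, hypothetical sketch of the parameter-based approach (the function name and the boolean flag below are illustrative assumptions, not the repo's actual code):

```python
SAFETY_IDENTIFIER = "openai-guardrails-python"  # value this PR sets

def add_safety_identifier(kwargs: dict, is_official_openai: bool) -> dict:
    """Attach safety_identifier as a request parameter, not an HTTP header.

    Only the standard OpenAI Responses / Chat Completions endpoints accept
    the parameter, so Azure and third-party clients must be excluded.
    """
    out = dict(kwargs)  # leave the caller's kwargs untouched
    if is_official_openai:
        out["safety_identifier"] = SAFETY_IDENTIFIER
    return out

# The result would then be splatted into a call such as
# client.chat.completions.create(**add_safety_identifier(kwargs, True)).
```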

Copilot AI review requested due to automatic review settings October 30, 2025 20:26

Copilot AI left a comment


Pull Request Overview

This PR replaces the header-based safety identifier approach with a parameter-based approach for tracking guardrails library usage with the OpenAI API. The change removes the _openai_utils module that injected headers and instead conditionally adds a safety_identifier parameter to API calls when using the official OpenAI API (excluding Azure and alternative providers).

Key changes:

  • Removed _openai_utils.py module containing header-based safety identifier logic
  • Added _supports_safety_identifier() helper function in three locations to detect compatible clients
  • Updated all OpenAI client instantiations to remove prepare_openai_kwargs() wrapper calls
  • Added conditional safety_identifier parameter to LLM API calls based on client compatibility

Reviewed Changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 5 comments.

Summary per file:

  • src/guardrails/_openai_utils.py: Deleted header-based safety identifier utilities module
  • src/guardrails/checks/text/llm_base.py: Added safety identifier support detection and parameter injection for LLM checks
  • src/guardrails/resources/chat/chat.py: Added safety identifier support detection and parameter injection for chat completions
  • src/guardrails/resources/responses/responses.py: Added safety identifier support detection and parameter injection for the responses API
  • src/guardrails/client.py: Removed prepare_openai_kwargs() calls from client initialization
  • src/guardrails/agents.py: Removed prepare_openai_kwargs() calls from default context creation
  • src/guardrails/runtime.py: Removed prepare_openai_kwargs() calls from the runtime default context
  • src/guardrails/evals/guardrail_evals.py: Removed prepare_openai_kwargs() calls from eval client creation
  • src/guardrails/checks/text/moderation.py: Removed prepare_openai_kwargs() calls from the moderation client
  • src/guardrails/utils/create_vector_store.py: Removed prepare_openai_kwargs() calls from the vector store client
  • tests/unit/test_safety_identifier.py: Added comprehensive tests for safety identifier support detection
  • tests/unit/test_openai_utils.py: Deleted tests for the removed header-based utilities
  • tests/unit/test_agents.py: Removed assertions checking for safety identifier headers


mock_client.base_url = "https://example.openai.azure.com/v1"
mock_client.__class__.__name__ = "AsyncAzureOpenAI"

# Azure detection happens via isinstance check, but we can test with class name

Copilot AI Oct 30, 2025


This comment is misleading. The test creates a real Azure client and tests it with isinstance(), which is correct for llm_base.py. However, the chat.py and responses.py implementations use class name string matching instead, not isinstance checks.

Suggested change
# Azure detection happens via isinstance check, but we can test with class name
# In llm_base.py, Azure detection happens via isinstance check (this test is correct).
# Note: chat.py and responses.py use class name string matching instead.
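The difference the reviewer is pointing at can be demonstrated with stand-in classes (no SDK import needed; the class names here only mirror the openai package for illustration):

```python
class AsyncOpenAI:                    # stand-in for openai.AsyncOpenAI
    pass

class AsyncAzureOpenAI(AsyncOpenAI):  # stand-in for openai.AsyncAzureOpenAI
    pass

def is_azure_by_isinstance(client) -> bool:
    # llm_base.py style: survives subclassing, but needs the SDK class imported.
    return isinstance(client, AsyncAzureOpenAI)

def is_azure_by_name(client) -> bool:
    # chat.py / responses.py style: no import needed, but misses subclasses.
    return type(client).__name__ == "AsyncAzureOpenAI"

class CustomAzureClient(AsyncAzureOpenAI):
    pass

client = CustomAzureClient()
print(is_azure_by_isinstance(client))  # True
print(is_azure_by_name(client))        # False: the name check misses the subclass
```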


Copilot AI left a comment


Pull Request Overview

Copilot reviewed 14 out of 14 changed files in this pull request and generated 1 comment.



Comment on lines 65 to 71
__all__ = [
"LLMConfig",
"LLMOutput",
"LLMErrorOutput",
"create_llm_check_fn",
"LLMOutput",
"create_error_result",
"create_llm_check_fn",
]

Copilot AI Oct 30, 2025


[nitpick] The __all__ list was reordered alphabetically, but this appears to be an incidental change unrelated to the PR's main purpose of refactoring the safety identifier. The previous ordering grouped related items together ('LLMConfig', 'LLMOutput', 'LLMErrorOutput', then functions). Consider reverting this change to keep the PR focused on its core objective.

Collaborator

@gabor-openai gabor-openai left a comment


LGTM ty!

Collaborator

@gabor-openai gabor-openai left a comment


LGTM TY!

@gabor-openai gabor-openai changed the title fixing safety identifier Set safety_identifier to openai-guardrails-python Oct 30, 2025
@gabor-openai gabor-openai merged commit 06fa983 into main Oct 30, 2025
3 checks passed
@gabor-openai gabor-openai deleted the dev/steven/safety_identifiers branch October 30, 2025 20:46