Description
I noticed two issues with the Content Filtering behavior of Pydantic AI when used with Azure OpenAI:
- Inconsistent error handling for content filtering: Azure OpenAI uses two different types of content filters:
  - Prompt filter: Azure raises a `BadRequestError` (see PromptFilterBadRequestBody.txt)
  - Completion filter: the `finish_reason` on the response will be `content_filter` (see CompletionFilter.txt)

  (Reference: Azure OpenAI Content Filter documentation)

  This translates to two different errors in Pydantic AI:
  - For prompt filtering: `ModelHTTPError`
  - For completion filtering: `UnexpectedModelBehavior`. Here we don't even see the filter type.
- Lack of model-agnostic content filter handling: different providers surface content filtering in different ways (Vertex AI, for example, reports filtered content through its own mechanism), so the current content filter handling in Pydantic AI is not model-agnostic.
Proposed Solution
- Implement a dedicated content filter exception that can be used to handle only these specific cases. This exception should:
  - Contain filter type information (hate, sexual, violence, and self-harm) to help users identify problematic and sensitive content in their prompts
  - Include child exceptions (prompt filter and completion filter) so users can handle these cases separately if needed
  - Provide a consistent interface for handling content filtering errors
  - If relevant, make sure not to leak any private user information contained in the prompt or generation
- Ensure that Pydantic AI's behavior on content filters is consistent across different model providers, making the library truly model-agnostic.
These improvements would allow users to better handle content filtering scenarios, potentially by altering prompts or model inputs when content filtering is triggered.
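To make the request concrete, here is a minimal sketch of what the proposed exception hierarchy could look like. None of these class names exist in Pydantic AI today; they are hypothetical and only illustrate the shape of the feature (filter-type metadata, prompt/completion child exceptions, and no echoing of user content):

```python
class ContentFilterError(Exception):
    """Hypothetical: raised when a provider's content filter blocks a request or response."""

    def __init__(self, filter_types: list[str], provider: str):
        # filter_types: which categories triggered, e.g. "hate", "sexual",
        # "violence", "self-harm". Deliberately no prompt or completion text
        # is stored, to avoid leaking private user information.
        self.filter_types = filter_types
        self.provider = provider
        super().__init__(
            f"{provider} content filter triggered: {', '.join(filter_types)}"
        )


class PromptFilterError(ContentFilterError):
    """Hypothetical: the input prompt was blocked (Azure's BadRequestError path)."""


class CompletionFilterError(ContentFilterError):
    """Hypothetical: the generation was blocked (finish_reason == 'content_filter')."""


# Callers could then catch the base class to handle both cases uniformly,
# or catch a child class to handle prompt vs. completion filtering separately:
try:
    raise PromptFilterError(["hate", "self-harm"], provider="azure-openai")
except ContentFilterError as exc:
    blocked_categories = exc.filter_types  # ['hate', 'self-harm']
```

With a shape like this, each provider adapter (Azure OpenAI, Vertex AI, etc.) would translate its own filter signal into the same exception, which is what makes the handling model-agnostic.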
References
No response