Description
Confirm this is a feature request for the Python library and not the underlying OpenAI API.
- This is a feature request for the Python library
Describe the feature or improvement you're requesting
Related issue: #1700 (solved, but only for `LengthFinishReasonError`)
Situation
When calling `AsyncOpenAI(...).beta.chat.completions.parse(..., response_format=SomePydanticModel)`, the OpenAI library raises `ContentFilterFinishReasonError` when `finish_reason == "content_filter"`, without providing any information about what the response contained.
Most of the time, a `BadRequestError` is raised when a content filter is applied. In our case, an image of a self-harm text written on a Post-it note triggers the `ContentFilterFinishReasonError` instead.
Assumption: the image itself passes the input filter, but the text the OpenAI LLM generates afterwards triggers the output content filter.
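For context, the check that raises this error can be sketched as follows. This is a simplified illustration with stub exception classes, not the actual SDK source; note that after #1700 the length case carries the completion, while the content-filter case carries nothing:

```python
# Simplified sketch of the finish_reason check in parse_chat_completion.
# The stub classes below only illustrate the shapes; they are not the SDK's.
class LengthFinishReasonError(Exception):
    def __init__(self, *, completion):
        # Since #1700, the completion object is attached for inspection.
        super().__init__("length limit reached")
        self.completion = completion


class ContentFilterFinishReasonError(Exception):
    def __init__(self):
        # No completion, usage, or filter details are attached.
        super().__init__("content filter triggered")


def check_finish_reason(completion: dict) -> None:
    """Raise the matching error for a non-'stop' finish_reason."""
    for choice in completion["choices"]:
        if choice["finish_reason"] == "length":
            raise LengthFinishReasonError(completion=completion)
        if choice["finish_reason"] == "content_filter":
            raise ContentFilterFinishReasonError()
```

The asymmetry is visible here: only the `length` branch preserves the response for the caller.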
Complication
Because there is no way to retrieve any information about the response, I cannot programmatically save information about the context. For example, I cannot access and track information from the usage object in the chat completion response, or get details about which content filter was triggered (self_harm, sexual, etc.).
I also need to treat `BadRequestError` fundamentally differently from `ContentFilterFinishReasonError`, even though I would like to handle both cases the same way.
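The divergent handling looks roughly like this. The classes are self-contained stubs for illustration (in the real SDK, `BadRequestError` exposes the HTTP response), and `log_llm_failure` is a hypothetical helper, not library API:

```python
# Sketch of the asymmetric error handling this forces on callers.
class BadRequestError(Exception):
    def __init__(self, message, *, response):
        super().__init__(message)
        self.response = response  # httpx.Response in the real SDK


class ContentFilterFinishReasonError(Exception):
    pass


def log_llm_failure(exc: Exception) -> dict:
    """Hypothetical helper: extract whatever context is available for tracking."""
    if isinstance(exc, BadRequestError):
        # Full response available: status, body, and filter details can be logged.
        return {"kind": "bad_request", "body": exc.response}
    if isinstance(exc, ContentFilterFinishReasonError):
        # Nothing to extract: no usage, no filter category, no completion.
        return {"kind": "content_filter", "body": None}
    raise exc
```

Both branches describe "the content filter fired", yet only one of them can record anything useful.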
Desired behavior
As a library user, I always want to know the details of responses from LLM calls. For a `ContentFilterFinishReasonError`, I would expect to get the information about which exact filter was triggered.
I see three potential solutions:
- Include the content filter details in the `ContentFilterFinishReasonError` class
- Return the response as an attribute on the exception object, similar to `BadRequestError`
- Raise `BadRequestError` instead of `ContentFilterFinishReasonError` (since this is an issue with the LLM answer and not with the request, this might be misleading)
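The first two options could be combined in a shape that mirrors the `LengthFinishReasonError` fix from #1700. This is a hypothetical sketch; the attribute names (`completion`, `filter_results`) are assumptions, not the SDK's API:

```python
# Hypothetical sketch: attach the raw completion (and, when present, the
# provider's content-filter details) to the exception, mirroring #1700.
class ContentFilterFinishReasonError(Exception):
    def __init__(self, *, completion, filter_results=None):
        super().__init__(
            "Could not parse response: the content filter was triggered"
        )
        self.completion = completion          # raw ChatCompletion, incl. usage
        self.filter_results = filter_results  # e.g. {"self_harm": "high"}
```

With something like this, a caller could log `e.completion.usage` and `e.filter_results` in the `except` block instead of losing the context entirely.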
OpenAI SDK Version: 1.109.1
Code location
- File and line: `openai/lib/_parsing/_completions.py`, line 103
- Function: parse_chat_completion
Additional context
No response