
Missing thought_signature when using Gemini 3 pro preview #2137

@silvasw

Description

Please read this first

  • Have you read the docs? Agents SDK docs
  • Have you searched for related issues? Others may have faced similar issues.

Describe the bug

The Gemini 3 Pro Preview model (google/gemini-3-pro-preview) requires a `thought_signature` (analogous to a response id) to be echoed back on subsequent calls. When using Agent and Runner, the `thought_signature` is not injected into the next call of the agent loop. I tried using RunHooks to capture the full response from the model, but without success.

File "C:\Users\dom\anaconda3\envs\genai\Lib\site-packages\openai\_base_client.py", line 1594, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - [{'error': {'code': 400, 'message': 'Unable to submit request because function call `default_api:get_current_temperature` in the 2. content block is missing a `thought_signature`. Learn more: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/thought-signatures', 'status': 'INVALID_ARGUMENT'}}]
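Per the Vertex AI docs linked in the error, every function call replayed to the model must carry the signature it was issued with. A minimal stdlib sketch of that injection step (`inject_thought_signatures` is a hypothetical helper; the field names follow Google's `extra_content` shape shown in the repro output below):

```python
def inject_thought_signatures(assistant_message: dict, signatures: dict) -> dict:
    """Attach each captured thought_signature (keyed by tool-call id) to the
    matching tool call via Google's `extra_content` field."""
    for call in assistant_message.get("tool_calls", []):
        sig = signatures.get(call["id"])
        if sig is not None:
            call["extra_content"] = {"google": {"thought_signature": sig}}
    return assistant_message

# Hypothetical tool call captured from a previous model response
msg = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "function-call-123",
        "type": "function",
        "function": {"name": "get_current_temperature",
                     "arguments": '{"location": "London"}'},
    }],
}
msg = inject_thought_signatures(msg, {"function-call-123": "CsEJ...base64..."})
```

This is the step the Agents SDK currently skips: the signature arrives on the response but never makes it back onto the replayed tool-call message.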

Debug information

  • Agents SDK version: 0.6.1
  • Python version: 3.11.13
  • OpenAI version: 2.8.1

Repro steps

This snippet using the plain OpenAI client returns the thought_signature, but the equivalent setup fails when using Agent and Runner.

  • OpenAI library:
# Set up the (sync) OpenAI client pointing to the Google endpoint
client = OpenAI(
    api_key=creds,
    base_url=f"https://aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/global/endpoints/openapi",
)

prompt = """
If it's too hot or too cold in London, set the temperature to a comfortable level.
Make your own reasonable assumption for what 'comfortable' means and do not ask for clarification.
"""

messages = [
    {"role": "user", "content": prompt},
]

response1 = client.chat.completions.create(
    model=MODEL_NAME,
    messages=messages,
    tools=tools,
)

print(response1)

OUTPUT:

ChatCompletion(id='3Kstacq6MerChcIPi9qngAE', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=[ChatCompletionMessageFunctionToolCall(id='function-call-e37e486d-d21e-4203-a7d5-220fa5c27dac', function=Function(arguments='{"location":"London"}', name='get_current_temperature'), type='function', extra_content={'google': {'thought_signature': 'CsEJAePx/15yMy16yNn53ZP0RE9phQWHhzE3j+1x8uferf9Dy+JX2tyZBmZdKhWtPyWf+dsgYLFo3Q8Lt2jv+pmjePJCmsWkKicN4HH5YWF3BFRWzdvxHlQX8Gv+sRqG5dipCBC4+6MswHRuNmrkF9SZ7rNmHBIRhcsp0UvU9v78Fd3lag8VmB5bQSvTfQMZ4g1HEfBvCwW0BqWL6TJyBH/tPwEX4bQ29kOKxi6xvUfuj4wMcY81inYJ0xoDxTAeMrnAJitE8WQdrg3h+4QDuN7ZGp8u0L3lN1LLpX14VR7MDMBAmyvC39DIhVbTFLRWGb/O9O1kROE15jv5VQwBqUaBTbbmriynv8dpr9mBx3eqodvtBRUYKnG0LsrTXnurKP/gGT34wxuPOJGncyBN6puGBWgMDcpBWs5juvl4tlpca23x3IJjF2Ou2klgP38VxufwC3Vbqpd+dH8J5FZ/OmPNaL9nsUowgXxap8+QkNoU02rnKhIeykB7zZp/Vl4Ap/rJ1LVErz3aV/xec6v7NVBZyavwGsMM6Um9kg1YNWOLF2Fpock+oVJUnGNoEiLUqgIHHd5Ps3c90gFIL0ppKaoFQyTmS3fWX0wggQERqNkoL25tVC4Fa9lmYxMwNHtWZ7GmhfuWqp//m28NCPldN44dzwPATdW0BJDghj1uisFpKtuUXok437LT4ovzuTQ8iuegLM1pGVH+vwd/1KBHURnNe8US8m9E4dIQw9hFgA8ItG5COgXqlVkvPR2sLp0llQbUNDrKM5JwmyK8UKbrU3Un/tmvxSn0KrtBt3cLDVOKIUHRxQtoaI4T5eowJhbAxMGzXLjhScvNos9qM5OFmG1EgGo6ItCK9cvhgMVdk0ePLguzyFBsDTstFns/B8Q02dmeti27DqZWCxI3HJ41tDN4aJguu/4EPLlS55loW4dJVBh2g6IdT88N2uz6FQIrQULU10RbOGmGwNsIOnPBnb+lhD48U42HtgfEoFOuIU0GdHmxsIFzAb7b93/Gfyvy1MrZauZiFMZaiYxu43/Pg+VsGVsPze+zhtaxGBtr5gcLF6XDHENIKwSLW4/Cs4iBhxMtLPjR1tRbaFwLKo+tRqzv32xa1f/vowbahdLv42uFdY/NJuzqsCThoo9Z2E6S5wn6KbP/bGEgjL/KmF1S79qj9U1JCKVV8QKJlQacFdNBWSAHSqiTpIpJqVxiThf0DiW9jKQ5RuKUL8jDN9Ht07S4iAmES2Rg6VylasPibBanB99k71VBmlQR1SBmtGFxEqziPMZ5DWV4lrlgnYTCEM7rZcuaUgDwqDJioUTg2WW/l0M7CkPhM+cETcVEtJz/pY/QzmHQBMnxHRd9CQf1Sa3eZryJt+RygQe7OITwjvVB6izTdKRhBHAYZx3hjqCvrvwK9mso0jF5yXdnbGsGkQF9G79Ab144F+fpGJimcMnvaHI+PPfdndthVAgdLyYK8pTqum9JMco311eLJZBDPKEuMoHPutz
VBRkNuU1mrST3iJ+x/O1p/DsApBbVKwFrYkkPWy0Y1hRTVmmRhqAfLqjBWA8eGrnnO9CZJWt+C1bKdUIBEmLJ4G+MKnqA2YI9/VxcYSP+axKDIH/BPiU7NGYpA2E='}})]))], created=1764600796, model='google/gemini-3-pro-preview', object='chat.completion', service_tier=None, system_fingerprint='', usage=CompletionUsage(completion_tokens=11, prompt_tokens=70, total_tokens=400, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=319, rejected_prediction_tokens=None), prompt_tokens_details=None, extra_properties={'google': {'traffic_type': 'ON_DEMAND'}}))
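The plain-client path works only because the caller can echo the signature back by hand on the next request. A network-free sketch of assembling those second-turn messages (ids and values are placeholders shaped like the ChatCompletion output above; the actual follow-up `client.chat.completions.create(...)` call is omitted):

```python
import json

def build_followup_messages(messages: list, tool_calls: list, results: dict) -> list:
    """Append the assistant tool-call turn (keeping its thought_signature in
    extra_content) plus one tool-result message per call."""
    out = list(messages)
    out.append({"role": "assistant", "content": None, "tool_calls": tool_calls})
    for call in tool_calls:
        out.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(results[call["id"]]),
        })
    return out

# Placeholder tool call mirroring the shape of the response above
tool_calls = [{
    "id": "function-call-abc",
    "type": "function",
    "function": {"name": "get_current_temperature",
                 "arguments": '{"location":"London"}'},
    "extra_content": {"google": {"thought_signature": "CsEJ...base64..."}},
}]
followup = build_followup_messages(
    [{"role": "user", "content": "If it's too hot or too cold in London..."}],
    tool_calls,
    {"function-call-abc": {"temperature_c": 9}},
)
# `followup` is then passed as `messages=` to the second create() call
```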
  • Using OpenAI Agents SDK:
# Set up an async-compatible OpenAI client pointing to the Google endpoint
client = AsyncOpenAI(
    api_key=creds,
    base_url=f"https://aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/global/endpoints/openapi",
)

# Define the orchestrator agent that routes tasks
climatizer_agent = Agent[SharedContext](
    name="climatizer_agent",
    instructions=(
        "You are a smart home climatization agent. "
        "Your task is to monitor the temperature in a given city and adjust the thermostat to maintain comfort. "
        "Use the provided tools to read the current temperature and set a new thermostat value. "
        "If the temperature is too high or too low, adjust it to a reasonable 'comfortable' value, "
        "which you must assume without asking for clarification. "
        "Consider comfortable indoor temperature to be around 20–22°C (68–72°F) unless otherwise specified. "
        "Always perform actions using the tools and provide concise confirmation of the action taken."
    ),
    model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client),
    model_settings=ModelSettings(parallel_tool_calls=False, tool_choice='required', temperature=0, top_p=1.0),
    tools=[function_tool(get_current_temperature), function_tool(set_thermostat_temperature),]
)

prompt = """
If it's too hot or too cold in London, set the temperature to a comfortable level.
Make your own reasonable assumption for what 'comfortable' means and do not ask for clarification.
"""

invoke_inputs = [{"role": "user", "content": prompt}]


import asyncio


async def main():
    while True:
        try:
            with trace("Climatization"):
                run_result = await Runner.run(climatizer_agent, input=invoke_inputs)
                print(run_result)
        except KeyboardInterrupt:
            break


asyncio.run(main())

OUTPUT:

File "C:\Users\dom\anaconda3\envs\genai\Lib\site-packages\openai\_base_client.py", line 1594, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - [{'error': {'code': 400, 'message': 'Unable to submit request because function call `default_api:get_current_temperature` in the 2. content block is missing a `thought_signature`. Learn more: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/thought-signatures', 'status': 'INVALID_ARGUMENT'}}]

Expected behavior

The Agents SDK should capture the `thought_signature` returned with each tool call and re-inject it on the next model call in the loop, so the run completes without the 400 INVALID_ARGUMENT error.
