Closed
Labels
core [Component] This issue is related to the core interface and implementation
Description
Describe the bug
After calling a tool, the final model message in a session intermittently has no content parts.
For example:
```python
Event(model_version='gemini-2.5-flash-lite', content=Content(role='model'))
```
To Reproduce
Steps to reproduce the behavior:
- Install google-adk
- Run:
```python
###############################################
# SETUP CODE
from typing import Any, Dict

from google.adk.agents import LlmAgent
from google.adk.models.google_llm import Gemini
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.tools.tool_context import ToolContext
from google.genai import types

retry_config = types.HttpRetryOptions(
    attempts=5,  # Maximum retry attempts
    exp_base=7,  # Exponential backoff base
    initial_delay=1,
    http_status_codes=[429, 500, 503, 504],  # Retry on these HTTP errors
)

USER_NAME_SCOPE_LEVELS = ("temp", "user", "app")

# This demonstrates how tools can read from session state.
def retrieve_userinfo(tool_context: ToolContext) -> Dict[str, Any]:
    """Tool to retrieve user name and country from session state."""
    # Read from session state
    user_name = tool_context.state.get("user:name", "Username not found")
    country = tool_context.state.get("user:country", "Country not found")
    return {"status": "success", "user_name": user_name, "country": country}

# Configuration
APP_NAME = "default"
USER_ID = "default"

# Create an agent with session state tools
root_agent = LlmAgent(
    model=Gemini(model="gemini-2.5-flash-lite", retry_options=retry_config),
    name="text_chat_bot",
    description="""A text chatbot.
    Tools for managing user context:
    * To fetch username and country when required use `retrieve_userinfo` tool.
    """,
    tools=[retrieve_userinfo],  # Provide the tools to the agent
)

# Set up session service and runner
session_service = InMemorySessionService()
runner = Runner(agent=root_agent, session_service=session_service, app_name=APP_NAME)

# Inject user info into session state
session = await runner.session_service.create_session(
    app_name=APP_NAME,
    user_id=USER_ID,
    state={"user:name": "Will", "user:country": "USA"},
)

idx = 0  # we'll use this below
###############################################
# RUN CODE
# You may need to run the following code multiple times to reproduce the issue.
# Session cycling is built in so you don't accidentally contaminate new attempts.
# It's easiest to run this in a Jupyter notebook with the setup code in one cell
# and the run code in another.
idx += 1
session = await session_service.create_session(
    app_name=APP_NAME, user_id=USER_ID, session_id=f"new-session-{idx}"  # this may take several attempts
)

query = types.Content(
    role="user",
    parts=[types.Part(text="Could you use a tool call to recall my name?")],
)
async for event in runner.run_async(
    user_id=USER_ID, session_id=session.id, new_message=query
):
    print(f"{event.content.role}:")
    if event.content.parts is not None:
        for part in event.content.parts:
            if part.function_call is not None:
                print(f"FUNCTION CALL: {part.function_call.name}: {part.function_call.args}")
            elif part.function_response is not None:
                print(f"FUNCTION RESPONSE: {part.function_response.name}: {part.function_response.response}")
            else:
                print(f"{part.text}")
    else:
        print("TARGET BUG BEHAVIOR IDENTIFIED: NO CONTENT GENERATED")
        print(event.content)
    print("-" * 15)
###############################################
```
Expected behavior
We expect the final message from the model to output the saved user name and country:
```
Sending out request, model: gemini-2.5-flash-lite, backend: GoogleLLMVariant.GEMINI_API, stream: False
HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent "HTTP/1.1 200 OK"
Response received from the model.
Warning: there are non-text parts in the response: ['function_call'], returning concatenated text result from text parts. Check the full candidates.content.parts accessor to get the full model response.
Sending out request, model: gemini-2.5-flash-lite, backend: GoogleLLMVariant.GEMINI_API, stream: False
model:
FUNCTION CALL: retrieve_userinfo: {}
---------------
user:
FUNCTION RESPONSE: retrieve_userinfo: {'status': 'success', 'user_name': 'Will', 'country': 'USA'}
---------------
HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent "HTTP/1.1 200 OK"
Response received from the model.
model:
I'm sorry, I cannot recall your name, but I can see that you are Will and you are from the USA.
---------------
```
Actual behavior
Roughly 50% of the time, the final message in the event stream has no parts. See the example output below:
```
Sending out request, model: gemini-2.5-flash-lite, backend: GoogleLLMVariant.GEMINI_API, stream: False
HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent "HTTP/1.1 200 OK"
Response received from the model.
Warning: there are non-text parts in the response: ['function_call'], returning concatenated text result from text parts. Check the full candidates.content.parts accessor to get the full model response.
Sending out request, model: gemini-2.5-flash-lite, backend: GoogleLLMVariant.GEMINI_API, stream: False
model:
FUNCTION CALL: retrieve_userinfo: {}
---------------
user:
FUNCTION RESPONSE: retrieve_userinfo: {'status': 'success', 'user_name': 'Will', 'country': 'USA'}
---------------
HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent "HTTP/1.1 200 OK"
Response received from the model.
model:
TARGET BUG BEHAVIOR IDENTIFIED: NO CONTENT GENERATED
parts=None role='model'
---------------
```
Desktop (please complete the following information):
- OS: macOS
- Python version (`python -V`): 3.13
- ADK version (`pip show google-adk`): 1.18
Model Information:
- Are you using LiteLLM: No (or is that what the 'lite' in gemini-2.5-flash-lite means?)
- Which model is being used (e.g. gemini-2.5-pro): gemini-2.5-flash-lite
Additional context
If I switch away from gemini-2.5-flash-lite, the issue appears to occur either at a much lower rate or not at all.