🔴 Required Information
Please ensure all items in this section are completed to allow for efficient
triaging. Requests without complete information may be rejected / deprioritized.
If an item is not applicable to you - please mark it as N/A
Describe the Bug:
Sending a message to a Gemini model with an unsupported file type raises an error, while the LiteLlm model falls back to text extraction. I would like the Gemini model to behave the same way.
My concern is not the API restriction itself, but the inconsistency at the ADK abstraction layer.
LiteLlm currently falls back to text extraction when unsupported file MIME types are provided, while Gemini raises a raw API error.
This means application developers need provider-specific handling even though both models are exposed through the same ADK interface.
I think ADK should either:
- consistently raise provider errors for unsupported MIME types across all models, or
- consistently provide fallback behavior across all models.
Right now the behavior differs depending on the backend provider, which feels like an abstraction leak.
Personally, I believe fallback behavior would provide a better developer experience.
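To illustrate the inconsistency, here is a minimal self-contained sketch with both backends simulated. All names here are stand-ins (the real classes live in google-adk / google-genai); only the `ClientError`-on-unsupported-MIME behavior mirrors the stacktrace below.

```python
class ClientError(Exception):
    """Stand-in for google.genai.errors.ClientError."""


def gemini_send(mime_type: str) -> str:
    """Simulates the Gemini backend: unsupported MIME types raise a raw 400."""
    if mime_type.startswith("application/vnd."):
        raise ClientError("400 Bad Request: unsupported mimeType")
    return "ok"


def litellm_send(mime_type: str) -> str:
    """Simulates the LiteLlm backend: unsupported MIME types fall back to text."""
    if mime_type.startswith("application/vnd."):
        return "fallback: extracted text"
    return "ok"


PPTX = "application/vnd.openxmlformats-officedocument.presentationml.presentation"

# Same input, two different behaviors behind the same ADK interface:
try:
    gemini_send(PPTX)
except ClientError as e:
    print("gemini:", e)
print("litellm:", litellm_send(PPTX))
```

Application code therefore has to know which provider backs the agent before it can decide whether to wrap `run_async` in a try/except.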
Steps to Reproduce:
Please provide a numbered list of steps to reproduce the behavior:
1. Install the ADK: `uv add google-adk`
2. Call `runner.run_async` with a `file_data` part whose MIME type Gemini does not support:

   ```python
   runner.run_async(
       new_message=types.Content(
           role="user",
           parts=[
               types.Part(
                   file_data=types.FileData(
                       display_name="example.pptx",
                       mime_type="application/vnd.openxmlformats-officedocument.presentationml.presentation",
                       file_uri="...",
                   ),
               )
           ],
       )
   )
   ```

3. The call fails with `google.genai.errors.ClientError: 400 Bad Request` (full stacktrace below).
Expected Behavior:
A clear and concise description of what you expected to happen.
Unsupported file types should fall back to text, matching the LiteLlm model's behavior.
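A rough sketch of the fallback I would expect inside the Gemini model wrapper: unsupported file parts are downgraded to extracted text before the request is sent. Everything here is hypothetical — `extract_text` stands in for whatever converter LiteLlm uses, and the supported set is illustrative, not the authoritative Gemini list.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative subset only; not the real Gemini allowlist.
GEMINI_SUPPORTED = {"application/pdf", "text/plain", "image/png", "image/jpeg"}


@dataclass
class Part:
    """Simplified stand-in for google.genai.types.Part."""
    mime_type: Optional[str] = None
    text: Optional[str] = None


def extract_text(part: Part) -> str:
    # Placeholder converter; a real one would parse the file contents.
    return f"[extracted text from {part.mime_type} file]"


def with_fallback(part: Part) -> Part:
    """Downgrade an unsupported file part to a plain-text part."""
    if part.mime_type and part.mime_type not in GEMINI_SUPPORTED:
        return Part(text=extract_text(part))
    return part
```

With something like this applied before the API call, a `.pptx` part would reach Gemini as text instead of triggering the 400.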
Observed Behavior:
Stacktrace:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/runners.py", line 562, in run_async
| async for event in agen:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/runners.py", line 550, in _run_with_trace
| async for event in agen:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/runners.py", line 779, in _exec_with_plugin
| async for event in agen:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/runners.py", line 539, in execute
| async for event in agen:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/agents/base_agent.py", line 294, in run_async
| async for event in agen:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/agents/llm_agent.py", line 468, in _run_async_impl
| async for event in agen:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/flows/llm_flows/base_llm_flow.py", line 365, in run_async
| async for event in agen:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/flows/llm_flows/base_llm_flow.py", line 442, in _run_one_step_async
| async for llm_response in agen:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/flows/llm_flows/base_llm_flow.py", line 829, in _call_llm_async
| async for event in agen:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/flows/llm_flows/base_llm_flow.py", line 813, in _call_llm_with_tracing
| async for llm_response in agen:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/flows/llm_flows/base_llm_flow.py", line 1061, in _run_and_handle_error
| raise model_error
| File "/app/.venv/lib/python3.10/site-packages/google/adk/flows/llm_flows/base_llm_flow.py", line 1047, in _run_and_handle_error
| async for response in agen:
| File "/app/.venv/lib/python3.10/site-packages/google/adk/models/google_llm.py", line 262, in generate_content_async
| raise ce
| File "/app/.venv/lib/python3.10/site-packages/google/adk/models/google_llm.py", line 210, in generate_content_async
| responses = await self.api_client.aio.models.generate_content_stream(
| File "/app/.venv/lib/python3.10/site-packages/google/genai/models.py", line 7182, in generate_content_stream
| response = await self._generate_content_stream(
| File "/app/.venv/lib/python3.10/site-packages/google/genai/models.py", line 5925, in _generate_content_stream
| response_stream = await self._api_client.async_request_streamed(
| File "/app/.venv/lib/python3.10/site-packages/google/genai/_api_client.py", line 1438, in async_request_streamed
| response = await self._async_request(http_request=http_request, stream=True)
| File "/app/.venv/lib/python3.10/site-packages/google/genai/_api_client.py", line 1354, in _async_request
| return await self._async_retry( # type: ignore[no-any-return]
| File "/app/.venv/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 111, in __call__
| do = await self.iter(retry_state=retry_state)
| File "/app/.venv/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 153, in iter
| result = await action(retry_state)
| File "/app/.venv/lib/python3.10/site-packages/tenacity/_utils.py", line 99, in inner
| return call(*args, **kwargs)
| File "/app/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 420, in exc_check
| raise retry_exc.reraise()
| File "/app/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 187, in reraise
| raise self.last_attempt.result()
| File "/root/.local/share/uv/python/cpython-3.10.20-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 451, in result
| return self.__get_result()
| File "/root/.local/share/uv/python/cpython-3.10.20-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
| raise self._exception
| File "/app/.venv/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 114, in __call__
| result = await fn(*args, **kwargs)
| File "/app/.venv/lib/python3.10/site-packages/google/genai/_api_client.py", line 1270, in _async_request_once
| await errors.APIError.raise_for_async_response(response)
| File "/app/.venv/lib/python3.10/site-packages/google/genai/errors.py", line 203, in raise_for_async_response
| await cls.raise_error_async(status_code, response_json, response)
| File "/app/.venv/lib/python3.10/site-packages/google/genai/errors.py", line 225, in raise_error_async
| raise ClientError(status_code, response_json, response)
| google.genai.errors.ClientError: 400 Bad Request. {'message': '{\n "error": {\n "code": 400,\n "message": "Unable to submit request because it has a mimeType parameter with value application/vnd.openxmlformats-officedocument.presentationml.presentation, which is not supported. Update the mimeType and try again. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini",\n "status": "INVALID_ARGUMENT"\n }\n}\n', 'status': 'Bad Request'}
+------------------------------------
Environment Details:
- ADK Library Version (pip show google-adk): 1.15.1
- Desktop OS: Linux (AL2023)
- Python Version (python -V): 3.10
Model Information:
- Are you using LiteLLM: No
- Which model is being used: Gemini-3-Flash
🟡 Optional Information
Screenshots / Video:
If applicable, add screenshots or screen recordings to help explain
your problem.
Additional Context:
Add any other context about the problem here.
Minimal Reproduction Code:
Please provide a code snippet or a link to a Gist/repo that isolates the issue.
How often has this issue occurred?:
Additional Content:
related: #5471
I want to add more information to that issue; since I cannot reopen it, I am re-submitting it here.
Thank you.