[BUG] TypeError: Object of type File is not JSON serializable when using input_files with gemma3 #4498

@johnvan7

Description

When using the crewai_files module to pass File objects into crew.kickoff(input_files=...), execution fails during the LLM call: the underlying httpx client tries to serialize the File object into the JSON payload for the OpenAI-compatible endpoint (Ollama/Gemma 3) and raises a TypeError.

Steps to Reproduce

1. Define a Crew with an agent using a local LLM (e.g., Gemma 3).
2. Create a File object using crewai_files.
3. Call crew.kickoff(input_files={"my_file": File(source="path/to/file.pdf")}).
4. Observe the error: TypeError: Object of type File is not JSON serializable.
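The failure in the last step can be reproduced in isolation: httpx encodes the request body with the standard-library json encoder, which raises TypeError for any object it does not recognize. A minimal sketch with a stand-in File class (hypothetical, not the crewai_files implementation):

```python
import json

class File:
    """Stand-in for crewai_files.File; only mimics the failure mode."""
    def __init__(self, source: str):
        self.source = source

# A chat-completions payload that still contains the raw File object
payload = {"messages": [{"role": "user", "content": File(source="path/to/file.pdf")}]}

try:
    # Same encoder settings used by httpx._content.encode_json
    json.dumps(payload, ensure_ascii=False, separators=(",", ":"), allow_nan=False)
    error = None
except TypeError as exc:
    error = str(exc)

print(error)  # Object of type File is not JSON serializable
```

This matches the bottom of the traceback below: json.encoder.default raises for any type without a registered serializer, so the File object must be converted (or its contents extracted) before the request is built.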

Expected behavior

The agent's executor should handle the File object (e.g., extract or encode its contents for the model) instead of forwarding it raw into the JSON request body.

Screenshots/Code snippets

ERROR: OpenAI API call failed: Object of type File is not JSON serializable
ERROR: OpenAI API call failed: Object of type File is not JSON serializable

[CrewAIEventsBus] Warning: Event pairing mismatch. 'llm_call_failed' closed 'agent_execution_started' (expected 'llm_call_started')
An unknown error occurred. Please check the details below.
Error details: Object of type File is not JSON serializable
An unknown error occurred. Please check the details below.
Error details: Object of type File is not JSON serializable
[CrewAIEventsBus] Warning: Event pairing mismatch. 'agent_execution_error' closed 'task_started' (expected 'agent_execution_started')
[CrewAIEventsBus] Warning: Event pairing mismatch. 'task_failed' closed 'crew_kickoff_started' (expected 'task_started')
[CrewAIEventsBus] Warning: Ending event 'crew_kickoff_failed' emitted with empty scope stack. Missing starting event?
ERROR: Programmatic analysis failed for 138/2026
Traceback (most recent call last):
  File "/.../project/src/ai_albo_bot/telegram/notifier.py", line 207, in notify_users
    res = analyze.analyze_item(entry)
  File "/.../project/src/ai_albo_bot/agent/analyze.py", line 137, in analyze_item
    crew_result = crew.kickoff(input_files=input_files)
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/crew.py", line 743, in kickoff
    result = self._run_sequential_process()
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/crew.py", line 1150, in _run_sequential_process
    return self._execute_tasks(self.tasks)
           ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/crew.py", line 1236, in _execute_tasks
    task_output = task.execute_sync(
        agent=exec_data.agent,
        context=context,
        tools=exec_data.tools,
    )
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/task.py", line 499, in execute_sync
    return self._execute_core(agent, context, tools)
           ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/task.py", line 740, in _execute_core
    raise e  # Re-raise the exception after emitting the event
    ^^^^^^^
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/task.py", line 671, in _execute_core
    result = agent.execute_task(
        task=self,
        context=context,
        tools=tools,
    )
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/agent/core.py", line 493, in execute_task
    result = self.execute_task(task, context, tools)
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/agent/core.py", line 493, in execute_task
    result = self.execute_task(task, context, tools)
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/agent/core.py", line 492, in execute_task
    raise e
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/agent/core.py", line 459, in execute_task
    result = self._execute_without_timeout(task_prompt, task)
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/agent/core.py", line 568, in _execute_without_timeout
    return self.agent_executor.invoke(
           ~~~~~~~~~~~~~~~~~~~~~~~~~~^
        {
        ^
    ...<4 lines>...
        }
        ^
    )["output"]
    ^
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/agents/crew_agent_executor.py", line 207, in invoke
    formatted_answer = self._invoke_loop()
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/agents/crew_agent_executor.py", line 305, in _invoke_loop
    return self._invoke_loop_native_tools()
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/agents/crew_agent_executor.py", line 579, in _invoke_loop_native_tools
    raise e
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/agents/crew_agent_executor.py", line 498, in _invoke_loop_native_tools
    answer = get_llm_response(
        llm=self.llm,
    ...<9 lines>...
        verbose=self.agent.verbose,
    )
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/utilities/agent_utils.py", line 372, in get_llm_response
    raise e
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/utilities/agent_utils.py", line 362, in get_llm_response
    answer = llm.call(
        messages,
    ...<5 lines>...
        response_model=response_model,
    )
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/llms/providers/openai/completion.py", line 410, in call
    return self._call_completions(
           ~~~~~~~~~~~~~~~~~~~~~~^
        messages=formatted_messages,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<4 lines>...
        response_model=response_model,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/llms/providers/openai/completion.py", line 450, in _call_completions
    return self._handle_completion(
           ~~~~~~~~~~~~~~~~~~~~~~~^
        params=completion_params,
        ^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<3 lines>...
        response_model=response_model,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/llms/providers/openai/completion.py", line 1687, in _handle_completion
    raise e from e
  File "/.../project/.venv/lib/python3.13/site-packages/crewai/llms/providers/openai/completion.py", line 1585, in _handle_completion
    response: ChatCompletion = self.client.chat.completions.create(**params)
                               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
  File "/.../project/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 287, in wrapper
    return func(*args, **kwargs)
  File "/.../project/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 925, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<43 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/.../project/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1242, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/.../project/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 958, in request
    request = self._build_request(options, retries_taken=retries_taken)
  File "/.../project/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 535, in _build_request
    return self._client.build_request(  # pyright: ignore[reportUnknownMemberType]
           ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        headers=headers,
        ^^^^^^^^^^^^^^^^
    ...<10 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/.../project/.venv/lib/python3.13/site-packages/httpx/_client.py", line 378, in build_request
    return Request(
        method,
    ...<8 lines>...
        extensions=extensions,
    )
  File "/.../project/.venv/lib/python3.13/site-packages/httpx/_models.py", line 408, in __init__
    headers, stream = encode_request(
                      ~~~~~~~~~~~~~~^
        content=content,
        ^^^^^^^^^^^^^^^^
    ...<7 lines>...
        ),
        ^^
    )
    ^
  File "/.../project/.venv/lib/python3.13/site-packages/httpx/_content.py", line 216, in encode_request
    return encode_json(json)
  File "/.../project/.venv/lib/python3.13/site-packages/httpx/_content.py", line 177, in encode_json
    body = json_dumps(
           ~~~~~~~~~~^
        json, ensure_ascii=False, separators=(",", ":"), allow_nan=False
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ).encode("utf-8")
    ^
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/__init__.py", line 242, in dumps
    **kw).encode(obj)
          ~~~~~~^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/encoder.py", line 202, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/encoder.py", line 263, in iterencode
    return _iterencode(o, 0)
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/encoder.py", line 182, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
                    f'is not JSON serializable')
TypeError: Object of type File is not JSON serializable

Operating System

macOS 26

Python Version

3.13

crewAI Version

1.9.3

crewAI Tools Version

1.9.3

Virtual Environment

Venv

Evidence

INFO: OpenAI: Successfully validated tool 'read_file'
ERROR: OpenAI API call failed: Object of type File is not JSON serializable
ERROR: OpenAI API call failed: Object of type File is not JSON serializable
[CrewAIEventsBus] Warning: Event pairing mismatch. 'llm_call_failed' closed 'agent_execution_started' (expected 'llm_call_started')
An unknown error occurred. Please check the details below.
Error details: Object of type File is not JSON serializable
An unknown error occurred. Please check the details below.
Error details: Object of type File is not JSON serializable

Possible Solution

None
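No library-side fix is proposed above, but as a caller-side stopgap one can avoid input_files entirely: read the file contents yourself and pass them as a plain string via kickoff(inputs=...), which interpolates into {placeholders} in task descriptions. A sketch assuming the input is a readable text file (a PDF, as in the repro, would additionally need a text extractor such as pypdf, not shown; the File handling itself still needs fixing in crewai):

```python
from pathlib import Path

def file_to_inputs(name: str, path: str, max_chars: int = 20_000) -> dict[str, str]:
    """Read a text file into a plain-string inputs dict.

    Plain strings are JSON serializable, so they survive the httpx
    request encoding that the raw File object does not.
    """
    text = Path(path).read_text(encoding="utf-8", errors="replace")
    return {name: text[:max_chars]}  # truncate to keep the prompt bounded

# Usage (the task description must contain a "{my_file}" placeholder):
#   crew.kickoff(inputs=file_to_inputs("my_file", "path/to/file.txt"))
```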

Additional context

LLM: gemma3 via Docker Model Runner

Metadata

Labels: bug (Something isn't working)