Bug Description
When using OpenCook with an Ollama provider (specifically qwen3.5:9b), the agent crashes during the iterative loop (typically Step 3) when attempting to send a tool call result back to the model. The crash is triggered by a Pydantic validation error because the message object passed to the Ollama client lacks a mandatory role field.
Error Traceback snippet:
pydantic_core._pydantic_core.ValidationError: 1 validation error for Message role Field required [type=missing, input_value={'call_id': 'e8a01082-5c3... 'function_call_output'}, input_type=dict]
Steps to Reproduce
- Install OpenCook:
pip install git+https://github.com/OpenDataBox/OpenCook.git
- Create a configuration file (config.json) with an Ollama provider:
{
  "provider": "ollama",
  "model": "qwen3.5:9b",
  "model_base_url": "http://localhost:11434",
  "max_steps": 50,
  "console_type": "simple",
  "agent_type": "code_agent"
}
- Run the agent:
opencook run "some task" --config-file config.json --provider ollama --model qwen3.5:9b --api-key dummy --working-dir .
- Observe the crash: The agent successfully initiates a tool call but fails immediately when processing the output of that tool for the next LLM turn.
Expected Behavior
The tool call result should be formatted as a valid message object containing a role field (e.g., "role": "tool") before being sent back to the Ollama API, allowing the model to process the observation.
Actual Behavior
The ollama_client.py constructs a message dictionary containing the call_id and output, but omits the role key. The underlying Ollama Python library uses Pydantic for schema enforcement and rejects the malformed dictionary.
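The failure mode can be illustrated with a stdlib-only sketch that mimics what the library's Pydantic `Message` model enforces. This is not the real validation code (that lives inside the Ollama Python library), and the dict values below are hypothetical placeholders:

```python
# Stdlib-only sketch of the failing validation path. The real check is
# performed by a Pydantic Message model inside the Ollama Python library;
# this only mimics the "role Field required" rejection.
def validate_message(msg: dict) -> dict:
    if "role" not in msg:
        raise ValueError("1 validation error for Message: role Field required")
    return msg

# Shaped like the dict ollama_client.py currently builds for a tool result
# (the call_id value here is a placeholder, not the real id):
bad_message = {
    "call_id": "abc-123",
    "output": "tool output",
    "type": "function_call_output",
}

try:
    validate_message(bad_message)
except ValueError as exc:
    print(exc)  # rejected: no "role" key present
```

Any message history entry that reaches the client without a `role` key fails in the same way, regardless of which step of the loop produced it.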
Environment
- OpenCook version: 0.1.0
- Python: 3.10
- OS: Windows 11
- Ollama version: latest
- Model: qwen3.5:9b
Root Cause
The issue is located in code_agent/utils/llm_clients/ollama_client.py at approximately line 83. The method _create_ollama_response (or the logic handling the message history assembly) fails to inject the required role attribute into the message dictionary representing a tool's output.
Suggested Fix
Ensure that the dictionary for tool responses includes the "role": "tool" key and maps the output to the "content" key as expected by the Ollama/OpenAI-compatible schema.
Proposed Message Structure:
{
  "role": "tool",
  "tool_call_id": "...",  # or call_id depending on library version
  "content": "..."
}