
Adding a tool to streamText breaks messages using Ollama provider / Assistant-UI #5330

Closed
@klalka-dev

Description

When adding a tool to the streamText options, messages from the assistant come back blank. According to the raw responses the tools are never called, and if I put a console.log inside the tool's execute function I never see any output.

Library versions:

ollama-ai-provider 1.2.0
ai 4.1.47
@assistant-ui/react-ai-sdk 0.8.0

Code example

api/chat/route.ts

import { createOllama } from "ollama-ai-provider";
import { streamText } from "ai";
import { weatherTool } from "@/lib/ai/tools";

const ollama = createOllama({
  baseURL: `${process.env.OLLAMA_API}/api`,
});

export async function POST(req: Request) {
  const { messages } = await req.json();
  const lastMessage = messages[messages.length - 1].content[0].text;

  try {
    const result = streamText({
      model: ollama("llama3.2"),
      system: `You are a helpful assistant. Be polite.`,
      messages,
      async onFinish(completed) {
        console.log(JSON.stringify(completed));
      },
      tools: {
        getWeather: weatherTool
      }
    });

    return result.toDataStreamResponse({
      getErrorMessage: (error) => {
        if (error == null) {
          return "unknown error";
        }

        if (typeof error === "string") {
          return error;
        }

        if (error instanceof Error) {
          return error.message;
        }

        return JSON.stringify(error);
      },
    });
  } catch (error) {
    console.error("Chat POST Error", error);
    return new Response(JSON.stringify(error), { status: 500 });
  }
}
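
Two settings in this stack may be worth ruling out when tool calls come back empty (a sketch of things to try, not a confirmed fix): ollama-ai-provider documents a simulateStreaming model setting for models that cannot stream tool calls, and streamText accepts a maxSteps option so the model can generate a text reply after a tool result arrives. A minimal variant of the call above, assuming both options behave as documented:

const result = streamText({
  // simulateStreaming performs a normal completion and replays it as a
  // stream; the provider README suggests it for models that do not
  // support streaming tool calls
  model: ollama("llama3.2", { simulateStreaming: true }),
  system: `You are a helpful assistant. Be polite.`,
  messages,
  tools: {
    getWeather: weatherTool,
  },
  // allow a second generation step so the model can answer using the
  // tool result instead of stopping after the tool call
  maxSteps: 2,
});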

lib/ai/tools.ts

import { tool } from 'ai'
import { z } from 'zod'

export const weatherTool = tool({
  description: 'Get the weather in a location',
  parameters: z.object({
    location: z.string().describe('The location to get the weather for'),
  }),
  // location below is inferred to be a string:
  execute: async ({ location }) => ({
    location,
    temperature: 72 + Math.floor(Math.random() * 21) - 10,
  }),
});
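
To confirm the tool itself is wired correctly, its execute function can be invoked directly, bypassing the model entirely. A quick smoke test (the second argument stands in for the options object the SDK supplies at runtime; the toolCallId here is a made-up placeholder):

// Call the tool by hand; no model involved.
const result = await weatherTool.execute(
  { location: "Boston" },
  { toolCallId: "test-call", messages: [] }
);
console.log(result); // e.g. { location: "Boston", temperature: 75 }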

app/page.tsx

"use client";

import { Thread } from "../components/assistant-ui/thread";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
import { AssistantRuntimeProvider } from "@assistant-ui/react";

export default function Home() {
  const runtime = useChatRuntime({
    api: "/api/chat",
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}

Submitting a chat message in the chat window with this code produces the response below. The chat application finishes without an error and displays no text.

RESPONSE (output of the onFinish console.log above):
{"finishReason":"stop","usage":{"promptTokens":177,"completionTokens":15,"totalTokens":192},"text":"","reasoningDetails":[],"sources":[],"toolCalls":[],"toolResults":[],"request":{"body":"{\"model\":\"llama3.2\",\"options\":{\"temperature\":0},\"messages\":[{\"content\":\"You are a helpful assistant. Be polite.\",\"role\":\"system\"},{\"content\":\"Hello, how are you today, chat assistant?\",\"role\":\"user\"}],\"tools\":[{\"function\":{\"description\":\"Get the weather in a location\",\"name\":\"getWeather\",\"parameters\":{\"type\":\"object\",\"properties\":{\"location\":{\"type\":\"string\",\"description\":\"The location to get the weather for\"}},\"required\":[\"location\"],\"additionalProperties\":false,\"$schema\":\"http://json-schema.org/draft-07/schema#\"}},\"type\":\"function\"}]}"},"response":{"id":"aitxt-vZyIq8L9T6mitQZ7vLL96Dt4","timestamp":"2025-03-22T17:04:29.820Z","modelId":"llama3.2","headers":{"content-type":"application/x-ndjson","date":"Sat, 22 Mar 2025 17:04:29 GMT","transfer-encoding":"chunked"},"messages":[{"role":"assistant","content":[{"type":"text","text":""}],"id":"msg-OhJWelHl8tHM09aKtmmo8lvd"}]},"warnings":[],"steps":[{"stepType":"initial","text":"","reasoningDetails":[],"sources":[],"toolCalls":[],"toolResults":[],"finishReason":"stop","usage":{"promptTokens":177,"completionTokens":15,"totalTokens":192},"warnings":[],"request":{"body":"{\"model\":\"llama3.2\",\"options\":{\"temperature\":0},\"messages\":[{\"content\":\"You are a helpful assistant. Be polite.\",\"role\":\"system\"},{\"content\":\"Hello, how are you today, chat assistant?\",\"role\":\"user\"}],\"tools\":[{\"function\":{\"description\":\"Get the weather in a location\",\"name\":\"getWeather\",\"parameters\":{\"type\":\"object\",\"properties\":{\"location\":{\"type\":\"string\",\"description\":\"The location to get the weather for\"}},\"required\":[\"location\"],\"additionalProperties\":false,\"$schema\":\"http://json-schema.org/draft-07/schema#\"}},\"type\":\"function\"}]}"},"response":{"id":"aitxt-vZyIq8L9T6mitQZ7vLL96Dt4","timestamp":"2025-03-22T17:04:29.820Z","modelId":"llama3.2","headers":{"content-type":"application/x-ndjson","date":"Sat, 22 Mar 2025 17:04:29 GMT","transfer-encoding":"chunked"},"messages":[{"role":"assistant","content":[{"type":"text","text":""}],"id":"msg-OhJWelHl8tHM09aKtmmo8lvd"}]},"isContinued":false}]}

The assistant message content is blank: "messages":[{"role":"assistant","content":[{"type":"text","text":""}]
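
One way to tell whether the blank text comes from the provider or from the model itself is to call Ollama's /api/chat endpoint directly with the same tool definition and inspect the raw reply. A minimal sketch, reusing the OLLAMA_API variable from the route above:

// Bypass ollama-ai-provider and ask Ollama directly whether llama3.2
// returns a tool call (message.tool_calls) or empty content.
const res = await fetch(`${process.env.OLLAMA_API}/api/chat`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.2",
    stream: false,
    messages: [{ role: "user", content: "What is the weather in Boston?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "getWeather",
          description: "Get the weather in a location",
          parameters: {
            type: "object",
            properties: {
              location: {
                type: "string",
                description: "The location to get the weather for",
              },
            },
            required: ["location"],
          },
        },
      },
    ],
  }),
});
console.log(JSON.stringify(await res.json(), null, 2));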

AI provider

ollama-ai-provider 1.2.0

Additional context

I have a local AI server running Ollama and modified the example code from the documentation to use the llama3.2 model. I am using the assistant-ui library for the application UI and ollama-ai-provider for the model. I first ran into this issue when adding a tool that makes a Supabase RPC call, so I removed my tool, substituted the weather tool from the documentation, and saw the exact same behavior. I'm unsure whether it's a problem with the Ollama model provider or with the assistant-ui library.
