Description
I am trying to get a local ollama-backed tiny-agent, with Playwright, to work. I have llama3.1:latest working with ollama, and I have a working tiny-agent which I execute via npx @huggingface/tiny-agents run ./my-agent
My agent.json looks like:
{
  "model": "llama3.1:latest",
  "endpointUrl": "http://localhost:11434",
  "servers": [
    {
      "type": "sse",
      "config": {
        "url": "http://localhost:8931/sse"
      }
    }
  ]
}
Before running the agent, I manually spin up Playwright via this command: npx @playwright/mcp --port 8931
The agent starts and I get a cursor, but when I enter a prompt, I always get some version of this error (JSON parse error):
Agent loaded with 25 tools:
- browser_close
- browser_resize
- browser_console_messages
- browser_handle_dialog
- browser_file_upload
- browser_install
- browser_press_key
- browser_navigate
- browser_navigate_back
- browser_navigate_forward
- browser_network_requests
- browser_pdf_save
- browser_take_screenshot
- browser_snapshot
- browser_click
- browser_drag
- browser_hover
- browser_type
- browser_select_option
- browser_tab_list
- browser_tab_new
- browser_tab_select
- browser_tab_close
- browser_generate_playwright_test
- browser_wait_for
> Close browser
browser_close {}
/home/.../playground/repos/huggingface.js/packages/tiny-agents/dist/cli.js:118
throw err;
^
SyntaxError: Unexpected token { in JSON at position 2
at JSON.parse (<anonymous>)
at Agent.processSingleTurnWithTools (/home/.../playground/repos/huggingface.js/packages/mcp-client/dist/src/index.js:233:71)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Agent.run (/home/.../playground/repos/huggingface.js/packages/mcp-client/dist/src/index.js:328:9)
at async mainCliLoop (/home/.../playground/repos/huggingface.js/packages/tiny-agents/dist/cli.js:129:22)
at async main (/home/.../playground/repos/huggingface.js/packages/tiny-agents/dist/cli.js:343:5)
Node.js v18.19.1
I think the issue has to do with the JSON.parse calls. I've tried stdio, http, and sse for connecting to Playwright, including letting the tiny-agents software issue the npx command for me. I suspect it has something to do with my leveraging a local model, but I couldn't figure it out, which is why I'm here. Any and all assistance is appreciated.
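For what it's worth, a minimal standalone check like the following can rule out the SSE transport itself (this is just my sketch against the @modelcontextprotocol/sdk client API; "sanity-check" and the version string are placeholder names):

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Connect to the already-running Playwright MCP server and list its tools.
const transport = new SSEClientTransport(new URL("http://localhost:8931/sse"));
const client = new Client({ name: "sanity-check", version: "0.0.1" });

await client.connect(transport);
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // should print the 25 browser_* tools
await client.close();

Since the agent does load all 25 tools, the transport seems fine, which again points at the model/parsing side.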
My versions of packages are:
dependencies:
@huggingface/tiny-agents link:../repos/huggingface.js/packages/tiny-agents
@modelcontextprotocol/sdk 1.12.0
@playwright/mcp 0.0.27
express 5.1.0
ollama 0.5.15
(I was originally on @huggingface/tiny-agents v0.2.3, but I pulled down the repo, built it, and installed it to see if that would work; it didn't.)
I'm running Ubuntu 24.04.
Activity
sdelahaies commented on May 30, 2025
I am facing a similar issue while trying to complete unit 2 of the HF mcp-course, Building a Tiny Agent with TypeScript.
Borrowing from your JSON config and running
npx @huggingface/tiny-agents run ./my-agent
leveraging the local model doesn't seem to be the issue here, as we see what I guess is a legitimate call to the MCP tool... but still a JSON.parse error in the end...
sdelahaies commented on Jun 1, 2025
@alexdconf my problem may be related to yours after all. Inspecting
.../node_modules/@huggingface/mcp-client/dist/src/index.js
and putting console.log everywhere to debug, it seems that toolCall.function.arguments
at line 233 contains the arguments string twice because of lines 222-224; commenting out lines 222-224 fixes the parsing issue, as we then have a well-formed JSON dump.
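That would also match your original error: duplicating the empty args {} yields {}{}, and JSON.parse fails with "Unexpected token { in JSON at position 2". A simplified sketch of the mechanism (made-up names, not the actual mcp-client source):

interface ToolCallState {
  name: string;
  arguments: string; // accumulated tool-call argument string
}

// Per the OpenAI streaming spec, `arguments` arrives as fragments
// ('{"da', 'taset":', ...) that the client concatenates. If the backend
// instead repeats the already-complete JSON string, a blind append
// produces '{}{}' and the later JSON.parse throws as seen above.
function appendArguments(state: ToolCallState, chunk: string): void {
  state.arguments += chunk;
}

// Defensive variant: skip a chunk that repeats what we already hold.
function appendArgumentsSafe(state: ToolCallState, chunk: string): void {
  if (chunk && state.arguments !== chunk) {
    state.arguments += chunk;
  }
}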
I can make it work with the following example,
with the following agent config,
and the prompt
find details about MNIST dataset
However, when the LLM passes an argument that should be a number as a string, I am getting an error, for instance:
julien-c commented on Jun 2, 2025
Hi!
I think there are a couple of different issues here, but I think one way to debug is to feed back errors into the LLM. Let me try to open a PR to do this.
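Roughly the idea, as a sketch (not the actual PR code; message shapes simplified):

type ChatMessage = { role: "assistant" | "user" | "tool"; content: string };

// Instead of letting JSON.parse throw and kill the agent loop, surface the
// error to the model as a tool result so it can retry with valid JSON.
function tryParseToolArgs(
  raw: string,
  messages: ChatMessage[]
): Record<string, unknown> | undefined {
  try {
    return JSON.parse(raw) as Record<string, unknown>;
  } catch (err) {
    messages.push({
      role: "tool",
      content: `Tool call failed: invalid JSON in arguments (${String(err)}). Please retry with valid JSON.`,
    });
    return undefined; // skip the tool call on this turn
  }
}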
julien-c commented on Jun 2, 2025
in the meantime, try using a larger model! It will generate valid JSON more reliably
julien-c commented on Jun 2, 2025
If you are able, can you please give #1511 a try locally?
sdelahaies commented on Jun 2, 2025
Hello, thanks for following up on this. As far as my problem goes, it seems that the size of the model is not the issue: I tested with llama3.2:3b and qwen2.5-coder:7b, both models producing legitimate JSON outputs and ending with the same error.
The problem came from the doubling of the tool_call arguments, which I overcame by commenting out lines 222-224 of mcp-client/..../index.js
For the other issue, concerning a tool expecting an argument as a number: I guess that's a common issue when you expose functions as MCP/LLM tools. You'd better treat the inputs as
str
no matter what and deal with the conversion to number (or whatever) in the tool's core logic; otherwise you need to enforce type checking before you call a tool... I can give #1511 a try, although I am not quite sure how, since the tiny-agent exits as soon as the error strikes.
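To illustrate the point above about coercing inside the tool, here is a hypothetical handler (names made up, not from any package mentioned here):

// Accept numeric parameters as strings and coerce inside the tool, so a
// model that emits {"limit": "5"} instead of {"limit": 5} doesn't break.
function toNumber(value: unknown, name: string): number {
  const n = typeof value === "number" ? value : Number(value);
  if (Number.isNaN(n)) {
    throw new Error(`Parameter "${name}" must be a number, got ${String(value)}`);
  }
  return n;
}

function searchDatasets(args: { query: string; limit?: unknown }) {
  const limit = args.limit === undefined ? 10 : toNumber(args.limit, "limit");
  // ...run the actual search with a guaranteed-numeric limit...
  return { query: args.query, limit };
}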
julien-c commented on Jun 3, 2025
we'll push a new release with #1511 inside, so it's easier to test it locally (otherwise you have to build locally using
pnpm
).
julien-c commented on Jun 3, 2025
ok ok @sdelahaies i can now reproduce your error. Let me see how to best fix it
julien-c commented on Jun 3, 2025
Ok, how does this look @sdelahaies? #1512
sdelahaies commented on Jun 3, 2025
Hello, it does solve the issue indeed, great, thanks!
Handle ollama's deviation from the OpenAI tool streaming spec (#1512)
julien-c commented on Jun 4, 2025
thanks a lot for the debugging info + resolution proposal @sdelahaies, it's been merged
[mcp-client] Re-inject errors into the LLM (#1511)