Add support for the ◁think▷...◁/think▷ format and DRY the thinking processing logic #16364
base: master
Conversation
… processing code
Why and how in hell did we end up getting the frontend to parse thinking tags? The backend returns thinking content inside a dedicated field.
Parsing thinking content on the frontend has been around since the previous version of the WebUI. It's necessary because there are cases where we get thinking content directly in the message content instead of the dedicated field; some models do return thinking content inline:
llama.cpp/tools/server/webui/src/lib/components/app/chat/ChatMessages/ChatMessage.svelte Lines 48 to 59 in 132d673
llama.cpp/tools/server/webui/src/lib/stores/chat.svelte.ts Lines 329 to 346 in 132d673
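For context, here is a minimal sketch (hypothetical names, not the exact code behind the permalinks above) of what that frontend-side parsing amounts to, assuming the model puts its reasoning between <think> and </think> inside message.content:

```ts
// Illustrative sketch only: split inline reasoning markers out of the message content.
const THINK_OPEN = '<think>';
const THINK_CLOSE = '</think>';

function splitInlineThinking(content: string): { reasoning: string; answer: string } {
  const start = content.indexOf(THINK_OPEN);
  const end = content.indexOf(THINK_CLOSE);

  // No inline thinking markers: everything is the final answer.
  if (start === -1 || end === -1 || end < start) {
    return { reasoning: '', answer: content };
  }

  return {
    reasoning: content.slice(start + THINK_OPEN.length, end).trim(),
    answer: (content.slice(0, start) + content.slice(end + THINK_CLOSE.length)).trim(),
  };
}
```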
So it's some kind of "not implemented on the backend (C++ code), but faster to implement on the frontend" situation?
Yes.
…ives
- Captured inline <think> segments during streaming, forwarding them to the reasoning UI while keeping the cleaned assistant message stream intact
- Tracked when explicit reasoning_content chunks arrive so inline capture is skipped once the server provides dedicated reasoning updates
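A hedged sketch of the streaming behaviour described in that commit message (the types and function names are illustrative, not the PR's actual code): inline <think> capture runs only until a dedicated reasoning_content delta shows up, after which the server's field is trusted.

```ts
// Illustrative types; the real delta shape comes from the OpenAI-compatible stream.
interface StreamDelta {
  content?: string;
  reasoning_content?: string;
}

interface StreamState {
  insideThink: boolean;        // currently inside an inline <think>...</think> block
  sawReasoningField: boolean;  // server already sends dedicated reasoning updates
  reasoning: string;
  answer: string;
}

function applyDelta(state: StreamState, delta: StreamDelta): void {
  if (delta.reasoning_content) {
    // Dedicated field present: trust it and skip inline capture from now on.
    state.sawReasoningField = true;
    state.reasoning += delta.reasoning_content;
  }

  if (!delta.content) return;

  if (state.sawReasoningField) {
    state.answer += delta.content;
    return;
  }

  // Naive inline capture for illustration; real streaming code must also handle
  // markers split across chunk boundaries.
  let chunk = delta.content;
  while (chunk.length > 0) {
    if (state.insideThink) {
      const close = chunk.indexOf('</think>');
      if (close === -1) { state.reasoning += chunk; return; }
      state.reasoning += chunk.slice(0, close);
      state.insideThink = false;
      chunk = chunk.slice(close + '</think>'.length);
    } else {
      const open = chunk.indexOf('<think>');
      if (open === -1) { state.answer += chunk; return; }
      state.answer += chunk.slice(0, open);
      state.insideThink = true;
      chunk = chunk.slice(open + '<think>'.length);
    }
  }
}
```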
Your PR already improves the old solution.
…ats, dropping the redundant <|channel|>analysis check now handled upstream
Tested with GPT-OSS-120B, Qwen3 A3B Thinking, and GLM 4.5 Air.
There's still one tricky edge case that isn't handled: some models expect the <think> tag to already be opened in the system prompt to start the chain-of-thought, and they were only trained to close it. With SFT done that way, compatibility with other models wasn't really considered, because on the very first chunk, how do you know whether it's reasoning or the final answer? Handling it would mean hooking into /props / the Jinja template again just to propagate extra info, but that feels like a brittle workaround and doesn't really align with the spirit of OpenAI compatibility. This isn't really a regression, though: I haven't seen any WebUI handle it correctly so far. Alternatively, upon detecting </think>, we could retroactively render the preceding text as a "thinking block" at the start of the streamed final answer, as sketched below.
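A minimal sketch of that retroactive idea, assuming the frontend keeps the streamed-so-far text in a simple state object (names are hypothetical):

```ts
// If a closing </think> arrives and we never saw an opening <think>, reclassify
// everything streamed so far as reasoning instead of final answer.
function reclassifyOnOrphanClose(state: { reasoning: string; answer: string }): void {
  const close = state.answer.indexOf('</think>');
  const open = state.answer.indexOf('<think>');
  if (close !== -1 && (open === -1 || open > close)) {
    state.reasoning += state.answer.slice(0, close);
    state.answer = state.answer.slice(close + '</think>'.length);
  }
}
```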
From memory, that's what the Qwen3 2507 Thinking model does: it doesn't open the tag, so we have to consider it already opened.
That way around it seems normal, but that model doesn't enforce the opening tag in the Jinja template of the available GGUFs, whereas, if I recall correctly, ERNIE-4.5-21B-A3B-Thinking-GGUF won't work without that forced opening in the Jinja template. https://huggingface.co/unsloth/ERNIE-4.5-21B-A3B-Thinking-GGUF?chat_template=default
The backend can use the chat template to know what to do.
Exposing the chat template to the client via an endpoint could fix this issue. But again, I still think it would be better to fully implement the logic on the backend and have nothing on the frontend that handles it.
Right now the handling of reasoning/thinking tags is split between the backend and the frontend. For GPT-OSS/Harmony the backend already parses <|channel|>analysis and streams it into delta.reasoning_content, so the WebUI just consumes message.reasoning_content. For other models (Qwen, GLM, etc.) the WebUI still uses legacy checks like content.includes("<think>") or content.includes("[THINK]"), which duplicates logic and makes the frontend fragile. I also think it would be more appropriate to centralize everything in the backend, and I find the idea interesting. I will try an implementation this weekend:
- common_chat_parse and its helpers detect all formats (<think>, [THINK], <|channel|>analysis, etc.).
- The parsed reasoning always goes into message.reasoning_content.
- If reasoning_in_content = true, the backend can re-inject it into message.content for legacy clients.
- The diff and JSON serialization already handle reasoning_content_delta, so the OpenAI-compatible output stays consistent.
This way the frontend no longer parses tags; it only reads message.reasoning_content. All models are normalized, the API is consistent, and adding a new model format only requires updating the backend parser once.
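If that proposal lands, the frontend side could shrink to something like this sketch (type and function names are hypothetical; the exact OpenAI-compatible delta shape should be checked against the server code):

```ts
// With the backend normalizing every format into reasoning_content, the WebUI
// only accumulates two fields per streamed delta and never touches tags.
interface ChatCompletionDelta {
  content?: string;
  reasoning_content?: string;
}

function accumulate(
  target: { reasoning: string; answer: string },
  delta: ChatCompletionDelta
): void {
  if (delta.reasoning_content) target.reasoning += delta.reasoning_content;
  if (delta.content) target.answer += delta.content; // no tag parsing needed here
}
```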
On another WIP branch, to keep this #16364 working as an alternative in case I fail:
Or
Or any other legacy format: [think], ◁think▷, <seed:think>, ... (see the sketch below)
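For reference, a single parser could be driven from a small table of marker pairs like the sketch below; the closing markers for [think] and <seed:think> are assumptions here, and the exact set supported by the backend should be checked against the llama.cpp source:

```ts
// Marker pairs mentioned in this thread; closing forms marked "assumed" are guesses.
const REASONING_MARKERS: ReadonlyArray<{ open: string; close: string }> = [
  { open: '<think>', close: '</think>' },
  { open: '[think]', close: '[/think]' },           // assumed closing form
  { open: '◁think▷', close: '◁/think▷' },
  { open: '<seed:think>', close: '</seed:think>' }, // assumed closing form
];
```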
Currently, --reasoning-format has a documented limitation. Goal: implement a universal streaming-aware C++ parser that works for all reasoning formats in both streaming and non-streaming modes, removing the need for the "(except in streaming mode...)" exception. #16394 -> works for me as an alternative to this PR (and it's compatible with Llama 3.3's inline <think>), even with a WebUI that does no parsing at all!
I don't even see the thinking tokens anymore; I just see the frontend say "processing". I have it enabled in the settings to be displayed.
The "Show thinking" option in the WebUI only controls whether the panel is opened by default or remains closed until the user expands it. It's purely a frontend behavior. To better understand your case, could you let us know: which llama.cpp version/commit you're running (did you pull the latest? There have been many fixes since the React -> Svelte migration), and which model exactly you're using (ideally a Hugging Face link)?
@ggerganov @ngxson I think we could probably close this PR in favour of #16394. Let me know what you guys think! :)
thinking.ts