llama-cli: add support for reasoning #16603
base: master
Conversation
I just updated to clean up the system/prompt tags (see description changes), but I will await feedback before changing anything else! 😊 One thing I was contemplating was splitting the display block into a separate abstraction. The display could become a new type; since more state was added here, it might be a good time to do refactors like this and encapsulate functionality incrementally.
Ack, I found an issue with the logic here: when part of the template string matches the "content" part, it falsely matches. For example, "You are a wizard" will match against "You are ChatGPT" once the template (below) is applied. So I think it has to match the surrounding tokens exactly first.
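To make the failure mode concrete, here is a minimal, self-contained illustration (the tags, strings, and helper names are hypothetical, not the PR's actual code): a loose prefix/substring match against the templated text fires on the template's own default content, while anchoring on the exact surrounding template tokens does not.

```cpp
// Hypothetical illustration of the false match described above.
#include <cstdio>
#include <string>

// Loose check: does a leading chunk of the system prompt appear anywhere in
// the templated text? "You are a wizard" -> "You are" is found inside
// "You are ChatGPT", so this reports a (false) match.
static bool loose_match(const std::string & templated, const std::string & sys) {
    const std::string head = sys.substr(0, sys.find(' ', sys.find(' ') + 1)); // e.g. "You are"
    return templated.find(head) != std::string::npos;
}

// Stricter check: require the full system prompt surrounded by the exact
// template tokens, so template-provided content cannot match by accident.
static bool exact_match(const std::string & templated, const std::string & sys,
                        const std::string & open_tag, const std::string & close_tag) {
    return templated.find(open_tag + sys + close_tag) != std::string::npos;
}

int main() {
    const std::string templated = "<|im_start|>system\nYou are ChatGPT.<|im_end|>\n";
    const std::string sys       = "You are a wizard";

    printf("loose: %d\n", loose_match(templated, sys)); // 1 -> false positive
    printf("exact: %d\n", exact_match(templated, sys,
                                      "<|im_start|>system\n", "<|im_end|>")); // 0
    return 0;
}
```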
After testing a bit I found it was not reliable to recover the system prompt tokens exactly in all cases, so I opted to simply print the system prompt and user prompt content before entering the loop.
LGTM, but could be improved.
}

private:
    common_chat_syntax syntax_;
This notation of a postfixed _ is unfamiliar and also not used anywhere else in this repo.
Sure, no problem. I am going to go through and double-check that things fit the code standards again, as it's been a while since my last contribution.
if (!diff.reasoning_content_delta.empty()) {
    result.push_back({diff.reasoning_content_delta, REASONING});
    had_reasoning_ = true;
}
if (!diff.content_delta.empty()) {
    if (had_reasoning_) {
        result.push_back({"\n", REASONING});
        had_reasoning_ = false;
    }
    result.push_back({diff.content_delta, CONTENT});
}
Since the thinking tags are eaten, it is really hard to separate the thinking from the rest of the output. Would it be an idea to highlight thinking in another color? That would require some additional logging API to check the status of color and/or logging with g_col.
Okay, sounds good! What do you think about adding something like "Thinking..." when the reasoning starts as well?
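For what it's worth, a rough sketch of both ideas combined (colored reasoning text plus a one-time "Thinking..." banner) could look like the following. It uses raw ANSI escapes and a standalone display() helper purely for illustration; the actual code would presumably go through the existing logging/color facilities (e.g. g_col) mentioned above.

```cpp
// Rough sketch: render the (text, kind) pairs emitted by the partial
// formatter, dimming reasoning text and printing "Thinking..." once when a
// reasoning block begins.
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

enum chunk_kind { REASONING, CONTENT };

static void display(const std::vector<std::pair<std::string, chunk_kind>> & chunks, bool use_color) {
    static bool thinking_banner_shown = false;
    const char * col_reasoning = use_color ? "\x1b[90m" : ""; // gray
    const char * col_reset     = use_color ? "\x1b[0m"  : "";

    for (const auto & [text, kind] : chunks) {
        if (kind == REASONING) {
            if (!thinking_banner_shown) {
                printf("%sThinking...%s\n", col_reasoning, col_reset);
                thinking_banner_shown = true;
            }
            printf("%s%s%s", col_reasoning, text.c_str(), col_reset);
        } else {
            thinking_banner_shown = false; // reset for the next turn
            printf("%s", text.c_str());
        }
        fflush(stdout);
    }
}
```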
Keeping the tags would be hard. I don't think it's much of an issue as long as we have visual separation; the main improvement here is enabling
If that's intended with jinja, then it's fine, but I would still suggest improving it in the future. As long as LLMs can still hallucinate and templates can be mismatched, it's always better to double-check.
@MaggotHATE Any chance you could provide an example of the intended testing scenario? Testing of course provides a nice angle: having features in llama-cli that complement the server, which might not want those capabilities built in. Side note: after this reasoning support lands I am going to revisit the tool-call capabilities (as this PR implements much of the required foundation). Part of my initial attempt was too complicated, especially once MCP added OAuth handshakes to the HTTP SSE transport; to me it doesn't make sense to add that kind of complexity, which is the realm of a scripting language. What "take two" will have is: (1) only a single toolcall.cpp/h inside the llama-cli project; (2) support for tool calls via the stdio transport only (because there are nice local nodejs proxies and so forth). This will add nice testability to the tool calls.
Any long, continuous dialog with a model would provide a good understanding of whether it works correctly and generates all required special tokens; this is especially important with different sampling combinations and settings. For example, old Magistral used to have problems with its thinking tags, which should be fixed in 2509 (I have only tested it briefly, as the model works better without reasoning). Moreover, the idea of "hybrid" reasoning is still in the air, which makes differentiating and outlining the reasoning portions of generated text even more important. I don't use Jinja, but my understanding is that it would only "render" correct combinations of tags; still, being able to actually see the entire template would be helpful for testing (maybe an arg?).
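Regarding seeing the entire rendered template: a rough sketch of such a dump (perhaps behind a CLI arg) is below, assuming it reuses common_chat_templates_apply, which this PR already calls. The struct and field names here (common_chat_templates_inputs, messages, add_generation_prompt, use_jinja, prompt) are my best guess at the common chat API and may not match exactly.

```cpp
// Rough sketch of a "dump the rendered template" helper for testing.
// Field/struct names are assumptions for illustration only.
#include "chat.h"

#include <cstdio>

static void dump_rendered_template(const common_chat_templates * tmpls) {
    common_chat_templates_inputs inputs;

    common_chat_msg sys_msg;
    sys_msg.role    = "system";
    sys_msg.content = "<system prompt>";
    common_chat_msg usr_msg;
    usr_msg.role    = "user";
    usr_msg.content = "<user prompt>";

    inputs.messages              = { sys_msg, usr_msg };
    inputs.add_generation_prompt = true;
    inputs.use_jinja             = true;

    const auto params = common_chat_templates_apply(tmpls, inputs);

    // Print exactly what would be fed to the model, special tokens included.
    printf("=== rendered template ===\n%s\n=========================\n", params.prompt.c_str());
}
```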
This change adds a "partial formatter" that processes partially collected messages (like the server's streaming logic) in order to render reasoning output before the EOG token arrives.
In addition, the chat_add_and_format lambda has been moved to a functor, which now calls common_chat_templates_apply directly to allow more robust template-application options.
Logic has also been put in place to suppress the system/prompt tags and clean up the output.
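For readers following along, a condensed sketch of the partial-formatter idea (not the PR's exact code) might look like the following; it assumes common_chat_parse() and common_chat_msg_diff::compute_diffs() from common/chat.h, with their signatures approximated.

```cpp
// Condensed sketch of the "partial formatter" idea: re-parse the accumulated
// text as a partial message on every generated piece, diff it against the
// previous parse, and hand back only the new reasoning/content deltas.
// Approximation for illustration; see the PR diff for the real implementation.
#include "chat.h"

#include <string>
#include <utility>
#include <vector>

struct partial_formatter {
    explicit partial_formatter(common_chat_syntax syntax_in) : syntax(std::move(syntax_in)) {}

    // Feed the next generated piece; returns the newly visible deltas.
    std::vector<common_chat_msg_diff> operator()(const std::string & piece) {
        accumulated += piece;

        common_chat_msg current = common_chat_parse(accumulated, /* is_partial = */ true, syntax);
        auto diffs = common_chat_msg_diff::compute_diffs(previous, current);

        previous = std::move(current);
        return diffs;
    }

private:
    common_chat_syntax syntax;
    common_chat_msg    previous;
    std::string        accumulated;
};
```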
Example output: