common : implement parser combinators for chat parsing [WIP] #17136
Conversation
Yes! This is exactly what I was thinking about :) Can you give me push rights to your repo so I can contribute without doing PRs to PRs?

---
Sure. I've never managed permissions on a GitHub repo, but let me know if you can't push. The interface isn't solidified, so hammer away. I do want to clean up the header and move stuff into the source file; figured I'd handle that as I get further along.

The partial parsing works, but it does require careful attention when editing. The idea is to "succeed" if the parse tree is only partially traversed and the input is marked as incomplete. With some caveats: if a literal is partially matched, it will propagate a result indicating we need more input. I intend to add a

I need to clean up the caching. Initially, I thought maybe we could reuse the cache as we get more and more input, but I'm finding it very difficult to find the correct time to cache. So I'm thinking about nixing that idea and just providing a cache per parsing run--as the packrat algorithm originally intended. Then we can profile whether caching is beneficial on a real example. I suspect there shouldn't be a whole lot of backtracking, so the memory cost might not be worth it if the gains are minuscule.
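A minimal, self-contained sketch of that literal behavior (simplified types; not the PR's actual code):

```cpp
#include <cassert>
#include <string>

// Three-way outcome, assuming the "succeed on incomplete input" semantics
// described above.
enum class parse_result { fail, success, need_more_input };

// Match `lit` at `pos`. If the input ends in the middle of a potential
// match, report need_more_input instead of failing outright.
static parse_result match_literal(const std::string & input, size_t pos, const std::string & lit) {
    for (size_t i = 0; i < lit.size(); i++) {
        if (pos + i == input.size()) {
            return parse_result::need_more_input; // ran out of input mid-literal
        }
        if (input[pos + i] != lit[i]) {
            return parse_result::fail;            // definite mismatch
        }
    }
    return parse_result::success;
}

int main() {
    assert(match_literal("<think>x", 0, "<think>") == parse_result::success);
    assert(match_literal("<thi",     0, "<think>") == parse_result::need_more_input);
    assert(match_literal("<tool",    0, "<think>") == parse_result::fail);
    return 0;
}
```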
---
Aight, let me bounce my original idea: what if we just created a GBNF parser builder and used that to parse the messages? Then we have both problems (tool call / reasoning and compatibility with normal parsing) done in one go. Unless (haven't looked into it) it would just be too inefficient for normal content parsing? Right now it feels like we're adding another intermediate abstraction while GBNF is already implemented in GGML--so maybe just use a builder as an abstraction layer to create all the needed objects and add any missing partial parse support? This is just an idea, I'm not very fixated on it, just thought I'd share it.

Regarding memory costs and the packrat parser, I think O(n) with typical LLM inputs is negligible; even with super long contexts we're looking at a few MB of overhead at most.
---
Sounds like you're thinking of a parser generator, something like yacc, bison, or ANTLR. The problem I see with those solutions is they require building a parse table upfront, which is less intuitive than building a parse tree as in this PR. You could create a recursive descent parser, but that would have to be done at compile time. If you did it at runtime, I think the solution would look a lot like this!

I haven't examined the GBNF code with a scalpel, but taking a brief look, it seems to use a pushdown automaton, and it may be challenging to extract content from. Not that we would want to, since it is part of the core and not common. I believe there is a desire to keep the chat parsing isolated in common.

I also think you lose the expressiveness of being able to define the grammar in C++. For example, with this solution we could add a

The solutions I mentioned above do this by defining their own language to insert code--not pretty in my experience.

That said, I am open to ideas. If you have a clearer picture of what that looks like, I'm happy to review. I understand inserting a new abstraction is a tough ask. I wanted to roll out a PoC to hopefully show value.
---
@aldehir Nah, you're probably right. I looked at the GBNF code and in fact it would take too much effort to extract the parsed content from there. We're better off just doing it your way. I'll try to code some of the missing pieces. |
---
@pwilkin great! If you have any questions, feel free to ask. |
---
Aight, I'm done with the hybrid ops and convert_hf_to_gguf refactoring cleanup, so I'll probably finally look at this tomorrow :> |
---
No rush. I am getting closer to a set of parsing functions that I'm happy with. The unfortunate part is I had to roll specialized parsers to maintain comparable performance with the existing parsing. A lexer would likely help, but optimized parsers for certain use cases are enough for now.

I added a benchmark in the test that implements the Command R7B parser and compares it to the existing one. It seemed like a good one to illustrate. The existing parsing has a leg up with JSON. That said, it's still a fraction of a millisecond for a full prompt. I think most of the cost will go into the constrained decoding anyway. I'll have to benchmark larger JSON documents. Worst case, we can fall back to the implementation in
---
I'm thinking we should put the helpers in a separate file. The parser implementation is pretty big. It feels complete, though. |
---
@aldehir Yeah, I split off the helper as a subclass of the main builder and will add any further helpers there; that should avoid overfilling the main parser class. I also reverted the old explicit Qwen3 parser builder and added the new helper alongside it. Restructured the test a bit to make it clearer. Now I'm going to try to add as many of the old parsers as possible to see how well it goes and potentially extract good patterns for the helpers.
---
Aight, Minimax M2 and Seed-OSS are up. With the first one, I made a stupid mistake of writing tool definitions that didn't match the tool calls, so I couldn't get a proper parse; I added some debugging prints, plus a live example of how to use them :)

BTW, the current behavior is that if an incorrect function call is detected, it's still marked as a success, since zero_or_more always succeeds. Not sure if we want to pass a failure along somehow (as in, zero_or_more only trivially succeeds if the rest is empty?).
---
Thanks! I just found a case for keeping tests to one source file: it's a little hard to test in isolation :). If they were in a single source file, you could run

By incorrect, do you mean the model generated an invalid tool call? I don't think that should happen in practice. With constrained decoding we enforce the grammar, so it should be parseable. If there are no tools, then we shouldn't constrain, and we should make the reasoning/content parsing as permissive as possible. We also shouldn't build a parser that has tool-calling support in that case; it should just consume all content until the end. You can add
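To illustrate why `zero_or_more` can never fail, a small self-contained sketch (function-based parsers for brevity; not the PR's implementation):

```cpp
#include <cassert>
#include <cctype>
#include <functional>
#include <string>

// Simplified parser: returns characters consumed, or -1 on failure.
using parser_fn = std::function<int(const std::string &, size_t)>;

// zero_or_more applies p greedily; matching zero times is still a success,
// which is why a bad tool call inside it surfaces as "success" upstream.
static parser_fn zero_or_more(parser_fn p) {
    return [p](const std::string & in, size_t pos) {
        int total = 0;
        for (;;) {
            int n = p(in, pos + total);
            if (n <= 0) break; // child failed or matched nothing: stop, don't fail
            total += n;
        }
        return total;          // always >= 0
    };
}

int main() {
    parser_fn digit = [](const std::string & in, size_t pos) {
        return pos < in.size() && std::isdigit((unsigned char) in[pos]) ? 1 : -1;
    };
    auto digits = zero_or_more(digit);

    assert(digits("123abc", 0) == 3); // consumed "123"
    assert(digits("abc",    0) == 0); // trivial success, zero matches
    // A stricter variant could report failure when zero matches were made
    // and unconsumed input remains, as suggested above.
    return 0;
}
```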
```cpp
if (ev.rule.find("arg-string") != std::string::npos && ev.ending() && ev.success()) {
    auto & tc = semantics.tool_calls.back();
    tc.arguments += "\"";
}
```
This was causing me some grief, because it matches the `arg-string-content` above and was adding two quotes to the end.

I don't think we should do a search here. I assume you mean to match `arg-string-<param>`. We can still produce those rules, but wrap them with an `arg-string` rule and use direct string comparison, just to avoid tiny bugs like this.
Actually, never mind. I see why you did that. Ok, I have to rethink this.
---
Ok, to better support writing custom helpers and simplify a few things, I'm going to:
---
Yeah, I was thinking something similar: either add an extra property or make the rule name itself structured somehow (as in "category" and "name").
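A tiny sketch of the structured-name variant (hypothetical types, just to make the idea concrete):

```cpp
#include <string>

// Hypothetical structured rule identifier: dispatch on the category,
// keep the parameter-specific part separate.
struct rule_id {
    std::string category; // e.g. "arg-string"
    std::string name;     // e.g. the parameter name
};

// An event handler could then compare categories exactly instead of
// substring-searching the full rule name:
//   if (ev.rule.category == "arg-string" && ev.ending() && ev.success()) { ... }
```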
- Refactor all messaging to use LOG_ERR
- Fix lack of argument / tool name capturing
- Temporary fix for double event capture
---
Alright, I added some stuff. Besides doing a temporary workaround for the double-event problem (I renamed
Besides that, I fixed in the other tests the one thing I had already fixed in the Minimax-M2 test but forgot to mention: the logic for determining whether you should do a complete parse was wrong, because
Putting this out there as a proof-of-concept and to gather feedback. It is still a WIP.
cc @pwilkin
Problem
Each model currently requires a custom parser to handle reasoning and tool calls. XML-based models are especially tricky to parse. For example, Qwen3-Coder outputs:
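Along these lines (an illustrative reconstruction of Qwen3-Coder's XML-style tool-call format; the exact sample may differ):

```
<tool_call>
<function=get_weather>
<parameter=location>
San Francisco
</parameter>
<parameter=unit>
celsius
</parameter>
</function>
</tool_call>
```

Here `location` and `unit` are raw strings, but a typed parameter (say, an array) would be emitted as JSON in the same position.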
The main issue is the typed arguments. A raw string looks the same as JSON until you try to parse it. One workaround is to treat it as a string only if JSON parsing fails. A better approach is to let the argument types drive the parser directly.
Proposal
I propose using parser combinators to simplify parsing. We can compose parsers suitable for PEG grammars, which should make parsing model output much easier. With this approach, we can generate specialized parsers on the fly that match the argument type specifications. This PR implements a proof-of-concept.
Here's an example of what that currently looks like:
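Roughly along these lines, sketched from the API documented under Specifics below (assuming a builder object `p` as used in the Drawbacks section; rule names are illustrative):

```cpp
// Reasoning block: everything up to the closing tag.
auto reasoning = "<think>" + p.until("</think>") + p.literal("</think>");

// Tool call whose arguments are parsed as JSON.
auto tool_call = "<tool_call>" << p.json() << p.literal("</tool_call>");

// A message is optional reasoning followed by content and/or tool calls.
auto message = p.optional(reasoning) + p.zero_or_more(tool_call | p.any());

p.add_rule("message", message);
```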
The generated parse tree can be used to produce a GBNF grammar. The plan is to build the parser during chat param initialization and derive grammar rules with support for lazy triggers. This should support both
`tool_choice = auto` and `tool_choice = required`.

Specifics
NOTE: This is still a WIP. I am iterating over the parsers and seeing what works well.
This PR implements parser combinators for PEG grammars. It uses caching to implement packrat parsing. The following are implemented:
Basic Parsers
- `literal(string)` - Matches an exact literal string. `S -> "hello"`
- `any()` - Matches any single character. `S -> .`
- `one(classes)` - Matches a single character from a character class or range. `S -> [a-z]` or `S -> [^0-9]`
- `chars(classes, min, max)` - Matches between min and max repetitions of characters from a character class. `S -> [a-z]{m,n}`. Use -1 for max to represent unbounded repetition `{m,}`.
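For instance, a sketch using these (same assumed builder `p` as in the Drawbacks section):

```cpp
// An identifier: one lowercase letter, then lowercase letters or digits.
auto ident = p.one("a-z") + p.chars("a-z0-9", 0, -1);
```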
Operators

Parsers can be combined using operator overloading for convenient syntax:
- `~p` - Negative lookahead, equivalent to `negate(p)`. `S -> !A`
- `p1 + p2` - Sequence, matches p1 followed by p2; equivalent to `sequence({p1, p2})`. `S -> A B`
- `p1 | p2` - Choice, matches p1 or p2; equivalent to `choice({p1, p2})`. `S -> A | B`
- `p1 << p2` - Sequence with whitespace in between; equivalent to `sequence({p1, space(), p2})`. `S -> A [ \t\n]* B`

Operators also work with string literals on the left side:
"literal" + p- Sequence starting with a literal string"literal" | p- Choice with a literal string as first alternative"literal" << p- Literal followed by whitespace then parserCombinators
Combinators

- `sequence(parsers)` - Matches a sequence of parsers in order; all must succeed. `S -> A B C`
- `choice(parsers)` - Matches the first parser that succeeds from a list of alternatives. `S -> A | B | C`
- `one_or_more(p)` - Matches one or more repetitions of a parser. `S -> A+`
- `zero_or_more(p)` - Matches zero or more repetitions of a parser; always succeeds. `S -> A*`
- `optional(p)` - Matches zero or one occurrence of a parser; always succeeds. `S -> A?`
- `repeat(p, min, max)` - Matches between min and max repetitions of a parser (inclusive). `S -> A{m,n}`. Use -1 for max to represent unbounded repetition `{m,}`.
- `repeat(p, n)` - Matches exactly n repetitions of a parser. `S -> A{n}`
- `negate(p)` - Negative lookahead: succeeds if the child parser fails; consumes no input. `S -> !A`
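For example, bounded repetition (illustrative, same assumed builder):

```cpp
// A dotted quad: 1-3 digits, then exactly three ".digits" groups.
auto octet  = p.chars("0-9", 1, 3);
auto dotted = octet + p.repeat(p.literal(".") + octet, 3);
```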
Utility Parsers

- `space()` - Matches zero or more whitespace characters (space, tab, newline). `S -> [ \t\n]*`
- `until(delimiter, consume_spaces)` - Matches all characters until a delimiter is found (the delimiter is not consumed). `S -> (!delim .)*`
- `rule(name)` - References a named rule for recursive or reusable grammar definitions. `expr -> term | expr "+" term`
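To make `rule()` concrete, a recursive sketch (illustrative names; `add_rule` is documented under Rule Management below):

```cpp
// Nested tags: a body is any mix of text and nested <b>...</b> elements.
p.add_rule("body", p.zero_or_more(p.rule("bold") | (~p.literal("</b>") + p.any())));
p.add_rule("bold", "<b>" + p.rule("body") + p.literal("</b>"));
```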
JSON Parsers

- `json()` - Creates a complete JSON parser supporting objects, arrays, strings, numbers, booleans, and null. `value -> object | array | string | number | true | false | null`
- `json_string()` - Specialized single-pass JSON string parser with escape-sequence handling
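Typical use, sketched against the same assumed builder:

```cpp
// Tool-call arguments as an arbitrary JSON value between tags.
auto args = "<arguments>" << p.json() << p.literal("</arguments>");
```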
GBNF Integration

- `schema(p, name, schema)` - Wraps a parser with JSON schema metadata for grammar generation. Used internally to convert JSON schemas to GBNF grammar rules.
- `trigger(p)` - Marks the parser as the start of a trigger rule.
Rule Management

- `add_rule(name, p)` - Adds a named rule to the grammar for reuse and recursion
The operators `+`, `|`, and `~` construct `sequence`, `choice`, and `negate` parsers respectively. The `<<` operator includes a space rule between parsers.

Drawbacks
- Parsers that match content while excluding certain patterns, such as end tags, have a less obvious syntax. For example, `p.zero_or_more(~(space + p.literal("</think>")) + p.any())` matches any character that isn't followed by `</think>`. The `p.until("</think>")` parser is intended to simplify this.
- Packrat parsing requires caching all intermediate parse results, which introduces memory overhead proportional to input size and grammar complexity.
- Each model still requires a custom parser, though they share a common framework that simplifies implementation.
- Parser combinators may offer less flexibility for handling malformed model output compared to hand-written parsers, though constrained decoding should prevent malformed tool calls.
To do
- ~~Implement `append_content()` and `append_reasoning()` semantic actions to populate content/reasoning fields.~~ Removed in favor of SAX parsing.
- ~~Implement `add_tool_call()`, `capture_tool_call_name()`, `capture_tool_call_args()` semantic actions to handle tool calls.~~ Removed in favor of SAX parsing.
- `json-schema-to-grammar` support. The JSON parser will parse any JSON, but the generated GBNF grammar should still be constructed from the user-provided schema.