If we know which characters can start each tokenType match, we can drastically reduce the number of regExps we need to attempt after every token has been matched.
This can be done by building a map from each possible start character to the array of tokenTypes allowed to begin with it.
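A minimal sketch of the idea (not Chevrotain's actual implementation; the `startChars` property and the `nextToken` helper are illustrative assumptions): a map is built from each first charCode to the token types that can start with it, so at every position the lexer only attempts the candidate patterns instead of all of them.

```javascript
// Hypothetical token-type definitions; `startChars` lists the characters
// that can begin a match for each pattern.
const tokenTypes = [
  { name: "Integer", pattern: /^\d+/, startChars: "0123456789" },
  { name: "Plus", pattern: /^\+/, startChars: "+" },
  {
    name: "Identifier",
    pattern: /^[a-zA-Z]\w*/,
    startChars: "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
  }
];

// Build the optimization map: charCode -> array of candidate tokenTypes.
const candidatesByCharCode = new Map();
for (const tokType of tokenTypes) {
  for (const ch of tokType.startChars) {
    const code = ch.charCodeAt(0);
    if (!candidatesByCharCode.has(code)) {
      candidatesByCharCode.set(code, []);
    }
    candidatesByCharCode.get(code).push(tokType);
  }
}

// Only the candidates for the next charCode are attempted,
// instead of iterating over every tokenType's regExp.
function nextToken(text, offset) {
  const candidates = candidatesByCharCode.get(text.charCodeAt(offset)) || [];
  for (const tokType of candidates) {
    const match = tokType.pattern.exec(text.slice(offset));
    if (match !== null) {
      return { image: match[0], type: tokType.name };
    }
  }
  return null;
}

console.log(nextToken("123+abc", 0)); // matches the Integer token
```

With many tokenTypes, most charCodes map to only one or two candidates, which is why the speedup grows with grammar size.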
bd82 changed the title from *Investigate Lexer optimizations when there are many tokenTypes.* to *Lexer optimizations using the next charCode in the remaining text.* on Apr 13, 2018
The full-flow benchmark has improved by 25% for a simple grammar (JSON),
60% for a medium-sized grammar (CSS), and 100% for a grammar with a very large lexer (JDL).
In general, the more tokenTypes there are, the greater the benefit.
When can optimization be applied?
In general, the Lexer always attempts to apply the optimization.
However, in some cases it cannot, and will silently fall back to the old (slower) behavior.
See section 2 of the "How do I maximize my parser's performance?" FAQ for details on how to ensure a Lexer enjoys these optimizations.
How can the optimization be disabled?
As with any new logic there may be bugs. If there is a suspicion that the
optimization has caused incorrect behavior, simply enable "safeMode" to disable it and re-check.
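This might look roughly like the following (a sketch, assuming the flag is passed via the Lexer's config object and that `allTokens` is your array of token types):

```javascript
const { Lexer } = require("chevrotain");

// safeMode disables the charCode-based optimization,
// reverting to the old behavior of attempting every pattern.
const myLexer = new Lexer(allTokens, { safeMode: true });
```

If the incorrect behavior disappears in safeMode, that points at a bug in the optimization and is worth reporting.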