
Micro Optimization - Use ES2015 Maps for Lookahead function Caching. #812

Closed
bd82 opened this issue Sep 16, 2018 · 1 comment

bd82 commented Sep 16, 2018

Accessing the dictionary that caches the lookahead functions is a hotspot.
For example, a past optimization that created the lookahead keys using bit operators
yielded a 25% overall performance boost.

Perhaps using ES6 Maps could provide a similar performance boost,
as they did in Jest.
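The two caching approaches under discussion can be sketched roughly as follows. This is an illustrative sketch only, not Chevrotain's actual implementation; the factory names and the bit-packed key are assumptions for the example.

```javascript
// Plain-object cache: property keys are coerced to strings on every
// access, so numeric lookahead keys pay a conversion cost per lookup.
function makeObjectCache() {
  const cache = Object.create(null); // no prototype, cleaner dictionary
  return {
    get: (key) => cache[key],
    set: (key, value) => { cache[key] = value; }
  };
}

// ES2015 Map cache: numeric keys are used directly with no string
// coercion, which is what makes Map attractive for hot numeric lookups.
function makeMapCache() {
  const cache = new Map();
  return {
    get: (key) => cache.get(key),
    set: (key, value) => { cache.set(key, value); }
  };
}

// Usage sketch: a bit-packed numeric key, in the spirit of the earlier
// bit-operator key optimization mentioned above.
const laCache = makeMapCache();
const key = (1 << 8) | 3;
if (laCache.get(key) === undefined) {
  laCache.set(key, () => "lookahead logic would go here");
}
```

Both caches expose the same get/set surface, so swapping one for the other is a localized change.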


bd82 commented Sep 16, 2018

Looks like there is a smaller performance boost (6-9%) for the parsing flow (excluding the lexing flow)
in the more complex grammar samples, with a similar regression for a very simple grammar (JSON).

(screenshot: benchmark results comparison, Sep 16 2018)

This is probably a worthwhile trade-off.
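A rough way to reproduce this kind of object-vs-Map comparison is a micro-benchmark like the one below. This is an illustrative sketch, not the benchmark actually used in this issue, and absolute numbers will vary by engine and key distribution.

```javascript
// Micro-benchmark sketch: repeated lookups of numeric keys against a
// plain object and against a Map. Illustrative only.
function bench(label, get, iterations) {
  const start = Date.now();
  let sink = 0; // accumulate results so the loop is not optimized away
  for (let i = 0; i < iterations; i++) {
    sink += get(i % 1000) ? 1 : 0;
  }
  console.log(`${label}: ${Date.now() - start}ms (sink=${sink})`);
}

// Populate both caches with identical numeric keys and values.
const obj = Object.create(null);
const map = new Map();
for (let k = 0; k < 1000; k++) {
  obj[k] = k;
  map.set(k, k);
}

const N = 5000000;
bench("object", (k) => obj[k], N);
bench("map", (k) => map.get(k), N);
```

Note that a micro-benchmark like this only measures raw lookup speed; the 6-9% figure above came from the full parsing flow, where cache access is one cost among many.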

@bd82 bd82 changed the title Investigate using ES6 "Map" instead of Object Micro Optimization - Use ES2015 Maps for Lookahead function Caching. Sep 17, 2018
@bd82 bd82 closed this as completed in #813 Sep 17, 2018
bd82 added a commit that referenced this issue Sep 17, 2018
* Use ES6 Map instead of a plain object to optimize performance.

Fixes #812.

* Upgrade to Safari 11 browser testing.
And use newer macos versions on saucelabs.