Performance improvements #21
@axefrog You mentioned before on the Cherow repo that it should perhaps be easy to extend the parser? Well. I have to admit now, after one day in h***, that changing any bit masks in this parser causes unforeseen side effects resulting in at least 2000 broken edge case tests. It took me a day to track down and fix it :( :( And when I finally fixed it, well, 450 new broken tests :( Anyway. The base layer is laid for #21, but the language itself has limitations, so I'm not sure how to fix that either.
Due to limitations in the JS language itself, it seems almost impossible to improve the performance further. So I'm focusing on reducing memory usage instead, but I will keep trying to improve performance when I have time. It's mainly on larger libraries that we can see the real difference.

"Normal parsing state": With locations and lexical scoping enabled:

Regarding comparison with Acorn: that's another story. It's not even fair to compare these two libraries if you are parsing a single file, parsing on handheld devices, or parsing larger libraries. The performance depends on your use case and your computer / device. In the benchmark shown below I have enabled all options on Meriyah, including lexical scoping and location tracking. Normally the end user will not use all of these options :) This benchmark shows the perf difference between Meriyah and Acorn in the most common cases. And now we can see that Meriyah is 2x faster than Acorn.

The Cherow library is very fast, so it's natural to do a comparison against it. And when it comes to Babel, well, no need to run a benchmark. 3x or 4x faster, I guess.

@axefrog @nchanged @aladdin-add Are we good with these numbers, so I can focus on memory usage and reducing the code size instead? |
Honestly I've never had any complaints about speed. Any improvements are just the cherry on top, as far as I am concerned. |
Also, I can't seem to find it, but I think you asked me about extensibility somewhere and I can't remember if I replied. My needs in that department have changed. Extensibility is no longer important to me, so it's not something I'll be asking about in the future. |
This parser can't be made extensible. It's developed purely for performance reasons :) However, my other project, the TypeScript parser, is plugin friendly because the TS language changes all the time. It also parses JS. The next release of Meriyah will include changes for reduced memory usage and improved performance when parsing larger libraries, 1 MB or larger. It can already handle 10 MB files just fine. Do a comparison with my REPL & Meriyah vs ASTExplorer with any parser there, and you will notice that any file larger than 3 MB will freeze AST Explorer. Meriyah will keep going strong :) |
@KFlash What about adding options that change parsing tactics? Would that help? Off the top of my head, the best use case is probably when a user knows there are hot paths in the code; having an option for these would optimise the parser for those specific cases, speeding up parsing overall. For example, if we take a minified file that's probably under 500kb, we know that most likely it's been minified to ensure there are no newlines in the executable code and that the local variables and params have been renamed to mostly 1 or 2 character names. With an option (e.g. 'inputIsMinified') enabled, and if the parser detects the input's length is within an acceptable range, it can change the parsing logic it uses to factor in these expectations. Just a suggestion, not sure if it will actually help the overall goal 😄 |
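A minimal sketch of what the suggested option could look like. Everything here is hypothetical: `ParserOptions`, `chooseScanner`, the scanner names, and the size threshold are illustration only and not part of Meriyah's actual API; only the `inputIsMinified` option name comes from the suggestion above.

```typescript
// Hypothetical sketch only — these names do not exist in Meriyah.
interface ParserOptions {
  inputIsMinified?: boolean;
}

function chooseScanner(source: string, options: ParserOptions): string {
  // Only trust the hint for inputs in a plausible size range for a
  // single minified bundle (assumed threshold of 500 KB, per the
  // comment above).
  const MAX_MINIFIED_SIZE = 500 * 1024;
  if (options.inputIsMinified && source.length <= MAX_MINIFIED_SIZE) {
    // Fast path: assume no newlines outside string literals and mostly
    // 1-2 character identifiers, so line tracking and identifier
    // scanning can be simplified.
    return "minified-scanner";
  }
  return "default-scanner";
}
```

The point of the size guard is that the hint is only safe to act on when the input actually looks like a single minified bundle; otherwise the parser silently falls back to the default path.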
@futagoza Interesting idea :) Do you want to take a stab at it? A dog found me too delicious, but lucky for me I was not as tasty as it hoped, so I'm undergoing anti-rabies treatment now and not 100% in shape, plus working on some bit mask improvements.
I would actually love to take a stab at it, but first I'll try to dig my way through the code to understand it better (so far I've only looked at what you've done with bitmasks, because I've never actually used them before 😆). Are you okay with the option name Then again, if it goes down the road of optimising for other hot-path cases, a better option would be: |
The bitmask logic is straightforward and mostly used for destructuring and assignment. The only things to be aware of here are the binding origin and binding type. These have changed slightly over the last few days, so I will push that code shortly to avoid conflicts. The rest of the bitmasks are either options
Illustration:

```ts
// Bit flags for hot-path parsing modes
const enum HotPathKinds {
  None     = 0,
  Minified = 1 << 0,
  NoWS     = 1 << 1,
}

// Passed down as a func arg for perf reasons?
let hotpaths: HotPathKinds = HotPathKinds.None;

// Depends on options
hotpaths = HotPathKinds.Minified | HotPathKinds.NoWS;

// Validations
if (hotpaths & HotPathKinds.Minified) {
  // ... take the minified path
}
``` |
Added some changes to improve parser performance. Part of #21
This benchmark shows the result between 1.3 and 1.4. I can't improve the performance more than this. The first file has a size of 8 MB. I can't do a comparison with other parsers now because they can't handle such huge files; my browser freezes. V. 1.4 isn't stable enough to be published yet. |
@futagoza If you plan to make any changes, work against the v. 1.4 branch |
v. 1.4 has landed, and the live benchmark can be run from here. I think it's very obvious that Acorn has a performance issue 👍 Warm "JIT": Cold "JIT": |
Right now Meriyah is twice as fast as Acorn and uses less memory. The next goal is to make this parser 3x faster than Acorn, which means a lot more bit masks and optimizing the hot paths.