cpu/memory performance regression with 1.1.0 #9
I was able to run
At first glance that makes sense to me: the creation of lots of temporary strings as it builds up the final JSON output, versus the single-pass approach. For our use case I'd lean towards reverting that, but I'm not sure what the motivation for the change was.
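The allocation pattern being discussed can be sketched roughly like this. Neither function is json-stable-stringify's actual code (the names and structure here are invented for illustration); they just contrast repeated string concatenation against collecting parts and joining once:

```javascript
// Illustrative sketch only -- invented names, not the library's real code.
// Both functions produce stable (key-sorted) JSON; they differ in how the
// output string is assembled.

// Strategy A: build the result with repeated concatenation. Each `out +=`
// can create a short-lived intermediate string, which adds GC pressure on
// multi-hundred-megabyte outputs.
function stringifyConcat(value) {
  if (Array.isArray(value)) {
    let out = '[';
    for (let i = 0; i < value.length; i++) {
      if (i > 0) out += ',';
      out += stringifyConcat(value[i]);
    }
    return out + ']';
  }
  if (value !== null && typeof value === 'object') {
    let out = '{';
    const keys = Object.keys(value).sort(); // stable key order
    for (let i = 0; i < keys.length; i++) {
      if (i > 0) out += ',';
      out += JSON.stringify(keys[i]) + ':' + stringifyConcat(value[keys[i]]);
    }
    return out + '}';
  }
  return JSON.stringify(value);
}

// Strategy B: accumulate the pieces and join exactly once at the end.
function stringifyJoin(value) {
  const parts = [];
  (function walk(v) {
    if (Array.isArray(v)) {
      parts.push('[');
      v.forEach((item, i) => {
        if (i > 0) parts.push(',');
        walk(item);
      });
      parts.push(']');
    } else if (v !== null && typeof v === 'object') {
      parts.push('{');
      Object.keys(v).sort().forEach((k, i) => {
        if (i > 0) parts.push(',');
        parts.push(JSON.stringify(k), ':');
        walk(v[k]);
      });
      parts.push('}');
    } else {
      parts.push(JSON.stringify(v));
    }
  })(value);
  return parts.join('');
}
```

Worth noting that V8 often optimizes string `+=` into rope-like internal structures, so which strategy wins is workload-dependent; the measurements in this thread are the real evidence for this particular library.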
Interesting, that makes sense but I wouldn't have thought of it. I assume you can confirm that v1.1.0, with that commit reverted, fixes the problem? If so, I'll put up a PR (that's not just a straight revert), and you can try that one before I merge and release it.
Yep, that seems to work.
Awesome, even faster than v1.0.2 :-) I'll get a fix for this out tonight.
ty!
@mdouglass if you're able to test out #10, just to make absolutely sure it'll solve the problem, then I can merge and release it :-)
lgtm, thank you again 👍
We upgraded our project, which deals with large JSON files (~250 MiB), to 1.1.0. With 1.1.0 we are seeing two performance regressions, depending on whether a max-old-space-size setting is in place.
- node.js v20.7.0
- json-stable-stringify 1.0.2 vs 1.1.0
- index.mjs
- test script
- output of running test.sh
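For reference, the two configurations being compared (with and without an old-space cap) would be run along these lines; the heap size and file names below are examples, not values taken from this issue:

```shell
# Hypothetical invocations -- the 4096 MiB cap and file names are examples only.
node index.mjs big-input.json                            # default heap limits
node --max-old-space-size=4096 index.mjs big-input.json  # capped old space
```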
The JSON input file I used for this run is 267,437,008 bytes. I'm not sure whether you'll need it, or whether any sample file in this size range would demonstrate the problem.