deepStrictEqual bad performance on larger dataset #57242
Comments
It seems it's actually the error creation that is now slower. That is definitely the case with the Myers diff algorithm now in place instead of the linear algorithm used before.
puskin94 added a commit to puskin94/node that referenced this issue (Mar 2, 2025)
puskin94 added a commit to puskin94/node that referenced this issue (Mar 3, 2025)
Version
22.12.0+
Platform
Subsystem
assert
What steps will reproduce the bug?
Comparing large objects with deepStrictEqual is slow and got worse after version 22.11: version 22.12.0 is ~3x slower and 22.14.0 is ~8x slower. The larger the object, the more the time increases, roughly exponentially. The same goes for memory; larger objects easily exceed 4 GB.
How often does it reproduce? Is there a required condition?
It becomes noticeable on larger objects, when scripts run out of memory. It happens only for versions >=22.12.0.
What is the expected behavior? Why is that the expected behavior?
It should perform at least as well as version 22.11.0.
What do you see instead?
Longer execution time and larger memory consumption.