perf: module token factory fast path #11023
Conversation
```diff
@@ -3,33 +3,51 @@ import { Type } from '@nestjs/common/interfaces/type.interface';
 import { randomStringGenerator } from '@nestjs/common/utils/random-string-generator.util';
 import { isFunction, isSymbol } from '@nestjs/common/utils/shared.utils';
 import stringify from 'fast-safe-stringify';
-import * as hash from 'object-hash';
+import { createHash } from 'crypto';
```
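The import swap above can be sketched as follows. This is an illustrative example, not the actual factory code: `hashString` is a hypothetical helper name, and the idea is simply that a pre-serialized string is hashed with Node's built-in sha256 instead of letting `object-hash` traverse the object.

```typescript
import { createHash } from 'crypto';

// Hypothetical sketch: serialize the token first, then hash the string
// with Node's built-in sha256. `hashString` is an illustrative name only.
function hashString(serialized: string): string {
  return createHash('sha256').update(serialized).digest('hex');
}

// Deterministic: the same serialized token always produces the same digest.
const token = hashString('{"id":"1","module":"CatsModule"}');
console.log(token.length); // 64 (hex characters of a sha256 digest)
```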
Wondering how fast it is compared to @napi-rs/blake-hash 🤔
Also, on a side note, did you have a chance to compare https://github.com/Brooooooklyn/uuid to @lukeed/uuid
(for generating uuids)?
I never tried it, but I ran some benchmarks:
```
uuid v4        x 17,312,526 ops/sec ±1.17% (91 runs sampled)
uuid v5        x    234,243 ops/sec ±1.20% (87 runs sampled)
sha1           x    502,120 ops/sec ±2.31% (79 runs sampled)
napi-rs/blake  x    495,174 ops/sec ±2.27% (71 runs sampled)
nanoid         x  3,418,245 ops/sec ±1.43% (90 runs sampled)
uid            x 48,565,650 ops/sec ±1.07% (91 runs sampled)
lukeed         x  5,012,523 ops/sec ±1.36% (90 runs sampled)
napi-rs/uuid   x  5,499,031 ops/sec ±0.92% (89 runs sampled)
Fastest is uid
```
They don't seem to be faster in our scenario.
Benchmark: perf-nanoid-uuid.benchmark.ts.zip
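For reference, the ops/sec numbers above are Benchmark.js-style output. A much cruder, stdlib-only sketch of the same kind of microbenchmark is shown below; `opsPerSec` is a hypothetical helper, and a real harness additionally does warm-up, multiple samples, and error margins:

```typescript
import { createHash, randomUUID } from 'crypto';

// Hypothetical stand-in for a Benchmark.js suite: run a function in a
// tight loop for a fixed time window and report iterations per second.
function opsPerSec(fn: () => void, windowMs = 100): number {
  let count = 0;
  const end = Date.now() + windowMs;
  while (Date.now() < end) {
    fn();
    count++;
  }
  return (count * 1000) / windowMs;
}

const payload = 'some-module-token';
console.log('uuid v4:', opsPerSec(() => randomUUID()).toFixed(0), 'ops/sec');
console.log(
  'sha1:',
  opsPerSec(() => createHash('sha1').update(payload).digest('hex')).toFixed(0),
  'ops/sec',
);
```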
Force-pushed from 1951f31 to 2d7703a
Hey, reading this article I found a hash function that is faster than using
The output of
The profiler information looks like:
Baseline:
With new Hash:
Compared to the
I pushed the changes, so try it and see if it's okay to introduce another dependency.
LGTM
upd: everything is fine, I found the problem on my side

Looks like it breaks the runtime in Alpine with this error:
What image do you use, @silentroach?
I use the official
Looks like when I install
Seems like the package-lock is now different for different platforms; it can be a breaking change for somebody.
@silentroach
My fault, sorry. After searching the commit history I found that the package-lock was corrupted during conflict resolution in a different branch, and all optional dependencies other than the one for my platform were removed.
PR Checklist
Please check if your PR fulfills the following requirements:
PR Type
What kind of change does this PR introduce?
What is the current behavior?
This PR aims to improve the speed of `ModuleTokenFactory#create`. Today, here's the profiler information of the initialization (when we initialize 10k times):

The main issues are with methods that belong to `object-hash`.

Issue Number: #10844
What is the new behavior?
Now, I made two optimizations: I stopped using `object-hash` and perform what it would do manually.

The first optimization is the simpler one: we don't need to create an object and pass it to `object-hash`; it is way simpler to just get the main information and then generate a hash from it.

The second optimization was based on this: the usage of `object-hash` is redundant. We don't need a library to traverse all the properties of the object and then generate a hash; we just need a hash, so I serialize the entire token and then create a hash from it.

The results are these:
When we don't have metadata, the improvement is 11x; with metadata, the improvement is almost 5x.
Now, the profiler information looks like:
We have more room to optimize, like changing how the hash is generated, but for now I've kept it simple.
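Both optimizations together can be sketched roughly like this. This is a hypothetical, simplified illustration, not the real `ModuleTokenFactory#create` (which also caches tokens and serializes with `fast-safe-stringify`): the key string is built directly, skipping serialization entirely on the no-metadata fast path, and is hashed with Node's sha256 instead of being traversed by `object-hash`.

```typescript
import { createHash } from 'crypto';

// Hypothetical sketch of the two optimizations: build the key string
// directly (the fast path skips serialization when there is no dynamic
// metadata), then hash it with sha256 instead of using object-hash.
function createToken(
  moduleId: string,
  moduleName: string,
  dynamicModuleMetadata?: Record<string, unknown>,
): string {
  const key = dynamicModuleMetadata
    ? `${moduleId}_${moduleName}_${JSON.stringify(dynamicModuleMetadata)}`
    : `${moduleId}_${moduleName}`; // fast path: no object, no serialization
  return createHash('sha256').update(key).digest('hex');
}

// Same inputs yield the same token; metadata changes the token.
console.log(createToken('1', 'CatsModule'));
console.log(createToken('1', 'CatsModule', { global: true }));
```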
Does this PR introduce a breaking change?
Other information