perf: replace node object for node class #436
Conversation
I fear this wouldn't be easily serializable to JSON to be loaded back in using the
I didn't test the serialization, but you are right, I will add a specific case to handle the
@micheleriva Actually, this is not a bug? Because when we load it back, and then serialize it again, how do you set
I fixed the problem with the serialization. I didn't like the recursion, but at least it's not throwing an error with 1M items.
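A minimal sketch of the serialization round-trip being discussed (the `Node` shape and the `serialize`/`deserialize` helpers here are illustrative assumptions, not the PR's actual code): class instances stringify like plain objects, so the extra work is only on the load side, where plain parsed objects must be rebuilt into class instances.

```javascript
// Hypothetical node class (illustrative shape, not the PR's exact fields).
class Node {
  constructor(key = '', word = '', end = false) {
    this.key = key;
    this.word = word;
    this.end = end;
    this.docs = [];
    this.children = {};
  }
}

// Serializing needs no special handling: JSON.stringify only reads
// enumerable own properties, which class instances expose normally.
function serialize(node) {
  return JSON.stringify(node);
}

// Loading back is where the "specific case" is needed: rebuild Node
// instances recursively from the plain parsed objects.
function deserialize(raw) {
  const plain = typeof raw === 'string' ? JSON.parse(raw) : raw;
  const node = new Node(plain.key, plain.word, plain.end);
  node.docs = plain.docs;
  for (const childKey of Object.keys(plain.children)) {
    node.children[childKey] = deserialize(plain.children[childKey]);
  }
  return node;
}
```

Note that the recursive rebuild is the part that can get expensive (or hit stack limits) on very large trees, which matches the concern about 1M items above.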
Two minor things, but LGTM. @allevo would appreciate your feedback before merging
convertedNode.docs = node.docs;
convertedNode.word = node.word;

for (const childrenKey of Object.keys(node.children)) {
Why not `for (const childrenKey in node.children)`? AFAIK that should be a bit faster.
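For context, a quick sketch of the two iteration styles being compared (variable names are illustrative). The semantic difference matters as much as speed: `for...in` also walks enumerable inherited properties, which is why it often needs a `hasOwnProperty` guard.

```javascript
const children = { a: 1, b: 2 };

// Object.keys: iterates only enumerable *own* properties.
const viaKeys = [];
for (const key of Object.keys(children)) {
  viaKeys.push(key);
}

// for...in: also walks enumerable properties inherited from the
// prototype chain, so a guard is often needed on non-plain objects.
const viaForIn = [];
for (const key in children) {
  if (Object.prototype.hasOwnProperty.call(children, key)) {
    viaForIn.push(key);
  }
}
```

On a plain object literal both produce the same keys; the difference only shows up with a non-trivial prototype chain.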
I don't know exactly if it is slower or faster.
For this benchmark, `for...in` is faster.
For this other one, it is slower.
The only thing I know is that `for...in` usually causes deoptimizations on V8, refs:
In your benchmark:
for (var i = 10000; i > 0; i--) {
  Object.keys(obj).forEach(key => console.log(key));
}
there's also an extra function call as a callback for the `forEach` method. This adds slight overhead, so I don't think the benchmark itself is reliable enough.
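A fairer comparison would time each loop style without the `forEach` callback (and without `console.log`) in the hot path. A rough sketch, assuming Node's global `performance` API; this is not a rigorous benchmark, and results will vary per engine and object shape:

```javascript
// Time Object.keys iteration over `iterations` passes.
function benchKeys(obj, iterations) {
  let count = 0;
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    for (const key of Object.keys(obj)) count++;
  }
  return { count, ms: performance.now() - start };
}

// Time for...in iteration over the same object and pass count.
function benchForIn(obj, iterations) {
  let count = 0;
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    for (const key in obj) count++;
  }
  return { count, ms: performance.now() - start };
}
```

Even then, microbenchmarks like this are easily distorted by JIT warm-up and dead-code elimination, which is why the linked "optimization killers" write-ups are a better guide to what V8 actually deoptimizes.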
https://github.com/zhangchiqing/OptimizationKillers/blob/master/README.md
https://github.com/petkaantonov/bluebird/wiki/Optimization-killers#5-for-in
These links are interesting and I'd rather follow them, lgtm
@@ -502,6 +515,20 @@ export async function load<R = unknown>(raw: R): Promise<Index> {
  fieldLengths,
} = raw as Index

const indexes: Index['indexes'] = {};

for (const prop of Object.keys(rawIndexes)) {
Same here, maybe a `for...in` loop here would help performance.
The performance remains almost the same, but the code will be clearer.
Could you address Michele's suggestions?
LGTM.
Calling `Object.defineProperty` is not expensive, but it's not cheap either (reference). So, to emulate the same behavior and have no performance penalty, we can just change it to a class.
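A sketch of the before/after shape of this change (factory and property names here are illustrative, not the PR's exact code): the old factory paid a per-node `Object.defineProperty` call, while a class defines the same shape once via the constructor, letting V8 share one hidden class across all instances.

```javascript
// Before (sketch): plain object plus a defineProperty call per node.
function createNodeOld(key) {
  const node = { key, children: {} };
  Object.defineProperty(node, 'word', {
    value: '',
    writable: true,
    enumerable: true,
  });
  return node;
}

// After (sketch): a class gives every node the same property layout,
// assigned in one constructor, with no per-instance defineProperty cost.
class Node {
  constructor(key) {
    this.key = key;
    this.word = '';
    this.children = {};
  }
}
```

The observable behavior is the same for ordinary reads and writes; only property descriptors that differ from the defaults (e.g. non-enumerable or non-writable fields) would need `defineProperty`, and this node doesn't use any.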