
perf: improve typescript type checking performance #10515

Merged: 1 commit into Automattic:master on Jul 30, 2021

Conversation

@andreialecu (Contributor) commented Jul 30, 2021

Summary

Related #10349

You can use the repro at https://github.com/andreialecu/repro-mongoose-slow-ts

Output of yarn tsc --extendedDiagnostics.

Before:

Identifiers:                 82983
Symbols:                    128042
Types:                       42997
Instantiations:             291774
Memory used:               148795K
Assignability cache size:    13123
Identity cache size:             3
Subtype cache size:              0
Strict subtype cache size:       0
I/O Read time:               0.02s
Parse time:                  0.39s
ResolveModule time:          0.03s
ResolveTypeReference time:   0.00s
Program time:                0.45s
Bind time:                   0.19s
Check time:                  1.21s
transformTime time:          0.00s
commentTime time:            0.00s
I/O Write time:              0.00s
printTime time:              0.01s
Emit time:                   0.01s
Total time:                  1.87s
✨  Done in 2.37s.

After:

Identifiers:                 82983
Symbols:                     98359
Types:                       23540
Instantiations:             153096
Memory used:               133310K
Assignability cache size:     7426
Identity cache size:             1
Subtype cache size:              0
Strict subtype cache size:       0
I/O Read time:               0.02s
Parse time:                  0.38s
ResolveModule time:          0.02s
ResolveTypeReference time:   0.00s
Program time:                0.44s
Bind time:                   0.19s
Check time:                  0.76s
transformTime time:          0.00s
commentTime time:            0.00s
I/O Write time:              0.00s
printTime time:              0.01s
Emit time:                   0.01s
Total time:                  1.40s
✨  Done in 1.62s.

Notice a huge decrease in types and instantiations, and also in Check time.

I'm not sure if this is the right change, but there was some overlap: _AllowStringsForIds, as used by DocumentDefinition, was doing most of the same work that _FilterQuery already does. That seems unnecessary, and it looks like DocumentDefinition<T> could be applied to just a specific part of the type instead.

This doesn't seem to break anything in my code base, and I got most of the speed back.
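
To make the overlap concrete, here is a minimal sketch with stand-in types (illustration only; the real helpers in mongoose's index.d.ts are more involved, and the names may not line up exactly with the actual diff):

// Stand-in declarations so the sketch compiles on its own.
type ObjectId = { toHexString(): string };
type Condition<V> = V | { $eq?: V };
type _AllowStringsForIds<T> = {
  [K in keyof T]: T[K] extends ObjectId ? T[K] | string : T[K];
};

// Before (roughly): the filter mapped over a DocumentDefinition-style type
// that already applies _AllowStringsForIds<T>, so the ObjectId widening was
// instantiated a second time for every queried document type.
type FilterQueryBefore<T> = {
  [P in keyof _AllowStringsForIds<T>]?: Condition<_AllowStringsForIds<T>[P]>;
};

// After (roughly): widen ObjectId-valued keys once, directly where needed.
type FilterQueryAfter<T> = {
  [P in keyof T]?: Condition<T[P] extends ObjectId ? T[P] | string : T[P]>;
};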

@andreialecu (Contributor, Author)

I have a related question.

Is it actually ok to allow strings for object ids in query filters? In my experience trying to query something as a string when it's stored as an ObjectId doesn't return any results. I always had to compare it with someId: new ObjectId(somestring).

Are the types actually correct in trying to allow strings for object ids for filters here?
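
For reference, these are the two filter styles being compared, using a hypothetical Person model (a sketch for illustration only; whether the string form is acceptable at runtime is answered further down in this thread):

import { Schema, model, Types } from "mongoose";

// Hypothetical model, purely for illustration.
const Person = model("Person", new Schema({ friendId: Schema.Types.ObjectId }));

async function findFriend(someId: string) {
  // Filter with the raw string, which the typings under discussion allow.
  const byString = await Person.findOne({ friendId: someId });

  // Filter with an explicit ObjectId, as described in the comment above.
  const byObjectId = await Person.findOne({ friendId: new Types.ObjectId(someId) });

  return { byString, byObjectId };
}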

@ghost commented Jul 30, 2021

This PR may fix some of the slowdown, but not for systems with only a half-decent specification.

My system specifications are:

  • i5 2500k
  • 8 GB ram
  • gt 1030

Before applying the changes in PR #10515:

Files:                          87
Lines of Library:            28119
Lines of Definitions:        25937
Lines of TypeScript:            19
Lines of JavaScript:             0
Lines of JSON:                   0
Lines of Other:                  0
Nodes of Library:           119708
Nodes of Definitions:       108811
Nodes of TypeScript:            65
Nodes of JavaScript:             0
Nodes of JSON:                   0
Nodes of Other:                  0
Identifiers:                 81545
Symbols:                    128399
Types:                       43073
Instantiations:             291774
Memory used:               149894K
Assignability cache size:    13123
Identity cache size:             3
Subtype cache size:              0
Strict subtype cache size:       0
I/O Read time:               0.02s
Parse time:                  0.64s
ResolveModule time:          0.07s
ResolveTypeReference time:   0.01s
Program time:                0.76s
Bind time:                   0.36s
Check time:                  1.87s
transformTime time:          0.01s
commentTime time:            0.00s
I/O Write time:              0.00s
printTime time:              0.02s
Emit time:                   0.02s
Total time:                  3.02s

After applying the changes in PR #10515:

Files:                          87
Lines of Library:            28119
Lines of Definitions:        25937
Lines of TypeScript:            19
Lines of JavaScript:             0
Lines of JSON:                   0
Lines of Other:                  0
Nodes of Library:           119708
Nodes of Definitions:       108811
Nodes of TypeScript:            65
Nodes of JavaScript:             0
Nodes of JSON:                   0
Nodes of Other:                  0
Identifiers:                 81545
Symbols:                     98716
Types:                       23615
Instantiations:             153096
Memory used:               125551K
Assignability cache size:     7426
Identity cache size:             1
Subtype cache size:              0
Strict subtype cache size:       0
I/O Read time:               0.02s
Parse time:                  0.63s
ResolveModule time:          0.06s
ResolveTypeReference time:   0.01s
Program time:                0.74s
Bind time:                   0.36s
Check time:                  1.40s
transformTime time:          0.01s
commentTime time:            0.00s
I/O Write time:              0.00s
printTime time:              0.02s
Emit time:                   0.02s
Total time:                  2.51s

This PR gives a bit more of a performance boost, but it's still not enough.

@andreialecu (Contributor, Author)

@perenSHERE the check time itself isn't necessarily conclusive about VS Code performance, but it's a decent indicator of whether changes are for the better or worse.

Did you try this in VS Code?

@ghost commented Jul 30, 2021

> @perenSHERE the check time itself isn't necessarily conclusive about VS Code performance, but it's a decent indicator of whether changes are for the better or worse.
>
> Did you try this in VS Code?

Yes, I tried this in VS Code:

Version: 1.58.2 (user setup)
Commit: c3f126316369cd610563c75b1b1725e0679adfb3
Date: 2021-07-14T22:10:15.214Z
Electron: 12.0.13
Chrome: 89.0.4389.128
Node.js: 14.16.0
V8: 8.9.255.25-electron.0
OS: Windows_NT x64 10.0.19043

The IntelliSense performance is better than before.

Some major changes are still needed in the Schema class. This is not a solution, but commenting out the whole Schema class in index.d.ts makes a huge difference.

After commenting out the Schema class:

Files:                         87
Lines of Library:           28119
Lines of Definitions:       25937
Lines of TypeScript:           19
Lines of JavaScript:            0
Lines of JSON:                  0
Lines of Other:                 0
Nodes of Library:          119708
Nodes of Definitions:      107562
Nodes of TypeScript:           65
Nodes of JavaScript:            0
Nodes of JSON:                  0
Nodes of Other:                 0
Identifiers:                81072
Symbols:                    51733
Types:                         94
Instantiations:                 0
Memory used:               87973K
Assignability cache size:       0
Identity cache size:            0
Subtype cache size:             0
Strict subtype cache size:      0
I/O Read time:              0.03s
Parse time:                 0.90s
ResolveTypeReference time:  0.01s
Program time:               1.05s
Bind time:                  0.47s
Check time:                 0.03s
transformTime time:         0.02s
commentTime time:           0.00s
I/O Write time:             0.00s
printTime time:             0.04s
Emit time:                  0.04s
Total time:                 1.58s

@andreialecu (Contributor, Author)

If I leave all the code in Schema but remove all the code from the Model interface, the check time drops to 0.05s. If I leave only one of the update* lines, like the one below, it shoots back up to 1+ seconds:

updateOne(filter?: FilterQuery<T>, update?: UpdateQuery<T> | UpdateWithAggregationPipeline, options?: QueryOptions | null, callback?: (err: CallbackError, res: any) => void): QueryWithHelpers<UpdateWriteOpResult, EnforceDocument<T, TMethods>, TQueryHelpers, T>;

If I remove its return type, the check time goes down to 0.2s.


Alternatively, if everything is kept but the code of Query is commented out, the check time goes down to 0.34s.

So the problem isn't in the Schema type, but somewhere deeper down.
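
For reference, the probe used for that experiment looked roughly like the following (a sketch with stand-in parameter types so it compiles on its own; the only change that matters is dropping the QueryWithHelpers<...> return type):

// Experimental probe only, not a proposed change. The stand-in types below
// take the place of the real ones from mongoose's index.d.ts.
type FilterQuery<T> = Partial<T>;
type UpdateQuery<T> = Partial<T>;
type UpdateWithAggregationPipeline = Record<string, unknown>[];
type QueryOptions = Record<string, unknown>;
type CallbackError = Error | null;

interface ModelProbe<T> {
  // Same parameter list as the signature quoted above, but with the
  // expensive QueryWithHelpers<...> return type replaced by a plain value.
  updateOne(
    filter?: FilterQuery<T>,
    update?: UpdateQuery<T> | UpdateWithAggregationPipeline,
    options?: QueryOptions | null,
    callback?: (err: CallbackError, res: unknown) => void
  ): unknown;
}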

@andreialecu (Contributor, Author)

Update: Changing the definition of Schema to this:

class Schema<DocType = Document, M = Model<any, any, any>, SchemaDefinitionType = undefined, TInstanceMethods = any> extends events.EventEmitter {

Results in this trace.

Instantiations:              1029
Memory used:               97021K
Assignability cache size:     161
Identity cache size:            0
Subtype cache size:             0
Strict subtype cache size:      0
I/O Read time:              0.01s
Parse time:                 0.38s
ResolveModule time:         0.02s
ResolveTypeReference time:  0.00s
Program time:               0.43s
Bind time:                  0.20s
Check time:                 0.04s

It appears that one potential problem is with the extends Model<DocType, any, any> check. ExtractMethods<T> also has one.
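
Here is a self-contained toy example of the pattern in question (not mongoose's actual types): a type parameter whose constraint references a large recursive interface forces the compiler to instantiate and relate that interface every time the class appears, while an unconstrained parameter with the same default does not.

// Toy types for illustration only; mongoose's Model/Query graph is far larger.
interface ToyModel<T, TQueryHelpers, TMethods> {
  find(filter: Partial<T>): ToyModel<T, TQueryHelpers, TMethods>;
  lean(): ToyModel<{ [K in keyof T]: T[K] }, TQueryHelpers, TMethods>;
}

// Paraphrase of the original shape: the `extends ToyModel<DocType, any, any>`
// constraint is what triggers the expensive relationship check.
class ConstrainedSchema<
  DocType,
  M extends ToyModel<DocType, any, any> = ToyModel<DocType, any, any>
> {}

// The experiment from the comment above: keep the default, drop the constraint.
class RelaxedSchema<DocType = unknown, M = ToyModel<any, any, any>> {}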

@andreialecu (Contributor, Author)

I have opened microsoft/TypeScript#45249; perhaps someone there can offer some guidance.

@ghost commented Jul 30, 2021

> Update: Changing the definition of Schema to this:
>
> class Schema<DocType = Document, M = Model<any, any, any>, SchemaDefinitionType = undefined, TInstanceMethods = any> extends events.EventEmitter {}
>
> It appears that one potential problem is with the `extends Model<DocType, any, any>` check. `ExtractMethods<T>` also has one.

Committing this to the branch definitely dropped the total time decently for me.

I am not that experienced with type declarations, but after reading some of the code, it looks like the Model interface could also be optimized decently:

interface Model<T, TQueryHelpers = {}, TMethods = {}> extends NodeJS.EventEmitter, AcceptsDiscriminator {}

@ghost commented Jul 30, 2021

Just to see how optimized the types can get, I deleted every package I had and kept the bare minimum needed to launch the project. The only package left was typescript.

Files:                          6
Lines of Library:           24662
Lines of Definitions:           0
Lines of TypeScript:           17
Lines of JavaScript:            0
Lines of JSON:                  0
Lines of Other:                 0
Nodes of Library:          110877
Nodes of Definitions:           0
Nodes of TypeScript:            2
Nodes of JavaScript:            0
Nodes of JSON:                  0
Nodes of Other:                 0
Identifiers:                40314
Symbols:                    24300
Types:                         78
Instantiations:                 0
Memory used:               52110K
Assignability cache size:       0
Identity cache size:            0
Subtype cache size:             0
Strict subtype cache size:      0
I/O Read time:              0.01s
Parse time:                 0.54s
Program time:               0.56s
Bind time:                  0.36s
Check time:                 0.00s
transformTime time:         0.02s
commentTime time:           0.00s
I/O Write time:             0.00s
printTime time:             0.03s
Emit time:                  0.03s
Total time:                 0.70s

My total time dropped to 0.70s with only typescript installed. With the mongoose package added, we can assume the total time would be at least around 1.50s (note: it depends heavily on system specifications).

@vkarpov15 (Collaborator)

Re: "Is it actually ok to allow strings for object ids in query filters?", yes it is.

@vkarpov15 (Collaborator)

I suspect the solution will be removing LeanDocument altogether. Can you try that out and see if it helps?

@vkarpov15 (Collaborator) left a review comment:


Strange that this improves perf so much, but it passes our tests, so it should be fine 👍

@vkarpov15 added this to the 5.13.5 milestone on Jul 30, 2021
@vkarpov15 merged commit d88f981 into Automattic:master on Jul 30, 2021
@andreialecu (Contributor, Author) commented Jul 30, 2021

> I suspect the solution will be removing LeanDocument altogether. Can you try that out and see if it helps?

@vkarpov15 Changing it to `export type LeanDocument<T> = T;` does help a bit, but not too much; it's about a 10% performance improvement in check time.

I think the type is valuable though, so removing it may not be a good idea.

The underlying problem is the deep recursion the TypeScript compiler goes into to verify the `extends Model<...>` constraint.

If a solution is found there, then LeanDocument is likely fine.
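
For reference, the pass-through probe mentioned above (a measurement experiment only, not a proposed fix, since it discards the unwrapping that LeanDocument normally performs):

// Aliasing LeanDocument to the identity type recovered only ~10% of the
// check time, which points the blame at the recursive extends Model<...>
// verification rather than at LeanDocument itself.
export type LeanDocument<T> = T;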

A very simple way to play with this is via: https://github.com/andreialecu/typescript-performance-extends

Just edit the mongoose.d.ts file in the types directory, then run yarn tsc --extendedDiagnostics and compare the output to previous runs.

@ghost commented Jul 31, 2021

Some more changes are still needed, so a new PR with further fixes to stop the recursion will be necessary.

vkarpov15 added a commit that referenced this pull request on Aug 11, 2021:
…chema for a ~50% perf improvement when compiling TypeScript and using intellisense

Re: #10349
Fix #10536
Re: #10515