
Conversation

@ahejlsberg (Member)

This PR fixes the performance issue demonstrated by #2454. When checking the code in that repro, the compiler repeatedly computes keyof T for the same large object types. We previously didn't cache keyof T computations, which led to excessive duplicated work rebuilding the same large union types. We now cache these computations. The performance improvement is dramatic for the repro, with check times dropping from around 8 seconds to 0.35 seconds.

Fixes #2454.

@ahejlsberg (Member Author)

We might consider back-porting this to 6.0.

Copilot AI (Contributor) left a comment

Pull request overview

This PR adds caching to the getLiteralTypeFromProperties function to dramatically improve performance when computing keyof T for large object types. The issue manifested when the compiler repeatedly computed the same large union types without caching, leading to excessive duplicated work. The fix reduces check times from approximately 8 seconds to 0.35 seconds for the reproduction case in issue #2454.

Changes:

  • Added PropertiesTypesKey struct to serve as cache key with type ID, include flags, and origin flag
  • Added propertiesTypes map to Checker to cache computed property types
  • Modified getLiteralTypeFromProperties to check the cache before computing and store the result afterward (sketched below)
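
As an illustration of that shape, here is a minimal, self-contained Go sketch of the memoization pattern. The PropertiesTypesKey and propertiesTypes names follow the summary above, but the field types, the Type/Checker stand-ins, and the computeLiteralTypeFromProperties helper are assumptions made for the sketch, not the actual typescript-go code.

```go
package main

import "fmt"

// Simplified stand-ins for checker types; the real typescript-go definitions differ.
type TypeFlags uint32

type Type struct {
	id   uint32
	name string
}

// PropertiesTypesKey mirrors the cache key described above: the object type's id,
// the include flags, and whether origin types are included. Field names are assumed.
type PropertiesTypesKey struct {
	typeID        uint32
	include       TypeFlags
	includeOrigin bool
}

type Checker struct {
	propertiesTypes map[PropertiesTypesKey]*Type
	nextTypeID      uint32
}

// getLiteralTypeFromProperties returns the cached keyof union when one exists for
// this (type, flags, origin) combination, and otherwise computes and stores it.
func (c *Checker) getLiteralTypeFromProperties(t *Type, include TypeFlags, includeOrigin bool) *Type {
	key := PropertiesTypesKey{typeID: t.id, include: include, includeOrigin: includeOrigin}
	if cached, ok := c.propertiesTypes[key]; ok {
		return cached // cache hit: skip rebuilding the large union type
	}
	result := c.computeLiteralTypeFromProperties(t, include, includeOrigin)
	c.propertiesTypes[key] = result
	return result
}

// computeLiteralTypeFromProperties is a placeholder for the original, uncached logic
// that enumerates the type's properties and builds the union of their literal types.
func (c *Checker) computeLiteralTypeFromProperties(t *Type, include TypeFlags, includeOrigin bool) *Type {
	c.nextTypeID++
	return &Type{id: c.nextTypeID, name: "keyof " + t.name}
}

func main() {
	c := &Checker{propertiesTypes: make(map[PropertiesTypesKey]*Type), nextTypeID: 1000}
	obj := &Type{id: 1, name: "BigObject"}
	first := c.getLiteralTypeFromProperties(obj, 0, false)
	second := c.getLiteralTypeFromProperties(obj, 0, false)
	fmt.Println(first == second) // true: the second call hits the cache
}
```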

@jakebailey (Member) left a comment

Wow!

Before:

Files:                    975
Lines:                 219336
Identifiers:           132045
Symbols:               244052
Types:                 106590
Instantiations:        882604
Memory used:          195388K
Memory allocs:        2574132
Config time:           0.001s
BuildInfo read time:   0.000s
Parse time:            0.063s
Bind time:             0.000s
Check time:           17.989s
Emit time:             0.016s
Changes compute time:  0.035s
Total time:           18.104s

After:

Files:                    975
Lines:                 219336
Identifiers:           132045
Symbols:               244052
Types:                  52726
Instantiations:        882604
Memory used:          174696K
Memory allocs:        2395400
Config time:           0.001s
BuildInfo read time:   0.000s
Parse time:            0.061s
Bind time:             0.000s
Check time:            0.639s
Emit time:             0.021s
Changes compute time:  0.035s
Total time:            0.757s

I'll take a 24x speedup any day

@ahejlsberg ahejlsberg added this pull request to the merge queue Jan 10, 2026
Merged via the queue into main with commit 6e1e2c2 Jan 10, 2026
28 checks passed
@ahejlsberg ahejlsberg deleted the fix-2454 branch January 10, 2026 04:54
@ahejlsberg (Member Author) commented Jan 10, 2026

Yeah, surprising to find a 24x opportunity, and I don't think this is a totally uncommon pattern. I should add that the repeated operation that takes up all the time isn't computing the union type itself, but computing the key for the union type, which consists of a sorted list of thousands of literal types (the property names). That operation is more expensive in Corsa because of our concurrency-stable type ordering, which is more computationally costly.
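
As a rough illustration of why that key computation is expensive (invented names and key format, not Corsa's actual scheme): building a union type's identity means ordering all of its members with a stable comparator and concatenating the result, so a keyof union with thousands of property-name literals pays a nontrivial sort on every recomputation unless the result is cached.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// memberType is an invented stand-in for a literal type in the union; in the real
// checker each member has a concurrency-stable identity that is costlier to compare
// than a simple integer id.
type memberType struct {
	stableID string
}

// unionKey builds a deduplication key for a union from its members. Doing this
// repeatedly for the same large keyof union is the duplicated work the cache avoids.
func unionKey(members []memberType) string {
	ids := make([]string, len(members))
	for i, m := range members {
		ids[i] = m.stableID
	}
	// The sort dominates: thousands of entries, each comparison nontrivial when the
	// ordering must be stable across concurrent checkers.
	sort.Strings(ids)
	return strings.Join(ids, ",")
}

func main() {
	members := []memberType{{"propC"}, {"propA"}, {"propB"}}
	fmt.Println(unionKey(members)) // "propA,propB,propC"
}
```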

Development

Successfully merging this pull request may close these issues.

tsgo slow - too much type and type instantiation without --singleThreaded options
