Attempt at implementing propagated inference for free type parameters (#9366) #24626
Conversation
A summary of the bigger pieces I can think of:

- When strictFunctionTypes is true, if there are no inferences for a type parameter then the type parameter itself will be returned instead of defaulting to `{}`. After chooseOverload returns a successful signature, the return type of that signature is checked, and if the return type is a function type with a single call signature then any free type parameters found in that call signature are converted to type parameters for the call signature. If the return type is not a call signature then any free type parameters are converted to `{}` at that point.
- inferFromSignatures will instantiate the source in the context of the target if the source signature has type parameters and strictFunctionTypes is true.
- strictFunctionTypes is required in both of the above cases because of errors that pop up otherwise when there are not contravariant inferences.
- Seemingly free type parameters that are outerTypeParameters of the call being inferred are ignored instead of being erased. This is to avoid erasing type parameters when calling functions inside the body of a generic function.
- When choosing an overload with multiple signatures that have contextually typed arguments, any contextual types that are assigned are reset when trying a new candidate.

Updated baselines will be committed in the following commit.
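For orientation, here is a hedged sketch of the kind of call this change targets, in the spirit of #9366; the function names are hypothetical and not taken from the PR's test suite:

```ts
declare function pipe<A, B>(f: (a: A) => B): (a: A) => B;
declare function single<T>(x: T): T[];

// `pipe` has no inference sources for A and B other than the generic
// signature of `single`, so previously the free type parameter T was erased
// and the result degenerated to roughly (a: {}) => {}[].
// With the propagation described above, the free type parameter is instead
// hoisted into the returned signature, giving roughly <T>(a: T) => T[].
const toArray = pipe(single);
```

Per the commit summary, the propagated form is only produced when strictFunctionTypes is enabled; with the flag off, the old `{}` defaulting still applies.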
(branch updated from 20264f0 to ccb5dd4)
This is such an important missing feature in TS; I really hope this gets picked up.
@sandersn @ahejlsberg Can you take a look?
Best addition since conditional types! Ramda badly needs this. /cc @ikatyang
Any updates from the TS team? TypeScript needs this feature.
@kpdonn Thanks for taking time to experiment with this. It definitely would be nice to be able to infer free type variables, but it is quite complicated to do right, as I'm sure you've discovered. Your particular approach works for some scenarios, but there are several common ones that can really only be solved using a unification algorithm. For example:

```ts
declare function compose<A, B, C>(f: (x: A) => B, g: (y: B) => C): (x: A) => C;
declare function arrayToList<T>(x: Array<T>): List<T>;
declare function box<U>(x: U): { value: U };

const f = compose(arrayToList, box); // Should infer <T>(x: Array<T>) => { value: List<T> }
```

Ideally this would infer type `<T>(x: Array<T>) => { value: List<T> }` for `f`. But I totally understand the value of the feature and it is certainly something I continue to think about.
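To spell out why this example resists the PR's approach, here is an explanatory sketch; this is my own annotation rather than compiler output, and the `List` declaration is a stand-in so the snippet is self-contained:

```ts
// Stand-in declaration so the sketch type-checks on its own.
interface List<T> { head: T; tail: List<T> | null; }

// Candidates gathered for B while checking compose(arrayToList, box):
//   from the first argument's return type:  B := List<T>   (T still free)
//   from the second argument's parameter:   B := U         (U still free)
type CandidateFromReturn<T> = List<T>;
type CandidateFromParameter<U> = U;

// Reaching the ideal result <T>(x: Array<T>) => { value: List<T> } requires
// solving U = List<T>, i.e. unifying the two candidates rather than simply
// picking one of them; that is the unification step the current inference
// process does not perform.
```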
I'd like to add that your typical FP library is not the only place where the lack of this feature makes things hard. Writing React components with generics, even though it's supported, is pretty much impossible because of this: as soon as you drop your component into a typical HOC, you'll get `{}` inferred for its type parameters.
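A minimal sketch of the HOC problem being described, using hypothetical stand-in types rather than the real React definitions:

```ts
// Hypothetical stand-ins for React's component and HOC shapes, just to
// illustrate the inference problem; not the real react/@types definitions.
type FC<P> = (props: P) => unknown;
declare function withTheme<P>(component: FC<P>): FC<P>;

// A generic component: T should flow from the `items` prop at each use site.
declare const TypedList: <T>(props: { items: T[]; render: (item: T) => string }) => unknown;

// Wrapping it in the HOC forces P to be inferred from a generic signature.
// Without propagated inference, the free T is erased, and the wrapped
// component ends up typed roughly as
//   FC<{ items: {}[]; render: (item: {}) => string }>
// losing its generic parameter.
const ThemedList = withTheme(TypedList);
```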
For React, we've moved away from HOCs to using render props instead, since they play a lot better with TypeScript at the moment.
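For context, a hedged sketch of why render props sidestep the problem (the component name here is hypothetical): the generic call happens at the concrete use site, so its type parameter is inferred from real argument types rather than from another generic signature.

```ts
// Hypothetical render-prop component: T is a type parameter of the call
// made at the use site, not of a value passed through a wrapper.
declare function Fetcher<T>(props: {
  load: () => T[];
  render: (items: T[]) => string;
}): unknown;

// T is inferred as number directly from the props object, so no free type
// parameter has to survive a higher-order inference step.
const view = Fetcher({
  load: () => [1, 2, 3],
  render: items => items.map(n => n.toFixed(1)).join(", "),
});
```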
Is this patch dead? Perhaps TS could create some tests they want to see pass, and we can either get the job done or bounty it off?
I realize the degree of complexity involved here, but is there any way we could get an update from a TS core gatekeeper? Despite the (already considerably high) level of sophistication and expressiveness of the type system, it is still effectively impossible to do real, type-safe FP beyond naïve function composition. A proper implementation of this feature would be a massive step forward. If we could at least know what the goalposts are for getting this accepted, I'm sure more than a few of us (myself included) would be happy to step up and help, but it's hard to put in the effort when it feels like you might just as likely be shouting into the void, so to speak. Anyway, I appreciate that this is a large open source project with many competing priorities, and I appreciate the efforts made on it and your time and attention. Thanks.
Three things here.

First, we do have far too many open PRs at the moment, and we have "dropped the ball" on following up on community members. I apologize for that, and we're taking steps to address this: all open PRs have been assigned to a core team member for investigation and follow-up. For transparency, this backlog started building while I was on paternity leave, coinciding with two engineers taking other opportunities, and we haven't backfilled those positions yet, so we're quite short-staffed; but we need to reprioritize things here to keep things moving. My goal is that we don't have any actionable community PRs sitting open for more than a few days.

Second, a key aspect of this is "actionable": we're basically staffed at a level where we can readily accept PRs for bugs and features which are already "approved", meaning that we understand the impact on the language and are comfortable accepting a correct PR implementing that change. We're also barely staffed to work through the current backlog of 1,200 open "Suggestions" before the heat death of the universe. A PR for a language feature with no approved issue tracking it is just not something we're equipped for in terms of process: the design needs to be hammered out and approved before the PR can be looked at, otherwise there are just too many free variables in play. Again for transparency, while there's substantial overlap between design and implementation on the team, in general there are different people and processes addressing design issues vs. implementation issues. This is basically to say that this PR is "outside our process", hence the "Experiment" tag and subsequent lack of responses. Without an issue describing the design being implemented, the only thing we (as code reviewers) can comment on is whether the code is facially correct (not useful), not whether or not we can merge it. IOW, we (as designers) can't take PRs to the design process, because it's not useful to just put the code up on screen and reverse-engineer what's happening. A useful next step to make this more actionable would be a fresh issue outlining what proposal is being implemented and how.

Third, the comments above from @ahejlsberg are effectively blocking to the design being implemented. Speaking with him offline, we're in agreement that only a full (or "nearly full") unification algorithm will produce correct inferences in all the cases of interest here. Going back to an issue outlining the proposed design would be a clearer venue for going over various examples and how a potential unification algorithm would handle them.
I started playing with this as a tangent off of #23809, after I was starting to add some type inference there and was running into a lot of `{}` inferences. Initially I wasn't planning on getting anywhere near something worth sharing, but after I played with it for a while it started to feel like it would be possible to come up with something that could at least serve as a working straw-man implementation of #9366, which I think I ended up accomplishing.

Check out the updated baselines (especially `inferringGenericFunctionsFromGenericFunctions2.ts`, which is mostly borrowed from @gcnew's extremely helpful test suite) for full details on how it behaves, but it mostly does what it should for the examples I've tried. It also manages to make only a few changes to existing baselines, and the changes it does have seemed positive or neutral to me.
For a high-level summary of the implementation that glosses over a lot of the details (a caller-side sketch follows this list):

- I changed `getInferredType()` to return the type parameter being inferred if there are no inferences available for it, instead of defaulting to `{}`. The inference does not become fixed if this happens.
- In `resolveCall()`, immediately after `chooseOverload()` successfully returns a signature, I resolve the return type of the chosen signature and check whether it contains any free type parameters. (This is done eagerly now; a polished version should probably be refactored to be lazy like everything else.) If the return type is a function type with a single call signature, any free type parameters found in that call signature are converted to type parameters of that signature. Otherwise, any free type parameters are converted to `{}` like they would have been before this PR.
- Seemingly free type parameters that are `outerTypeParameters` of the call expression node are ignored instead of being erased. This avoids erasing type parameters for calls inside the body of a function with generics.
- If `strictFunctionTypes` is disabled then the old behavior of defaulting to `{}` when there are no inferences still happens. This is because I ran into assignability problems if contravariant inferences of type parameters were treated as covariant inferences, which is what happens without `strictFunctionTypes`.
- `inferFromSignatures()` will try to instantiate a source signature with type parameters in the context of the target signature, in the same way as `compareSignaturesRelated()`. This also only happens if `strictFunctionTypes` is enabled.
- When `chooseOverload()` has multiple candidate signatures to choose from and has some contextually typed arguments, any contextual types that were assigned are reset when trying the next candidate signature.
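As referenced above, a hedged caller-side sketch (the function names are mine, not from the changed code or baselines) of the behavior the `outerTypeParameters` rule preserves:

```ts
declare function identity<A>(a: A): A;

// Inside the body of a generic function, T is an outer type parameter of
// the call to identity. Per the rule above it is deliberately not treated
// as a free type parameter to hoist or erase, so `y` keeps the type T
// instead of collapsing to {} or becoming part of a fresh generic signature.
function outer<T>(x: T): T {
  const y = identity(x); // y: T
  return y;
}
```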
Plan from here

I don't have any specific plans for this from here; I just wanted to share what I have since it turned out pretty well. If anyone tries it out and runs into things that aren't working right, let me know and I'll see if I can handle them. Otherwise, though, I'm probably going to let this stand as it is unless the TS team expresses interest in this approach, which they very easily may not, since this obviously makes significant changes to the core inference and signature resolution code.