
Design Meeting Notes, 5/26/2017 #16114

Closed

DanielRosenwasser opened this issue May 26, 2017 · 7 comments

Labels: Design Notes (notes from our design meetings)

DanielRosenwasser (Member) commented May 26, 2017

Propagated Inference for Uninstantiated (Free) Type Parameters (#9366)

Given #16072, we can now make more powerful inferences when a contextual type is available for the return type of a call.

However, given the following:

declare function compose<A, B, C>(
    f: (x: A) => B,
    g: (y: B) => C,
): (x: A) => C;

compose(
    x => [x],
    y => ({ foo: y })
)
  • Right now we just fall back to the instantiations A = {}, B = {}, C = {}.
  • This is unfortunate because in this scenario, there is no real constraint on the types of the parameters.
    • Ideally, because these are universal functions, the return type should still be a generic function: `(x: A) => { foo: A[] }` (see the sketch below).
  • We've managed to get this working.
  • This pretty much always works well if your function arguments are less specific in their constraints than those of the invoked expression.
    • It doesn't work for certain cases when the opposite is true.
  • @gcnew has an implementation that can handle some of these cases.
  • We will investigate a combined approach.
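
For concreteness, here is a sketch of the desired behaviour from the bullets above. This shows the proposal's intended outcome, not what the compiler did at the time of these notes:

// Nothing constrains A, so the composed function should stay generic
// instead of collapsing to A = {}, B = {}, C = {}.
declare function compose<A, B, C>(
    f: (x: A) => B,
    g: (y: B) => C,
): (x: A) => C;

const h = compose(
    x => [x],            // B is inferred as A[]
    y => ({ foo: y })    // C is inferred as { foo: A[] }
);
// desired: h: <A>(x: A) => { foo: A[] }
h(42);      // { foo: number[] }
h("hi");    // { foo: string[] }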

@mhegazy may have notes on weak types (#16047) and synthesized namespace records (#16093)

DanielRosenwasser added the Design Notes label May 26, 2017

mhegazy commented May 26, 2017

Weak type detection

  • Weak type detection #16047

  • A weak type is a type with only optional members: no index signatures, no call or construct signatures, and no required properties

  • A weak type can be the target of an assignment iff there is at least one property in common with the source

  • Good breaking change, should only catch bugs 👍 (see the example below)

  • Need to remove the global and use a flag instead in the implementation
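
For illustration, the kind of bug the rule is meant to catch (the interface and function below are made up for the example):

interface Options {
    retries?: number;   // all members optional => Options is a weak type
    timeout?: number;
}

declare function configure(opts: Options): void;

const typo = { retires: 3 };    // note the misspelling
// Error under weak type detection: the argument has no properties in
// common with Options. Excess-property checks would not catch this,
// because `typo` is not a fresh object literal.
configure(typo);

const ok = { retries: 3, verbose: true };
configure(ok);    // OK: at least one property overlaps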

default import interop

  • Synthesize namespace records on CommonJS imports if necessary #16093

  • users today need a loader or a transpiler to make things work

  • proposed emit

    • import {x, y, z} from "foo" -- do nothing here, current behaviour is correct
    • import * as ns from "foo" -- add a new helper __importStar
      • this will break existing code
      • shim functions? that would impact perf...
      • this should be under a flag
    • import d from "foo" -- write `obj && obj.__esModule ? obj : { default: obj }`
  • proposed type checks

    • {x, y, z} does not change
    • * as ns does not change
    • default import behaves like --allowSyntheticDefaultImports
      • how do we know if this is an ES6 module?
      • we need a marker
      • what is the marker?
  • Conclusion: @DanielRosenwasser to come back next week with a proposal for the flag and its behavior and a proposal for the marker. (A sketch of the proposed helpers follows.)
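
A sketch of the proposed helpers, assuming the emit outlined above (close in spirit to the __importDefault/__importStar helpers that later shipped behind the --esModuleInterop flag):

function __importDefault(mod: any) {
    // ES modules pass through; a CommonJS `exports` object becomes
    // the default export
    return mod && mod.__esModule ? mod : { default: mod };
}

function __importStar(mod: any) {
    if (mod && mod.__esModule) return mod;
    // synthesize a namespace record over a CommonJS `exports` object
    const result: any = {};
    if (mod != null) {
        for (const key in mod) {
            if (Object.prototype.hasOwnProperty.call(mod, key)) {
                result[key] = mod[key];
            }
        }
    }
    result.default = mod;
    return result;
}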

saschanaz commented

5/26/2017, probably?

mhegazy changed the title from "Design Meeting Notes, 3/26/2017" to "Design Meeting Notes, 5/26/2017" May 26, 2017

mhegazy commented May 26, 2017

yup. fixed.


gcnew commented May 27, 2017

TL;DR

I've laid out my thoughts on how #9366 might be solved and some of the experience I've acquired while trying to make it work.

#9366

I made several attempts to tackle this, but unfortunately all of them had considerable drawbacks. Recently I came up with a new scheme that I haven't tried implementing yet, but that has some potential to actually work.

The only languages I know of that support proper polymorphic unification are all ML-based. They use unification solvers, which is why I tried to use one as well. The big hurdles I hit were:

  • error reporting
  • keeping the current literal/widened behaviour
  • union inference
  • type bounds (extends)
  • contextually typed arguments

Sketches of the new idea:

  • step 1: constraint generation - walking the AST and generating (type variable, type) pairs

    type InferenceContext = {
        constraints: [TypeVariable, Type][],
        deferredConstraints: [Type, Type][],
        typeParameters: Map<Type, boolean>, // only the original signature's type parameters
        polyVars: Map<Type, boolean>,       // the original type parameters & any argument
                                            // type parameters we encounter
    }
    • insight: use two constraint lists: one of 100% correct constraints, the other with deferred constraints such as union inferences.
    • keep track of the type variables introduced by arguments and parameters
    • context sensitive arguments should always be a deferred constraint
  • step 2: solve the constraints

    • insight: the solution environment should be a map from Type to a "unification context" (not merely a type)
    type UnificationCtx = {
        types: Type[],  // accumulate constituents here
        firstMet: Type, // the first met polymorphic variable that is not one of the
                        // original type parameters
        bounds: Type[]  // type bounds encountered while unifying
                        // (might not be needed if we revise `checkApplicableSignature`)
    }
    • the unification context aggregates all simple types in an array, as is the case now
    • insight: types must not be reduced eagerly; eager reduction prevents user-provided unions and base types from taking effect
    • when unifying two type variables, they start to point to the same UnificationCtx dictionary (merged if necessary)
    • a reference is kept to the first generic variable met that is not one of the original signature's type parameters (i.e. it has to come from a generic argument). It will be used as the return type if necessary and/or for error reporting.
    • when all non-deferred constraints have been processed, accumulated types are reduced as in getInferredType
    • if some of the types cannot be reduced because they are polymorphic, they are added to the deferredConstraints list for another round
    • deferred constraints are instantiated with what has been inferred so far
    • if no deferred constraints are left, unification completes successfully
    • if none of the deferred constraints can be solved, that's an error
    • unification proceeds recursively until the deferred constraints list becomes empty or an error is encountered (see the sketch below)
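
A minimal sketch of this solve loop (illustrative only: `Type` stands in for the compiler's Type, and unify/tryUnify/instantiate are hypothetical helpers, not real checker APIs):

type Type = string;

interface UnificationCtx {
    types: Type[];      // accumulated constituents
    firstMet?: Type;    // first non-original polymorphic variable met
    bounds: Type[];     // bounds encountered while unifying
}

type Env = Map<Type, UnificationCtx>;

declare function unify(env: Env, tv: Type, t: Type): void;
declare function tryUnify(env: Env, s: Type, t: Type): boolean;
declare function instantiate(env: Env, t: Type): Type;

function solve(
    constraints: [Type, Type][],
    deferred: [Type, Type][]
): Env | "error" {
    const env: Env = new Map();

    // round 0: process the 100% correct constraints
    for (const [tv, t] of constraints) {
        unify(env, tv, t);
    }

    // later rounds: retry deferred constraints, instantiated with
    // whatever has been inferred so far
    while (deferred.length > 0) {
        const unsolved: [Type, Type][] = [];
        for (const [s, t] of deferred) {
            if (!tryUnify(env, instantiate(env, s), instantiate(env, t))) {
                unsolved.push([s, t]);
            }
        }
        // at least one deferred constraint must be solved per round,
        // otherwise unification fails
        if (unsolved.length === deferred.length) {
            return "error";
        }
        deferred = unsolved;
    }
    return env;
}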

How are the outlined problems solved?

  • Error reporting - in my previous implementation I was reducing types eagerly. Accumulating a list of the probable types and finding the first type parameter that breaks inference should keep error-reporting quality on par with the current implementation.
  • Keeping widening/literal behaviour - whether the type is used at a top position can be computed easily without keeping that in the context; it should be doable when reducing the type.
  • Union inference - this should be deferred. No backtracking should be done; however, unions with many constituents might have to be solved in steps. Each step should factor out the known combinations and leave just a few types for the unification to proceed. If multiple paths forward are available, that is a type error. (At every iteration at least one of the deferred constraints must be solved.) A small illustration follows this list.
  • Type bounds - type bounds could be accumulated while doing unification. The inferred type should be checked against all bounds, including transitive ones.
  • Contextually typed arguments - I'm still not sure how this would work, but they should definitely be deferred. They seem to map to deferred constraints rather well.
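
To make the union-inference point concrete, a hypothetical example (the `pick` signature is made up for illustration):

declare function pick<T>(a: T, b: T | string): T;

// Matching "x" against T | string is ambiguous on its own: T could be
// string, or the string constituent could absorb the argument. Deferring
// that constraint until the firm constraint from `a` fixes T = number
// factors out the known combination (string vs string), so no
// backtracking is needed.
const n = pick(1, "x");    // n: number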

Is this the only approach?

Arbitrary polymorphic unification seems to presuppose a unification solver; any other solution would be truly novel.

On unification

Unification can be done in an ad-hoc fashion; however, separating the process into constraint generation and subsequent solving adds a lot of freedom and simplicity from a logical perspective.

Ideas that I've tried but that have led to a dead end:

  • variance is poison, one should not try to devise solutions based on variance

PS: I'll try to play around with the above idea during the weekend. If something happens I'll put up a PR.


Jessidhia commented May 29, 2017

EDIT: Comment moved to the appropriate issue: #16093 (comment)


gcnew commented Jun 6, 2017

I have a basic implementation of the proposed logic from #16114 (comment). I've shared my current progress and comments in #9949 (comment).

zpdDG4gta8XKpMCd commented

@gcnew get yourself hired at TS@MS, stop wasting everyone's time! :)

microsoft locked and limited conversation to collaborators Jun 14, 2018