
Covariance / Contravariance Annotations #1394

Open
Igorbek opened this issue Dec 7, 2014 · 49 comments

@Igorbek (Contributor) commented Dec 7, 2014

(It's a question as well as a suggestion)

Update: a proposal #10717

I had assumed that, for a structural type system like TypeScript's, type variance wasn't applicable, since type compatibility is checked by usage.

But when I read @RyanCavanaugh 's TypeScript 1.4 sneak peek (specifically the 'Stricter Generics' section), I realized there is a gap in this area, whether by design or by implementation.

I was surprised that this code compiles:

function push<T>(arr: T[], a: T) { arr.push(a); }
var ns = [1];
push<{}>(ns, "a"); // ok, we added string to number[];

A clearer example:

interface A { a: number; }
interface B extends A { b: number; }
interface C extends B { c: number; }

var a: A, b: B, c: C;

var as: A[], bs: B[];

as.push(a); // ok
as.push(b); // ok, A is contravariant here (any >=A can be pushed) 

bs.push(a); // error, B is contravariant here, so since A<B, A cannot be pushed -- fair 
bs.push(b); // ok
bs.push(c); // ok, C>=B

as = bs;    // ok, covariance used?

as.push(a); // ok, but actually we pushed A to B[]

How could B[] be assignable to A[] if at least one member, push, is not compatible? B[].push expects parameters of type B, whereas A[].push expects A, and it is valid to call it with an A.

To illustrate:

var fa: (a: A) => void;
var fb: (b: B) => void;

fa(a); fa(b);
fb(a);  // error, as expected
fa = fb;    // no error
fa(a);  // it's fb(a)

Do I understand correctly that this is by design?
I don't think it can be called type-safe.

That said, a restriction that made B[] entirely unassignable to A[] wouldn't be desirable.
To solve this, I suggest introducing variance annotations at some level (variable/parameter, or type?).

Syntax

var as: A[]; // no variance
var bs: out B[]; // covariant, for "read"
<out A[]>bs; // ok, covariance used
<A[]>bs; // same as before, out should be inferred

(<A[]>bs)[0]; // means, allow to get covariant type
(<A[]>bs).push(a); // means, disallow to pass covariant type

<in A[]>bs; // fails

function push<T>(data: in T[], val: out T): void {
  data.push(val);
}
push(animals, dog); // allowed, T is Animal
push(dogs, animal); // disallowed, T can't be inferred

// Conversely
function find<T>(data: out T[], val: out T): boolean { ... } // only reads from data are allowed
find(animals, dog); // allowed
find(cats, dog); // allowed, T can be inferred as Mammal

I'm not sure where variance should be applied - to the variable or to the type?
It seems closer to the variable itself, so the syntax could look like:

var in a: number[];
function f<T>(in a: T[], out b: T): void { ... }

Questions for clarification

  • inferring variance from usage (especially for functions)
  • is it applicable to types (as C# uses it for interfaces/delegates)?
  • how to describe variance at the member level (is it needed?)
  • can a variable be both in and out (fixed type)?
  • can a variable be neither in nor out (open for upcast/downcast)?

So this topic is a discussion point.

@danquirk changed the title from "Covariance / Contravariance" to "Covariance / Contravariance Annotations" on Dec 8, 2014

@danquirk (Member) commented Dec 8, 2014

The assignments you've noted are by design. When checking assignability of function types their parameters are considered bivariant. So when assigning fa = fb it is valid if fa's parameters are assignable to fb's or vice versa. Likewise for the function typed members of Array when determining whether as = bs is valid. See #274 for the suggestion to check these more strictly all the time.
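A minimal sketch of that bivariant check (Animal and Dog are illustrative names, not from the thread):

interface Animal { name: string }
interface Dog extends Animal { bark(): void }

declare let handleAnimal: (a: Animal) => void;
declare let handleDog: (d: Dog) => void;

// Under bivariant parameter checking, the assignment is accepted if either
// parameter type is assignable to the other:
handleDog = handleAnimal;    // sound: an Animal handler can handle any Dog
handleAnimal = handleDog;    // unsound, but accepted for the same reason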

Co/contravariance is not an easy concept for a lot of folks to grasp (even though there is an intuitive nature to the assignment questions). Explicit annotations for them can be especially difficult for people to make use of (particularly if they have to author the annotations in their own code). C# has these types of annotations now but was highly successful for many years without the additional layer of checking the variance annotations afford. My gut feeling is that I would be surprised if these ended up in TypeScript given their complexity and how 'heavy' a concept they are.

@MgSam commented Dec 9, 2014

@danquirk I disagree that covariance/contravariance are such "heavy" concepts. In practice, it's generally just library authors that use them (not end users), and they're easy to ignore, so I don't think you're adding a burden to the language. Yet on the flip side, you're adding a lot of expressiveness to situations that would otherwise be less type safe.

Also, at the syntax level, having the optional keywords in and out seems to me to be about as lightweight as language constructs get.

I certainly wouldn't consider this feature high enough priority to want it added to the backlog anytime soon, but once TypeScript is more mature it could certainly be a useful addition.

@Igorbek (Contributor, Author) commented Dec 9, 2014

So, if we only go with some "strict" mode, there's no way to annotate variance, since that approach wouldn't introduce any new keywords.
I believe that in most cases TypeScript would be able to infer variance from usage, and bivariance for functions could remain the fallback when no modifier is specified.

I've read the Function Argument Bivariance article, and it looks like its example could be rewritten more clearly and usefully without the bivariance requirement:

enum EventType { Mouse, Keyboard }

interface Event { timestamp: number; }
interface MouseEvent extends Event { x: number; y: number }
interface KeyEvent extends Event { keyCode: number }

function listenEvent<TEvent extends Event>(eventType: EventType, handler: (n: TEvent) => void) {
    /* ... */
}

listenEvent<MouseEvent>(EventType.Mouse, e => console.log(e.x + ',' + e.y));

@MgSam thanks for the support.

@metaweta commented Dec 13, 2014

Outputs are always covariant and arguments are usually contravariant. The only point where we'd need an annotation is when an argument is also used as an output. I think it would be a lot easier to grasp if all type constructors and generics came equipped with their associated map functions, since then you just look at what order the mapped functions get applied.
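A rough sketch of what "equipped with their associated map functions" could look like (Box and Sink are illustrative names, not an actual proposal):

interface Box<T> {
  get(): T;
  map<U>(f: (value: T) => U): Box<U>;        // T only flows out: covariant
}

interface Sink<T> {
  put(value: T): void;
  contramap<U>(f: (value: U) => T): Sink<U>; // T only flows in: contravariant
}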

@Igorbek (Contributor, Author) commented Dec 13, 2014

@metaweta callbacks are bivariant now. That also needs to be clarified/made stricter.

@metaweta commented Dec 13, 2014

I don't know what you mean by "callbacks are bivariant": variance is a concept that applies to a specific use of a type in a signature. I was suggesting that types used in arguments be contravariant by default; now that I think more about it, that would cause new errors in currently working code, which is unacceptable.

To avoid the issue of a new keyword, I suggest using +/- like Scala does.

How about a directive "use variance" to opt into that assumption? That way a library author can avoid having to add the contravariant modifier everywhere, while the user of the library doesn't have to worry about it. Directives are a part of ECMAScript specifically designed for this kind of scoped alteration of semantics.

@Igorbek (Contributor, Author) commented Dec 13, 2014

I mean that a callback is an argument which is itself a function:

class A { ... } class B extends A { ... } class C extends B { ... }
function f(callback: (b: B) => void) { ... }
f((x: A) => { ... }); // no error; the callback accepts x: A, which is fair since any B is an A
f((x: C) => { ... }); // no error; the callback accepts x: C, but it might be called with a B, which isn't a C

That's what "callbacks are bivariant" means.

Regarding +/-, do you mean to use it with the type specifier? Like:

function f<T>(x: T-[], y: T) { x.push(y); } // y: T+, by default for function arguments

Directives are a part of ECMAScript specifically designed for this kind of scoped alteration of semantics.

Which directives do you mean?

@metaweta commented Dec 13, 2014

The arrow functor is contravariant in its first argument and covariant in its second. Contravariance acts somewhat like multiplication by -1.

In (a:A) => B, A is contravariant and B is covariant.

In (f: (a: A) => B) => C, C is covariant and (a: A) => B is contravariant, which means that A is covariant (-1 * -1 = 1) and B is contravariant (-1 * 1 = -1).
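A small illustration of that sign rule in today's syntax (Animal and Dog are illustrative):

interface Animal { name: string }
interface Dog extends Animal { bark(): void }

type TakesAnimalHandler = (f: (a: Animal) => void) => void;
type TakesDogHandler = (f: (d: Dog) => void) => void;

declare let takesAnimalHandler: TakesAnimalHandler;
declare let takesDogHandler: TakesDogHandler;

// Animal/Dog sit two parameter positions deep, so the variance flips twice
// (-1 * -1 = +1) and the Dog version is the subtype:
takesAnimalHandler = takesDogHandler; // sound: callers supply Animal handlers, which can handle Dogs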

Regarding +/-, do you mean to use it with type specifier?

I mean to use it in the type parameters rather than the function signature:

interface Foo <-A, +B> {
  map<+C>(f:(a:A) => C): Foo<C, B>;
}

Which directives do you mean?

The directive "use strict" changes the semantics of the language within a function block or program production (usually an HTML script block). I imagine a "use variance" directive in a module production.

@Igorbek (Contributor, Author) commented Dec 13, 2014

The arrow functor is contravariant in its first argument and covariant in its second. Contravariance acts somewhat like multiplication by -1.

I understand this. But TS currently violates this rule in the case of callbacks. I wasn't saying how it should work; I was describing how it works now.

@metaweta commented Dec 13, 2014

Yikes! That's terrible.

@RyanCavanaugh (Member) commented Dec 17, 2014

If function parameters weren't bivariant, Array<Dog> would not be a structural subtype of Array<Animal> due to members like forEach.

@metaweta commented Dec 17, 2014

I take back my "that's terrible" assessment; that would only be fair for a purely functional language. Every mutable data structure is going to have this trouble. Getters are covariant while setters are contravariant: given f: (x: X) => Y and arr: Array<X>, arr.map(f) is an array whose getter returns a Y; for instance, this could be implemented lazily by having the getter look up an element x and then return f(x). A map using X contravariantly would take a function g: (w: W) => X, and arr.comap(g) would be an array whose setter stores g(w) in the array. (I'm not suggesting comap be implemented or that map have a different implementation, just pointing out how the variance shows up in getters and setters.)
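A minimal sketch of that getter/setter split under those assumptions (Gettable and Settable are illustrative, not real Array APIs):

interface Gettable<Y> { get(i: number): Y }
interface Settable<W> { set(i: number, w: W): void }

// Covariant use: the getter looks up an element and returns f(x), so Y only flows out.
function lazyMap<X, Y>(arr: X[], f: (x: X) => Y): Gettable<Y> {
  return { get: (i) => f(arr[i]) };
}

// Contravariant use: the setter stores g(w), so W only flows in.
function comap<W, X>(arr: X[], g: (w: W) => X): Settable<W> {
  return { set: (i, w) => { arr[i] = g(w); } };
}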

I think the article's right that the sound alternatives are too cumbersome for not enough benefit.

@Igorbek (Contributor, Author) commented Dec 17, 2014

@RyanCavanaugh I thought Array<Dog> wouldn't be a structural subtype of Array<Animal> due to members like push. Methods like forEach are fully compatible. Indeed:

interface X<T> {
  out(): T;
  cb(cb: (v: T) => void): void; // like Array.forEach
  in(v: T): void; // like Array.push
}
var xdog: X<Dog>;
var xanimal = <X<Animal>>xdog; // is it convertible?

// out is convertible, fair - covariance
xanimal.out(); // <Dog>xdog.out(), ok

// cb is convertible, fair - covariance
xanimal.cb(animal => cb(animal)); // xdog.cb(dog => cb(<Animal>dog)), ok

// in is not convertible - contravariance, but no error
xanimal.in(animal); // it's the same as...
xdog.in(animal); // error -- animal is not a Dog, but the same call on xanimal doesn't produce an error

// bivariance is used here, but it's not fair
xanimal.cb((dog: Dog) => ...); // no error due to bivariance, but there should be

So my intent is to mark variables/parameters with a variance modifier to make them convertible.
If you convert Dog[] to Animal[], you should actually get out Animal[], which means it's impossible to call the push method on it.

@Igorbek (Contributor, Author) commented Dec 17, 2014

And an even clearer example of bivariance that shouldn't be allowed:

var f: (x: { x: number; }) => void;
var g: (x: { x: number; y: number; }) => void;

g = f; // fair
f = g; // no error, but there should be one
@metaweta commented Dec 18, 2014

If I understand right, you're suggesting keywords that would construct the co- or contravariant supertype of a given generic type, somewhat like extracting the real and imaginary part of a complex number. I like that idea.

@rk4n commented Jul 15, 2015

I recently found a similar issue, I guess.

class Base { n() { return 0; }}
class Derived extends Base { m() { return 1; }}

var base = (b: Base) => b.n();
var derived = (d: Derived) => d.m();

base = derived;

base(new Base()); // TypeError: undefined is not a function (evaluating 'd.m()')

I'm wondering whether it could be fixed easily or not.

@isiahmeadows commented Jul 15, 2016

Found an area where covariance (IIRC) would really help: strongly (and provably) typing the element tree. You could correctly narrow a vdom's children to only the correct types, so something like React could prevent, say, a <title> from being a child of a <div>, which React shouldn't accept (and doesn't, AFAIK) even though HTML would accept and ignore it. Without covariant types, though, I've already discovered that if you use generics for attributes, TypeScript often requires an explicit type parameter to prevent a type error where it wouldn't if the parameter were covariant. A concrete example would be something like this, which doesn't check without explicitly specifying types, but could be entirely inferred with covariant types:

type Child = // things...

interface Attrs {
  // things...
}

interface VNode<T extends string, A extends Attrs, C extends Child> {
  type: T;
  attrs: A;
  children: C[];
}

export declare function m<T extends string, C extends Child>(type: T, c?: C | C[]): VNode<T, {}, C>;
export declare function m<T extends string, A extends Attrs, C extends Child>(type: T, a: A, c?: C | C[]): VNode<T, A, C>;

In effect, with covariant types and what exists today (and enough patience - it'd take a long while), you could mostly prove the well-formedness of virtual DOM trees (minus quantity) and largely correctly type the DOM itself.


I discovered this while working on a vdom experiment, where components can be in charge of rendering themselves (mod tree diffing). Ideally, I want to track both permitted attributes and permitted children, so covariance would help.

@zpdDG4gta8XKpMCd commented Jul 15, 2016

It's a rather simple tweak to the compiler code to turn off covariance for parameters. I tried it once to get a feeling for what it takes to be a happy owner of code that works right. The biggest pain is the overloaded DOM event handlers in lib.d.ts, which won't compile after such a tweak. I tend to think that using overloading was a poor design choice for event handlers, because effectively they are a bunch of separate functions (despite sharing the same name) rather than one function with a basic signature plus tons of covariant overloads:
#6102 (comment)

@masaeedu (Contributor) commented Sep 29, 2016

@Igorbek Ah, okay. That one is just nice to have though, I don't think anyone really cares about using array indexing contravariantly. You could just bake it into the invariant Array type. E.g. in .NET you have the IEnumerable covariant interface implemented by lists, but they didn't bother making a separate contravariant interface for assignment.

Your approach is probably the most pragmatic, but I would still really like contra and covariant type annotations to be owned by whoever defines the interface, rather than whoever consumes it. From my pattern of coding at least, having to constantly remember and specify the modifiers where I'm consuming the code would not help me catch bugs. YMMV.

@AlexGalays commented Dec 4, 2016

Another completely unsafe example:

interface MyComponent<T> {
  items: T[]
  selectedItem: T
  onChange: Handler<T>
}

type Handler<T> = (val: T) => void

function onChange(item: string) {
  console.log(item.slice(2))
}

const comp: MyComponent<string | null> = {
  items: ['a', 'b', 'c'],
  selectedItem: null,
  onChange
}

Everything compiles fine but the onChange function can actually get passed a null reference, and that will throw at runtime.

@PyroVortex commented Mar 22, 2017

With the tools already available in the language, we are nearly at the point where having function parameters default to contravariant would cover most usage scenarios:

declare function func(f: (x: string) => void): void;
func(() => { }); // Valid
func((x: 'foo') => { }); // Compiler error

// Want a covariant function? Generics will let you write one
declare function func2<S extends string>(f: (x: S) => void): void;
func2(() => { }); // Invalid
func2((x: 'foo') => { }); // Valid

// How about a function that can operate on an array of any type of Animal?
declare function sortByWeight<A extends Animal>(animals: Array<A>): void;

Edit: adjusted last examples.
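A usage sketch of the generic approach in the last example (Animal and Dog here are placeholders, with a weight field assumed):

interface Animal { weight: number }
interface Dog extends Animal { bark(): void }

declare function sortByWeight<A extends Animal>(animals: Array<A>): void;
declare const dogs: Dog[];

sortByWeight(dogs); // A is inferred as Dog; no Array<Dog>-to-Array<Animal> conversion is needed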

@magnushiie (Contributor) commented Apr 4, 2017

Regarding the argument that co-/contravariance are hard-to-grasp concepts - I think this is irrelevant considering that interface assignment compatibility tracks co-/contravariance even without annotations, and therefore in order for a person to understand what's going on, they need to understand the co-/contravariance (though I think it's covariance vs bivariance currently) distinction anyway.

interface Setter<T> {
    set(value: T): void;
}

let sA1: Setter<{}> = { set() {} };
const sA2: Setter<number> = sA1; // succeeds (sound: parameters are contravariant)

let sB1: Setter<number> = { set(value: number) {} };
const sB2: Setter<{}> = sB1; // succeeds due to parameter bivariance

interface Getter<T> {
    get(): T;
}

let gA1: Getter<{}> = { get() { return {}; } };
const gA2: Getter<number> = gA1; // Type '{}' is not assignable to type 'number'.

let gB1: Getter<number> = { get() { return 1; } };
const gB2: Getter<{}> = gB1; // succeeds due to covariant returns

For a long time, I didn't understand that TypeScript tracks generic variance, and therefore the distinction between Promise and Iterable was totally confusing - both are reader interfaces, and the same variance should apply.

EDIT: Or rather, I understand now that structural typing results in effects as if TypeScript tracked variance across interface boundaries.

@nicojs commented Jan 9, 2019

With the recent addition of --strictFunctionTypes, co- and contra-variance are now enforced for function types without explicit annotations. This was a major step up, and we had to fix dozens of actual issues in our code (which is awesome!).

Are there any plans to also support co- and contra-variance for other types (as reported in this issue)? Or is there nothing on the roadmap?

@aaronla-ms (Member) commented Jan 9, 2019

@nicojs the original proposal seemed to suggest adding co/contravariant annotations as a means of opting out of unsoundness on a per-type basis. With --strictFunctionTypes, the variance of a type will now be inferred correctly for other types... my covariant "interface Stream<T> { next(): T; }" behaves covariantly with respect to T, and my contravariant "interface Bin<T> { put(item: T); }" behaves contravariantly with respect to T.
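A sketch of that inferred behavior, written with function-typed properties so that --strictFunctionTypes applies (method-syntax signatures are still checked bivariantly):

interface Stream<T> { next: () => T }        // T only in output position
interface Bin<T> { put: (item: T) => void }  // T only in input position

declare const numberStream: Stream<number>;
const widenedStream: Stream<unknown> = numberStream; // OK: covariant in T

declare const anyBin: Bin<unknown>;
const numberBin: Bin<number> = anyBin;               // OK: contravariant in T

declare const stringBin: Bin<string>;
// const widenedBin: Bin<unknown> = stringBin;       // error under --strictFunctionTypes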

Could you clarify, are you asking if there are any remaining soundness holes, or asking for an explicit way to declare variance (in addition to the inferred behavior)?

@Igorbek (Contributor, Author) commented Jan 10, 2019

The proposal initially discussed here, and later outlined primarily in #10717, is mostly concerned with use-site covariance annotations. The motivating use cases are still not addressed, even with the recent stricter rules.

The simplest example is the array type, which can be used both co- and contravariantly. With the proposal, an array of type Array<out T> could only be used where type T appears in covariant positions (getters, methods like pop). Similarly, Array<in T> could only be used where type T appears in contravariant positions (setters, methods like push). Note that, in general, Array<out T> is not the same as ReadonlyArray<T>.

@isiahmeadows commented Jan 10, 2019

@Igorbek Could you elaborate on how Array<out T> would be different from Readonly<Array<T>> (not ReadonlyArray<T> - I'm not including methods here) or Array<in T> from a theoretical Writeonly<Array<T>> (assuming some Writeonly analogue to the built-in type Readonly<T> = {+readonly [P in keyof T]: T[P]})?

@Igorbek (Contributor, Author) commented Jan 10, 2019

@isiahmeadows consider Array's methods like pop, reverse, shift, sort, and many others. Although they modify the array and are excluded from ReadonlyArray<T>, the type T there is in a covariant position and is therefore part of Array<out T>.
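A small illustration of that difference in today's syntax (string stands in for the element type; the Array<out T> syntax itself remains hypothetical):

declare const xs: ReadonlyArray<string>;
// xs.pop(); // error today: pop is absent from ReadonlyArray<string>,
//           // even though pop only produces a T (a covariant use),
//           // so the proposed Array<out string> would still allow it.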

@isiahmeadows commented Jan 10, 2019

@Igorbek Re-read my comment. You missed my nuance between Readonly< Array< T > > and ReadonlyArray< T > (spaces here added for emphasis).

@Igorbek (Contributor, Author) commented Jan 11, 2019

Ah, sorry @isiahmeadows, I thought you were asking in the context of my previous note about their differences.

In fact, Readonly<Array<T>> is even less covariant with respect to T than ReadonlyArray<T>.
Assuming it is defined as type Readonly<T> = { readonly [P in keyof T]: T[P]; }, it only makes own fields read-only (it doesn't even touch methods).

Actually, TS does not currently make Readonly<T[]> truly read-only:

function test<T>(a: Readonly<T[]>, v: T) {
    a[0] = v; // no error
}

In general, generic type variance has nothing to do with read/write-ability.
A simple test would be:

interface X<T> {
  value: T; // read-write field with T in covariant position for reads and in contravariant position for writes
  set: (value: T) => void; // read-write field with T in contravariant position
}

// Readonly<X<T>> is effectively equivalent to this (ReadonlyX is just an illustrative name):
type ReadonlyX<T> = {
  readonly value: T; // this is now covariant with respect to T
  readonly set: (value: T) => void; // read-only field, but T is still in a contravariant position
}

// whereas the proposed X<out T> would be (hypothetical syntax):
type X<out T> = {
  readonly value: T; // same as Readonly
  writeonly set: (value: T) => void; // see the difference
}

// and X<in T> would be:
type X<in T> = {
  writeonly value: T;
  readonly set: (value: T) => void;
}

And a final thing: even if, in many respects, a read-only version is mostly what you want, a simple Readonly<T> cannot deal with multiple type arguments. Imagine Processor<TIn, TOut>, which is contravariant with respect to TIn and covariant with respect to TOut; it cannot be expressed with Readonly/Writeonly, since it has mixed variance across its type arguments.
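A minimal sketch of that mixed-variance shape (Animal, Dog, and the process member are illustrative):

interface Animal { name: string }
interface Dog extends Animal { bark(): void }

interface Processor<TIn, TOut> {
  process: (input: TIn) => TOut; // TIn is contravariant, TOut is covariant
}

declare const p: Processor<Animal, Dog>;
const q: Processor<Dog, Animal> = p; // OK: accepts any Animal (so certainly Dogs) and produces Dogs (which are Animals)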

@isiahmeadows commented Jan 12, 2019

Okay, I'd find a[i] = 0 succeeding when a: Readonly<number[]> to be a bug.

@Igorbek (Contributor, Author) commented Jan 16, 2019

Okay, I'd find a[i] = 0 succeeding when a: Readonly<number[]> to be a bug.

This is being fixed in #29435.
