[Breaking Change] Make the context for await expressions consistent #55418

Closed
stereotype441 opened this issue Apr 10, 2024 · 7 comments
Labels: area-fe-analyzer-shared, breaking-change-approved, breaking-change-request

@stereotype441 (Member)

The context used by the compiler front end to perform type inference on the operand of await expressions will be changed to match the behavior of the analyzer. The change is as follows: when the context for the entire await expression is dynamic, the context for the operand of the await expression will be FutureOr<_>.

Although this is technically a breaking change, it's not expected to have any effect on real-world code.

Background

When the compiler needs to perform type inference on an expression, it does so using a type schema known as the "context". A type schema is a generalization of the normal Dart type syntax, in which _ (called the "unknown type") can appear where a type is expected.
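
To make the terminology concrete, here is a minimal sketch (not taken from the issue): the declared type of a variable serves as the context for inferring its initializer, and with no declared type the context is the unknown type _.

// Hypothetical illustration: the declared type List<int> is the context for
// the empty list literal, so its element type is inferred as int. With `var`,
// the context is the unknown type `_`, and the element type defaults to dynamic.
void main() {
  List<int> xs = [];
  var ys = [];
  print(xs is List<int>); // true
  print(ys is List<int>); // false: ys is a List<dynamic>
}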

In the analyzer, any time an expression would be analyzed with a context of dynamic, the context is coerced to _ before performing the analysis; this causes contexts of dynamic and _ to behave identically.[1] In the compiler front end, dynamic is coerced to _ when analyzing a generic invocation, but dynamic and _ behave differently for a few expression types. This breaking change addresses one of those differences, which is in the behavior of await expressions.
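
As a hedged illustration of the coercion described above (this snippet is not part of the request), both tools coerce a dynamic context to _ when inferring a generic invocation, so the type argument comes from the arguments rather than being dynamic:

T first<T>(List<T> xs) => xs.first;

void main() {
  // The context for the call to `first` is dynamic; it is coerced to `_`,
  // so T is inferred as int from the argument list.
  dynamic x = first([1, 2, 3]);
  print(x is int); // true
}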

The current behavior for await expressions is as follows: if the context for the await expression is K, then the operand of the await expression is inferred using a context L, computed as follows (a short sketch illustrating these rules appears after the list):

  1. If K is FutureOr<P> or FutureOr<P>? for some P, then L is K.
  2. Otherwise, if K is dynamic, then in the compiler front end, L is dynamic; in the analyzer, L is FutureOr<_>.[2]
  3. Otherwise, L is FutureOr<K>.
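
The following sketch (assumed behavior, not an excerpt from the issue) annotates the operand context that each rule produces:

import 'dart:async';

Future<int> f() async => 0;

void main() async {
  // Rule 1: context FutureOr<int>, so the operand context is FutureOr<int>.
  FutureOr<int> a = await f();
  // Rule 2: context dynamic; today the front end uses dynamic for the operand,
  // while the analyzer (and the front end after this change) uses FutureOr<_>.
  dynamic b = await f();
  // Rule 3: context int, so the operand context is FutureOr<int>.
  int c = await f();
  print([a, b, c]);
}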

Intended Change

The above rules will be changed so that in the compiler front end, if K is dynamic, then L will be FutureOr<_>, as it is in the analyzer.

Justification

Any difference in type inference behavior between the analyzer and the compiler front end is a bug. In this case the bug has a low user impact, because the type schemas dynamic and FutureOr<_> behave very similarly in type inference (see dart-lang/language#3649 for further discussion about this). To reduce the impact of the bug fix, it makes sense to standardize on either the analyzer's behavior or the compiler front end's behavior. In this case, standardizing on the analyzer behavior is better, since the analyzer is more self-consistent (it always treats contexts of dynamic and _ the same). Once this change is made, there will be only one remaining scenario in which the compiler front end treats dynamic and _ contexts differently, which I plan to address in a future breaking change (see dart-lang/language#3650).

Expected Impact

A prototype of this change caused zero test failures in Google's internal codebase, so the impact is expected to be extremely low for real-world code.

But it is theoretically possible for a program to behave differently with the change. Here is a contrived example:

Future<T> g<T>(T t) => Future.value(t);
T h<T>(T t) => t;

extension<T> on Future<T> {
  void foo() {
    print(T);
  }
}

test(Future<num>? f) async {
  dynamic x = await h(f ?? (g(0)..foo()));
}

main() {
  test(null);
}

Today, this program prints num; with the change, it will print int.

Today, the compiler front end performs type inference for this program as follows:
  • await h(f ?? (g(0)..foo())) is inferred using a context of dynamic.
  • Therefore, h(f ?? (g(0)..foo())) is inferred using a context of dynamic.
  • Since h is a generic invocation, the context dynamic is changed to _, so f ?? (g(0)..foo()) is inferred using a context of _.
  • When an if-null (??) expression is inferred using a context of _, the static type of the left hand side (f) is used as the context for inferring the right hand side (g(0)..foo()); a standalone sketch of this rule appears after this list.
  • Therefore, g(0)..foo() is inferred using a context of Future<num>? (the static type of f).
  • Therefore, g(0) is inferred using a context of Future<num>?.
  • Since the return type of g is Future<T>, and Future<T> satisfies the context Future<num>? only if T <: num, the type of T is set to num during downwards inference.
  • Therefore, g(0) has static type Future<num>.
  • Therefore, the extension method ..foo() is invoked with the type parameter T bound to num.
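
The if-null rule used in the fourth step can be observed in isolation; this standalone sketch (not from the issue) assumes the current inference rules:

void main() {
  List<int>? maybe;
  // With a context of `_` (the untyped `var`), the static type of the left
  // operand (List<int>?) becomes the context for the right operand, so the
  // empty list literal is inferred as List<int>.
  var a = maybe ?? [];
  print(a is List<int>); // true
}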

With the change, here's what will happen instead:

  • await h(f ?? (g(0)..foo())) will be inferred using a context of dynamic.
  • Therefore, h(f ?? (g(0)..foo())) will be inferred using a context of FutureOr<_>.
  • Therefore, f ?? (g(0)..foo()) will be inferred using a context of FutureOr<_>.
  • When an if-null (??) expression is inferred using a context other than _, that context is propagated to the right hand side (g(0)..foo()).
  • Therefore, g(0)..foo() will be inferred using a context of FutureOr<_>.
  • Therefore, g(0) will be inferred using a context of FutureOr<_>.
  • Since the return type of g is Future<T>, and that satisfies FutureOr<_> for all T, downwards inference of g(0) won't constrain the type of T. So the type of T will be set to int during upwards inference (see the sketch after this list).
  • Therefore, g(0) will have static type Future<int>.
  • Therefore, the extension method ..foo() will be invoked with the type parameter T bound to int.
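
The difference between the two walkthroughs comes down to whether downwards inference constrains T. This standalone sketch (not part of the issue) shows both outcomes using the same g as above:

Future<T> g<T>(T t) => Future.value(t);

void main() {
  // Downwards inference: the context Future<num>? constrains T to num,
  // so g(0) produces a Future<num>.
  Future<num>? constrained = g(0);
  // No useful constraint from the context: T is inferred as int from the
  // argument during upwards inference, so g(0) produces a Future<int>.
  var unconstrained = g(0);
  print(constrained is Future<int>);   // false: it is a Future<num>
  print(unconstrained is Future<int>); // true
}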

Mitigation

In the unlikely event that some real-world customer code is affected, the effect will be limited to type inference, so the old behavior can be restored by supplying explicit types. For instance, the example above could be changed to:

Future<T> g<T>(T t) => Future.value(t);
T h<T>(T t) => t;

extension<T> on Future<T> {
  void foo() {
    print(T);
  }
}

test(Future<num>? f) async {
  dynamic x = await h(f ?? (g<num>(0)..foo()));
}

main() {
  test(null);
}

(Note that g(0) has been changed to g<num>(0).)

Footnotes

  [1] This coercion doesn't happen when dynamic appears more deeply inside the context; for example, a context of List<dynamic> is not changed to List<_>.

  [2] This is a consequence of the fact that the analyzer coerces dynamic to _, so rule 3 applies.

@stereotype441 added the breaking-change-request label on Apr 10, 2024
@itsjustkevin (Contributor)

@vsmenon @Hixie @grouma can you take a look at this breaking change request?

@itsjustkevin self-assigned this on Apr 10, 2024
@devoncarew added the area-fe-analyzer-shared label on Apr 10, 2024
@Hixie (Contributor) commented on Apr 11, 2024

Sounds good to me; the change seems like an improvement.

@stereotype441 (Member, Author)

@vsmenon @grouma ping

@grouma (Member) commented on Apr 29, 2024

cc @leonsenft, who will handle breaking change requests for ACX going forward.

@leonsenft

LGTM 👍

@vsmenon (Member) commented on May 1, 2024

lgtm

@itsjustkevin (Contributor)

@stereotype441 your breaking change request is approved!
