Proposal: arbitrary task-like types returned from async methods #7169

Closed
ljw1004 opened this Issue Dec 2, 2015 · 51 comments

@ljw1004
Contributor

ljw1004 commented Dec 2, 2015

Download

  • ArbitraryAsyncReturns.zip [22mb]
  • INSTALL: Unzip the file. Quit VS. Double-click to install in this order: (1) Roslyn.VisualStudio.Setup.vsix, (2) Roslyn.Compilers.Extension.vsix, (3) ExpressionEvaluatorPackage.vsix. I don't think the others are needed.
  • TRY IT OUT: the zip file contains a sample project
  • UNINSTALL: I've usually been able to go, within Visual Studio, to Tools > Extensions, search for Roslyn, and uninstall in the order (1) Expression Evaluators, (2) Compilers, (3) Language Service. Once that didn't work, and I recovered my VS by deleting the folders %userprofile%\AppData\Roaming\Microsoft\VisualStudio and %userprofile%\AppData\Local\Microsoft\VisualStudio. Doing so will reset all your VS settings to default.

Introduction

Users have long requested that async methods be able to return arbitrary types. We’ve heard this particularly from people who want to return value-type Task-likes from their async methods for efficiency. In UWP development it would also be nice to return IAsyncAction directly, rather than having to return a Task and then write a wrapper.

The initial C# prototype of async did allow returning arbitrary types from async methods, but that was cut because we didn’t find a neat way to integrate it with generic type inference. Here’s a recap of the problem:

var xf = f(async () => { return 3; });
var xg = g(async () => { return 3; });
var xh = h(async () => { return 3; });
var xk = k(async () => { return 3; });

T f<T>(Func<Task<T>> lambda) => lambda().Result;
T g<T>(Func<MyTasklike<T>> lambda) => lambda().MyResultlike;
T h<T>(Func<MyPerverse<T>> lambda) => lambda().MyPerverseResult;
T k<T>(Func<MyWeird<T>> lambda) => lambda().MyWeirdResult;

// How do we infer what "T" should be based on the return statements in the lambda?
// In full generality it would depend on the details of how the tasklike things (Task<T>,
// MyTasklike<T>, MyPerverse<T>, MyWeird<T>) are built, and in particular how
// their builder's SetResult(...) method relates to the generic type parameter of the Task
// property of that builder...

// f: the builder for Task<T> lets you do “return 3;” and it infers that T should be int
// g: the builder for MyTasklike<T> lets you do “return 3;” and it infers that T should be int
// h: the builder for MyPerverse<T> lets you do “return 3;” but it infers that T should be IEnumerable<int>
// k: the builder for MyWeird<T> lets you do “return 3;” but this has no bearing on what T should be: type inference failure

The feature was cut because we had no solution for the full generality of situations like MyPerverse<T> or MyWeird<T>: no way to make the leap from the type of the operand of the return statement, to the generic type arguments of the Task-like type.

Proposal: let’s just abandon the full generality cases h and k. They’re perverse and weird. Let’s say that async methods can return any type Foo (with similar semantics to Task) or can return any type Foo<T> (with similar semantics to Task<T>).

More concretely, let's say that the return type of an async method or lambda can be any task-like type. A task-like type is one that has either zero generic type parameters (like Task) or one generic type parameter (like Task<T>), and which is declared in one of these two forms, with an attribute:

[TaskLike(typeof(FooBuilder))] struct Foo { … }
struct FooBuilder { … similar to AsyncTaskMethodBuilder … }

[TaskLike(typeof(FooBuilder<T>))] struct Foo<T> { … }
struct FooBuilder<T> { … similar to AsyncTaskMethodBuilder<T> … }

All the type inference and overload resolution that the compiler does today for Task<T>, it should do likewise for other tasklikes. In particular, it should work for target-typing of async lambdas, and it should work for overload resolution.

For backwards compatibility, if the return type is System.Threading.Tasks.Task or Task<T> and lacks the [TaskLike] attribute, then the compiler implicitly picks System.Runtime.CompilerServices.AsyncTaskMethodBuilder or AsyncTaskMethodBuilder<T> as the builder.
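For illustration, here is a sketch of a complete (hypothetical) tasklike declaration under this proposal. MyTask<T> and MyTaskBuilder<T> are invented names; the builder's member list mirrors AsyncTaskMethodBuilder<T>, which is the surface the compiler-generated code calls into:

```csharp
// Hypothetical example: MyTask<T> and MyTaskBuilder<T> are invented names.
[Tasklike(typeof(MyTaskBuilder<>))]
public struct MyTask<T>
{
    // … must also be awaitable (GetAwaiter etc.) if callers will 'await' it …
}

public class MyTaskBuilder<T>
{
    public static MyTaskBuilder<T> Create() { … }
    public void Start<TSM>(ref TSM sm) where TSM : IAsyncStateMachine { … }
    public void SetStateMachine(IAsyncStateMachine sm) { … }
    public void SetResult(T result) { … }
    public void SetException(Exception ex) { … }
    public void AwaitOnCompleted<TA, TSM>(ref TA awaiter, ref TSM sm)
        where TA : INotifyCompletion where TSM : IAsyncStateMachine { … }
    public void AwaitUnsafeOnCompleted<TA, TSM>(ref TA awaiter, ref TSM sm)
        where TA : ICriticalNotifyCompletion where TSM : IAsyncStateMachine { … }
    public MyTask<T> Task { get { … } }
}
```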

Semantics

[edited to incorporate @gafter's comments below...]

The VB spec currently explains the semantics of async methods+lambdas like this:

If the async method is a Function with return type Task or Task(Of T) for some T, then an object of that type is implicitly created, associated with the current invocation. This is called an async object and represents the future work that will be done by executing the instance of the async method. When control resumes in the caller of this async method instance, the caller will receive this async object as the result of its invocation.

This would be changed to say that, by assumption, the async method's return type is a Tasklike, which has an associated builder type (specified by attribute). An object of that builder type is implicitly created by invoking the builder's static Create method, and the return value of the async method is the result of the .Task property on that builder instance (or there is no return value, in the void case).

The spec goes on to explain the semantics of exiting the method:

If control flow exits through an unhandled exception, then the async object’s status is set to TaskStatus.Faulted and its Exception.InnerException property is set to the exception (except: certain implementation-defined exceptions such as OperationCanceledException change it to TaskStatus.Canceled). Control flow resumes in the current caller. Otherwise, the async object’s status is set to TaskStatus.Completed. Control flow resumes in the current caller.

This would be amended to say that if control flow exits through an unhandled exception ex then .SetException(ex) is called on the builder instance associated with this method instance. Otherwise, .SetResult(...) is called, with or without an argument depending on the type.

The spec currently gives semantics for how an async method is started:

The instance's control point is then set at the first statement of the async method body, and immediately starts to execute the method body from there.

This would be amended to say that the builder's Start method is invoked, passing an instance that satisfies the IAsyncStateMachine interface. The async method will start to execute when the builder first calls MoveNext on that instance (which it may do inside Start for a hot task, or later for a cold task). If the builder calls MoveNext more than once (other than in response to AwaitOnCompleted), the behavior is undefined. If the builder's SetStateMachine method is ever invoked, then any future calls the builder makes to MoveNext must go through the new value it was given.

Note: this allows for a builder to produce cold tasklikes.
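As a sketch of what a cold-tasklike builder might look like (hypothetical fragment, not part of the proposal itself), its Start could simply capture the state machine rather than run it:

```csharp
// Hypothetical fragment of a builder for a *cold* tasklike: Start defers
// execution; MoveNext runs only when the consumer later starts the task.
private Action _moveNext;

public void Start<TSM>(ref TSM stateMachine) where TSM : IAsyncStateMachine
{
    IAsyncStateMachine boxed = stateMachine;  // box the struct so it can be deferred
    _moveNext = () => boxed.MoveNext();
}
// … the tasklike produced by the .Task property would invoke _moveNext
// once, the first time the consumer starts or awaits it …
```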

The spec currently gives semantics for how the await operator is executed:

Either ICriticalNotifyCompletion.UnsafeOnCompleted is invoked on the awaiter (if the awaiter's type E implements ICriticalNotifyCompletion) or INotifyCompletion.OnCompleted (otherwise). In both cases it passes a resumption delegate associated with the current instance of the async method. The control point of the current async method instance is suspended, and control flow resumes in the current caller. If later the resumption delegate is invoked, the resumption delegate first restores System.Threading.Thread.CurrentThread.ExecutionContext to what it was at the time OnCompleted was called, then it resumes control flow at the control point of the async method instance.

This would be amended to say that either builder.AwaitOnCompleted or builder.AwaitUnsafeOnCompleted is invoked. The builder is expected to synthesize a delegate such that, when the delegate is invoked, it winds up calling MoveNext on the state machine. Often the delegate also restores context.

Note: this allows for a builder to be notified when a cold await operator starts, and when it finishes.

Note: the Task type is not sealed. I might choose to write MyTask<int> FooAsync(), which returns a subtype of Task but also performs additional logging or other work.
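Taken together, the generated code for an async tasklike-returning method would look roughly like today's Task lowering with the builder type swapped in. A simplified sketch (FooStateMachine and the exact shape are illustrative, not normative):

```csharp
// Sketch of the kickoff stub for: async MyTask<int> FooAsync() { … }
MyTask<int> FooAsync()
{
    var sm = new FooStateMachine();
    sm.builder = MyTaskBuilder<int>.Create();  // builder created via static Create
    sm.state = -1;
    sm.builder.Start(ref sm);                  // builder decides when MoveNext first runs
    return sm.builder.Task;                    // the tasklike handed back to the caller
}

// Inside FooStateMachine.MoveNext(), each 'await e' lowers roughly to:
//     var awaiter = e.GetAwaiter();
//     if (!awaiter.IsCompleted) {
//         state = N;
//         builder.AwaitUnsafeOnCompleted(ref awaiter, ref this);  // suspend via builder
//         return;
//     }
//     … awaiter.GetResult() …
// and 'return v;' / an unhandled exception ex lower to:
//     builder.SetResult(v);    /    builder.SetException(ex);
```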

Semantics relating to overload resolution and type inference

Currently the rules for Inferred return type say that the inferred return type for an async lambda is always either Task or Task<T> for some T derived from the lambda body. This should be changed to say that the inferred return type of an async lambda can be whatever tasklike the target context expects.

Currently the overload resolution rules for Better function member say that if two overload candidates have identical parameter lists then tie-breakers such as "more specific" are used to choose between them; otherwise "better conversion from expression" is used. This should be changed to say that the tie-breakers will be used when parameter lists are identical up to tasklikes.
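For example, under the amended rules the following pair (using the hypothetical MyTasklike<T> from earlier) would have parameter lists that are identical up to tasklikes, so the tie-breakers would decide between them rather than "better conversion from expression":

```csharp
void M(Func<Task<int>> f) { … }        // today: the only applicable candidate
void M(Func<MyTasklike<int>> f) { … }  // today: not applicable to an async lambda

M(async () => 3);  // under the proposal, both are applicable and tie-breakers choose
```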

Limitations

This scheme wouldn't be able to represent the WinRT types IAsyncOperationWithProgress or IAsyncActionWithProgress. Nor would it be able to represent the fact that WinRT async interfaces have a cancel method on them. We might consider allowing an async method to access its own builder instance via some special keyword, e.g. _thisasync.cancel.ThrowIfCancellationRequested(), but that seems too hacky and I think it's not worth it.

Compilation notes and edge cases

Concrete tasklike. The following kind of thing is conceptually impossible, because the compiler needs to know the concrete type of the tasklike that's being constructed (in order to construct it).

class C<T> where T : MyTasklike {
    async T f() { } // error: the return type must be a concrete type
}

Incomplete builder: binding. The compiler should recognize the following as an async method that doesn't need a return statement, and should bind it accordingly. There is nothing wrong with the async modifier nor the absence of a return keyword. The fact that MyTasklike's builder doesn't fulfill the pattern is an error that comes later on: it doesn't prevent the compiler from binding method f.

class C { async MyTasklike f() { } }
[Tasklike(typeof(string))] class MyTasklike {}

Wrong generic. To be a tasklike, a type must (1) have a [Tasklike] attribute on it, (2) have arity 0 or 1. If it has the attribute but the wrong arity then it's not a Tasklike.

class C { async MyTasklike f() { } } // okay: return type has arity 0
class C { async MyTasklike<int> f() { return 1; } } // okay: return type has arity 1, instantiated at int
class C { async MyTasklike<int> f() { return "s"; } } // error: should return an int, not a string
class C { async MyTasklike<int,int> f() { } } // error: arity 2, so not a tasklike

Incomplete builder: codegen. If the builder doesn't fulfill the pattern, that's an edge case. It should definitely give errors (as it does today, e.g. if you have an async Task-returning method and target .NET 4.0), but the errors don't need to be high quality (they aren't today either). One unusual case of a failed builder is where the builder has the wrong constraints on its generic type parameters. As far as I can tell, constraints aren't taken into account elsewhere in the compiler (the only other well-known methods with generic parameters are below, and they're all in mscorlib, so there's no chance of getting them wrong):

System.Array.Empty
System_Runtime_InteropServices_WindowsRuntime_WindowsRuntimeMarshal__AddEventHandler_T
System_Runtime_InteropServices_WindowsRuntime_WindowsRuntimeMarshal__RemoveEventHandler_T
System_Activator__CreateInstance_T
System_Threading_Interlocked__CompareExchange_T
Microsoft_VisualBasic_CompilerServices_Conversions__ToGenericParameter_T_Object
@ljw1004
Contributor

ljw1004 commented Dec 2, 2015

The "value-tasklike" motivation is discussed more fully here:
dotnet/corefx#4708 (comment)

@gafter
Member

gafter commented Dec 2, 2015

So... what would be the runtime semantics of these lambda expressions? We would need to know that so we can know what code to generate.

@ljw1004
Contributor

ljw1004 commented Dec 2, 2015

The VB spec currently explains the semantics of async methods+lambdas like this:

If the async method is a Function with return type Task or Task(Of T) for some T, then an object of that type is implicitly created, associated with the current invocation. This is called an async object and represents the future work that will be done by executing the instance of the async method. When control resumes in the caller of this async method instance, the caller will receive this async object as the result of its invocation.

This would be changed to say that, by assumption, the async method's return type is a Tasklike, which has an associated builder type (specified by attribute). An object of that builder type is implicitly created, and the return value of the async method is the result of the .Task property on that builder instance.

The spec goes on to explain the semantics of exiting the method:

If control flow exits through an unhandled exception, then the async object’s status is set to TaskStatus.Faulted and its Exception.InnerException property is set to the exception (except: certain implementation-defined exceptions such as OperationCanceledException change it to TaskStatus.Canceled). Control flow resumes in the current caller. Otherwise, the async object’s status is set to TaskStatus.Completed. Control flow resumes in the current caller.

This would be amended to say that if control flow exits through an unhandled exception ex then .SetException(ex) is called on the builder instance associated with this method instance. Otherwise, .SetResult(...) is called, with or without an argument depending on the type.

@bartdesmet

bartdesmet commented Dec 2, 2015

FWIW, one case where I ran into this limitation was when I was designing an async variant of a generic interface that was covariant in T, and I had to turn the T-returning methods into async variants. The requirement for Task forced me either to give up on covariance or to introduce a covariant IFuture, which retains the covariant interface design but makes implementing it with async methods impossible in a straightforward manner (i.e. without weird wrappers).

@gafter
Member

gafter commented Dec 3, 2015

@ljw1004 What code would the compiler generate for an await expression?

@ljw1004
Contributor

ljw1004 commented Dec 3, 2015

Thanks @gafter. I updated the first post to flesh out more of the semantics, i.e. to complete the list of "builder" methods that the compiler's generated code will invoke: (1) it will invoke the static Create method to create an instance of the builder; (2) it will invoke the Start method; (3) it may invoke the SetStateMachine method; (4) and the code the compiler generates for an await expression is a call to the builder's AwaitOnCompleted or AwaitUnsafeOnCompleted method.

@stephentoub stephentoub referenced this issue in aspnet/HttpAbstractions Mar 23, 2016

Closed

Use System.Buffers + ValueTask in FormReading #556

@Tratcher Tratcher referenced this issue in aspnet/HttpAbstractions Apr 6, 2016

Closed

Allocations in FormReader #553

@ljw1004
Contributor

ljw1004 commented Apr 8, 2016

ValueTask example

This shows producing (and consuming) a ValueTask. ValueTask provides substantial perf benefits by reducing the number of heap allocations, as shown in aspnet/HttpAbstractions#556 and aspnet/HttpAbstractions#553.

The ValueTask here is the one in the System.Threading.Tasks.Extensions library - I copied the source code over. The idea is that ValueTask<TResult> is a struct which contains either a result (if one has been provided), or a Task<TResult> (if a Task was requested but the async method hadn't yet produced a result).

var i1 = await g(0);
Console.WriteLine(i1);
var i2 = await g(100);
Console.WriteLine(i2);

static async ValueTask<int> g(int delay)
{
    if (delay > 0) await Task.Delay(delay);
    return delay;
}

namespace System.Threading.Tasks
{
    [Tasklike(typeof(ValueTaskMethodBuilder<>))]
    public struct ValueTask<TResult> : IEquatable<ValueTask<TResult>>
    {
        // A ValueTask holds *either* a value _result, *or* a task _task. Not both.
        // The idea is that if it's constructed just with the value, it avoids the heap allocation of a Task.
        internal readonly Task<TResult> _task;
        internal readonly TResult _result;
        public ValueTask(TResult result) { _result = result; _task = null; }
        public ValueTask(Task<TResult> task) { if (task == null) throw new ArgumentNullException(nameof(task)); _task = task; _result = default(TResult); }
        public static implicit operator ValueTask<TResult>(Task<TResult> task) => new ValueTask<TResult>(task);
        public static implicit operator ValueTask<TResult>(TResult result) => new ValueTask<TResult>(result);
        public override int GetHashCode() => _task != null ? _task.GetHashCode() : _result != null ? _result.GetHashCode() : 0;
        public override bool Equals(object obj) => obj is ValueTask<TResult> && Equals((ValueTask<TResult>)obj);
        public bool Equals(ValueTask<TResult> other) => _task != null || other._task != null ? _task == other._task : EqualityComparer<TResult>.Default.Equals(_result, other._result);
        public static bool operator ==(ValueTask<TResult> left, ValueTask<TResult> right) => left.Equals(right);
        public static bool operator !=(ValueTask<TResult> left, ValueTask<TResult> right) => !left.Equals(right);
        public Task<TResult> AsTask() => _task ?? Task.FromResult(_result);
        public bool IsCompleted => _task == null || _task.IsCompleted;
        public bool IsCompletedSuccessfully => _task == null || _task.Status == TaskStatus.RanToCompletion;
        public bool IsFaulted => _task != null && _task.IsFaulted;
        public bool IsCanceled => _task != null && _task.IsCanceled;
        public TResult Result => _task == null ? _result : _task.GetAwaiter().GetResult();
        public ValueTaskAwaiter<TResult> GetAwaiter() => new ValueTaskAwaiter<TResult>(this);
        public ConfiguredValueTaskAwaitable<TResult> ConfigureAwait(bool continueOnCapturedContext) => new ConfiguredValueTaskAwaitable<TResult>(this, continueOnCapturedContext: continueOnCapturedContext);
        public override string ToString() => _task == null ? _result.ToString() : _task.Status == TaskStatus.RanToCompletion ? _task.Result.ToString() : _task.Status.ToString();
    }
}



namespace System.Runtime.CompilerServices
{
    class ValueTaskMethodBuilder<TResult>
    {
        // This builder contains *either* an AsyncTaskMethodBuilder, *or* a result.
        // At the moment someone retrieves its Task, we collapse to the real AsyncTaskMethodBuilder
        // and its task, or else just use the result.
        internal AsyncTaskMethodBuilder<TResult> _taskBuilder; internal bool GotBuilder;
        internal TResult _result; internal bool GotResult;

        public static ValueTaskMethodBuilder<TResult> Create() => new ValueTaskMethodBuilder<TResult>();
        public void Start<TStateMachine>(ref TStateMachine stateMachine) where TStateMachine : IAsyncStateMachine => stateMachine.MoveNext();
        public void SetStateMachine(IAsyncStateMachine stateMachine) { EnsureTaskBuilder(); _taskBuilder.SetStateMachine(stateMachine); }
        public void SetResult(TResult result)
        {
            if (GotBuilder) _taskBuilder.SetResult(result);
            else _result = result;
            GotResult = true;
        }
        public void SetException(System.Exception exception)
        {
            EnsureTaskBuilder();
            _taskBuilder.SetException(exception);
        }
        private void EnsureTaskBuilder()
        {
            if (!GotBuilder && GotResult) throw new InvalidOperationException();
            if (!GotBuilder) _taskBuilder = new AsyncTaskMethodBuilder<TResult>();
            GotBuilder = true;
        }
        public ValueTask<TResult> Task
        {
            get
            {
                if (GotResult && !GotBuilder) return new ValueTask<TResult>(_result);
                EnsureTaskBuilder();
                return new ValueTask<TResult>(_taskBuilder.Task);
            }
        }

        public void AwaitOnCompleted<TAwaiter, TStateMachine>(ref TAwaiter awaiter, ref TStateMachine stateMachine) where TAwaiter : INotifyCompletion where TStateMachine : IAsyncStateMachine
        {
            EnsureTaskBuilder();
            _taskBuilder.AwaitOnCompleted(ref awaiter, ref stateMachine);
        }
        public void AwaitUnsafeOnCompleted<TAwaiter, TStateMachine>(ref TAwaiter awaiter, ref TStateMachine stateMachine) where TAwaiter : ICriticalNotifyCompletion where TStateMachine : IAsyncStateMachine
        {
            EnsureTaskBuilder();
            _taskBuilder.AwaitUnsafeOnCompleted(ref awaiter, ref stateMachine);
        }
    }

    public struct ValueTaskAwaiter<TResult> : ICriticalNotifyCompletion
    {
        private readonly ValueTask<TResult> _value;
        internal ValueTaskAwaiter(ValueTask<TResult> value) { _value = value; }
        public bool IsCompleted => _value.IsCompleted;
        public TResult GetResult() => (_value._task == null) ? _value._result : _value._task.GetAwaiter().GetResult(); 
        public void OnCompleted(Action continuation) => _value.AsTask().ConfigureAwait(continueOnCapturedContext: true).GetAwaiter().OnCompleted(continuation);
        public void UnsafeOnCompleted(Action continuation) => _value.AsTask().ConfigureAwait(continueOnCapturedContext: true).GetAwaiter().UnsafeOnCompleted(continuation);
    }

    public struct ConfiguredValueTaskAwaitable<TResult>
    {
        private readonly ValueTask<TResult> _value;
        private readonly bool _continueOnCapturedContext;
        internal ConfiguredValueTaskAwaitable(ValueTask<TResult> value, bool continueOnCapturedContext) { _value = value; _continueOnCapturedContext = continueOnCapturedContext; }
        public ConfiguredValueTaskAwaiter GetAwaiter() => new ConfiguredValueTaskAwaiter(_value, _continueOnCapturedContext);
        public struct ConfiguredValueTaskAwaiter : ICriticalNotifyCompletion
        {
            private readonly ValueTask<TResult> _value;
            private readonly bool _continueOnCapturedContext;
            internal ConfiguredValueTaskAwaiter(ValueTask<TResult> value, bool continueOnCapturedContext) { _value = value; _continueOnCapturedContext = continueOnCapturedContext; }
            public bool IsCompleted => _value.IsCompleted;
            public TResult GetResult() => _value._task == null ? _value._result : _value._task.GetAwaiter().GetResult();
            public void OnCompleted(Action continuation) => _value.AsTask().ConfigureAwait(_continueOnCapturedContext).GetAwaiter().OnCompleted(continuation);
            public void UnsafeOnCompleted(Action continuation) => _value.AsTask().ConfigureAwait(_continueOnCapturedContext).GetAwaiter().UnsafeOnCompleted(continuation);
        }
    }
}
            public void UnsafeOnCompleted(Action continuation) => _value.AsTask().ConfigureAwait(_continueOnCapturedContext).GetAwaiter().UnsafeOnCompleted(continuation);
        }
    }
}
ljw1004 (Contributor) commented Apr 8, 2016

ITask example

This shows an async method that returns a covariant ITask, rather than the normal invariant Task.

// Test ITask
ITask<string> its = f();
ITask<object> ito = its;
var io = await ito;
Console.WriteLine(io);

static async ITask<string> f()
{
    await Task.Yield();
    return "hello";
}

namespace System.Threading.Tasks
{
    [Tasklike(typeof(ITaskMethodBuilder<>))]
    interface ITask<out T>
    {
        ITaskAwaiter<T> GetAwaiter();
    }
}

namespace System.Runtime.CompilerServices
{

    interface ITaskAwaiter<out T> : INotifyCompletion, ICriticalNotifyCompletion
    {
        bool IsCompleted { get; }
        T GetResult();
    }

    struct ITaskMethodBuilder<T>
    {
        private class ConcreteITask<U> : ITask<U>
        {
            private readonly Task<U> _task;
            public ConcreteITask(Task<U> task) { _task = task; }
            public ITaskAwaiter<U> GetAwaiter() => new ConcreteITaskAwaiter<U>(_task.GetAwaiter());
        }

        private class ConcreteITaskAwaiter<U> : ITaskAwaiter<U>
        {
            private readonly TaskAwaiter<U> _awaiter;
            public ConcreteITaskAwaiter(TaskAwaiter<U> awaiter) { _awaiter = awaiter; }
            public bool IsCompleted => _awaiter.IsCompleted;
            public U GetResult() => _awaiter.GetResult();
            public void OnCompleted(Action continuation) => _awaiter.OnCompleted(continuation);
            public void UnsafeOnCompleted(Action continuation) => _awaiter.UnsafeOnCompleted(continuation);
        }

        private AsyncTaskMethodBuilder<T> _taskBuilder;
        private ConcreteITask<T> _task;

        public static ITaskMethodBuilder<T> Create() => new ITaskMethodBuilder<T>() { _taskBuilder = AsyncTaskMethodBuilder<T>.Create() };
        public void Start<TStateMachine>(ref TStateMachine stateMachine) where TStateMachine : IAsyncStateMachine => _taskBuilder.Start(ref stateMachine);
        public void SetStateMachine(IAsyncStateMachine stateMachine) => _taskBuilder.SetStateMachine(stateMachine);
        public void SetResult(T result) => _taskBuilder.SetResult(result);
        public void SetException(Exception exception) => _taskBuilder.SetException(exception);
        public ITask<T> Task => (_task == null) ? _task = new ConcreteITask<T>(_taskBuilder.Task) : _task;
        public void AwaitOnCompleted<TAwaiter, TStateMachine>(ref TAwaiter awaiter, ref TStateMachine stateMachine) where TAwaiter : INotifyCompletion where TStateMachine : IAsyncStateMachine => _taskBuilder.AwaitOnCompleted(ref awaiter, ref stateMachine);
        public void AwaitUnsafeOnCompleted<TAwaiter, TStateMachine>(ref TAwaiter awaiter, ref TStateMachine stateMachine) where TAwaiter : ICriticalNotifyCompletion where TStateMachine : IAsyncStateMachine => _taskBuilder.AwaitUnsafeOnCompleted(ref awaiter, ref stateMachine);
    }
}
ljw1004 (Contributor) commented Apr 8, 2016

Weaving example

This shows how we can use the feature to do "code-weaving", or "aspect-oriented programming". Here we make it so a specified action is executed prior to every cold await suspension, and after every await resumption.

Also this shows an interesting trick whereby the body of an async method can configure its very own "async method builder" instance while it's in flight, by awaiting a special configurator object, and having the builder's AwaitUnsafeOnCompleted method recognize that configurator object.

static async WeavingTask h()
{
    await new WeavingConfiguration(() => Console.WriteLine("Weave suspend"), () => Console.WriteLine("Weave resume"));

    Console.WriteLine("h.A");
    await Delay(0);
    Console.WriteLine("h.B");
    await Delay(10);
    Console.WriteLine("h.C");
    await Delay(100);
    Console.WriteLine("h.D");
}

static async Task Delay(int i)
{
    Console.WriteLine($"  about to delay {i}");
    await Task.Delay(i);
    Console.WriteLine($"  done delay {i}");
}

This is the output:

h.A
  about to delay 0
  done delay 0
h.B
  about to delay 10
Weave suspend
  done delay 10
Weave resume
h.C
  about to delay 100
Weave suspend
  done delay 100
Weave resume
h.D
namespace System.Threading.Tasks
{
    [Tasklike(typeof(WeavingTaskMethodBuilder))]
    class WeavingTask
    {
        private Task _task;
        public WeavingTask(Task task) { _task = task; }
        public TaskAwaiter GetAwaiter() => _task.GetAwaiter();
    }
}

namespace System.Runtime.CompilerServices
{

    class WeavingConfiguration : ICriticalNotifyCompletion
    {
        public readonly Action _beforeYield, _afterYield;
        public WeavingConfiguration(Action beforeYield, Action afterYield) { _beforeYield = beforeYield; _afterYield = afterYield; }
        public WeavingConfiguration GetAwaiter() => this;
        public bool IsCompleted => false;
        public void UnsafeOnCompleted(Action continuation) { }
        public void OnCompleted(Action continuation) { }
        public void GetResult() { }
    }

    struct WeavingTaskMethodBuilder
    {
        private AsyncTaskMethodBuilder _taskBuilder;
        private WeavingTask _task;
        private WeavingConfiguration _config;

        public static WeavingTaskMethodBuilder Create() => new WeavingTaskMethodBuilder() { _taskBuilder = AsyncTaskMethodBuilder.Create() };
        public void Start<TStateMachine>(ref TStateMachine stateMachine) where TStateMachine : IAsyncStateMachine => _taskBuilder.Start(ref stateMachine);
        public void SetStateMachine(IAsyncStateMachine stateMachine) => _taskBuilder.SetStateMachine(stateMachine);
        public void SetResult() => _taskBuilder.SetResult();
        public void SetException(Exception exception) => _taskBuilder.SetException(exception);
        public WeavingTask Task => (_task == null) ? _task = new WeavingTask(_taskBuilder.Task) : _task;
        public void AwaitOnCompleted<TAwaiter, TStateMachine>(ref TAwaiter awaiter, ref TStateMachine stateMachine) where TAwaiter : INotifyCompletion where TStateMachine : IAsyncStateMachine => _taskBuilder.AwaitOnCompleted(ref awaiter, ref stateMachine);
        public void AwaitUnsafeOnCompleted<TAwaiter, TStateMachine>(ref TAwaiter awaiter, ref TStateMachine stateMachine) where TAwaiter : ICriticalNotifyCompletion where TStateMachine : IAsyncStateMachine
        {
            if (awaiter is WeavingConfiguration)
            {
                _config = (WeavingConfiguration)(object)awaiter;
                stateMachine.MoveNext();
                return;
            }
            var myAwaiter = new MyAwaiter(awaiter, _config);
            _taskBuilder.AwaitUnsafeOnCompleted(ref myAwaiter, ref stateMachine);
        }

        class MyAwaiter : ICriticalNotifyCompletion
        {
            private readonly ICriticalNotifyCompletion _awaiter;
            private readonly WeavingConfiguration _config;
            public MyAwaiter(ICriticalNotifyCompletion awaiter, WeavingConfiguration config) { _awaiter = awaiter; _config = config; }
            public void OnCompleted(Action continuation) => _awaiter.OnCompleted(continuation);
            public void UnsafeOnCompleted(Action continuation)
            {
                _config?._beforeYield?.Invoke();
                _awaiter.UnsafeOnCompleted(() =>
                {
                    _config?._afterYield?.Invoke();
                    continuation();
                });
            }
        }
    }
}
ljw1004 (Contributor) commented Apr 8, 2016

IAsyncAction example

It's kind of a pain that we can't in C# write methods that directly return an IAsyncAction. This example shows how to use the proposed feature to make that possible.

static async IAsyncAction uwp()
{
    await Task.Delay(100);
}

[Tasklike(typeof(IAsyncActionBuilder))]
interface IAsyncAction { }

class IAsyncActionBuilder
{

    private AsyncTaskMethodBuilder _taskBuilder;
    private IAsyncAction _task;

    public static IAsyncActionBuilder Create() => new IAsyncActionBuilder() { _taskBuilder = AsyncTaskMethodBuilder.Create() };
    public void Start<TStateMachine>(ref TStateMachine stateMachine) where TStateMachine : IAsyncStateMachine => _taskBuilder.Start(ref stateMachine);
    public void SetStateMachine(IAsyncStateMachine stateMachine) => _taskBuilder.SetStateMachine(stateMachine);
    public void SetResult() => _taskBuilder.SetResult();
    public void SetException(Exception exception) => _taskBuilder.SetException(exception);
    public IAsyncAction Task => (_task == null) ? _task = _taskBuilder.Task.AsAsyncAction() as IAsyncAction : _task;
    public void AwaitOnCompleted<TAwaiter, TStateMachine>(ref TAwaiter awaiter, ref TStateMachine stateMachine) where TAwaiter : INotifyCompletion where TStateMachine : IAsyncStateMachine => _taskBuilder.AwaitOnCompleted(ref awaiter, ref stateMachine);
    public void AwaitUnsafeOnCompleted<TAwaiter, TStateMachine>(ref TAwaiter awaiter, ref TStateMachine stateMachine) where TAwaiter : ICriticalNotifyCompletion where TStateMachine : IAsyncStateMachine => _taskBuilder.AwaitUnsafeOnCompleted(ref awaiter, ref stateMachine);
}
ljw1004 (Contributor) commented Apr 8, 2016

Note: all of the above examples assume this attribute:

namespace System.Runtime.CompilerServices
{
    public class TasklikeAttribute : Attribute
    {
        public TasklikeAttribute(Type builderType) { }
    }

}
omariom commented Apr 8, 2016

Great!

ufcpp commented Apr 9, 2016

Isn't this listed on the Language Feature Status?

benaadams (Contributor) commented Apr 9, 2016

While you're there, messing with the async state machine.... 😉

There is currently a faster await path for completed tasks; however, an async function still comes with a cost. There are faster patterns that avoid the state machine, but they involve greater code complexity, so it would be nice if the compiler could generate them as part of the state machine construction.

Tail call clean up

async Task MethodAsync()
{
    // ... some code

    await OtherMethodAsync();
}

becomes

Task MethodAsync()
{
    // ... some code

    return OtherMethodAsync();
}

Like-wise single tail await

async Task MethodAsync()
{
    if (condition)
    {
        return;
    }
    else if (othercondition)
    {
        await OtherMethodAsync();
    }
    else
    {
        await AnotherMethodAsync();
    }
}

becomes

Task MethodAsync()
{
    if (condition)
    {
        return Task.CompletedTask;
    }
    else if (othercondition)
    {
        return OtherMethodAsync();
    }
    else
    {
        return AnotherMethodAsync();
    }
}

Mid async

async Task MethodAsync()
{
    if (condition)
    {
        return;
    }

    await OtherMethodAsync();

    // some code
}

splits at the first non-completed task into an async awaiting function

Task MethodAsync()
{
    if (condition)
    {
        return Task.CompletedTask;
    }

    var task = OtherMethodAsync();

    if (!task.IsCompleted)
    {
        return MethodAsyncAwaited(task);
    }

    MethodAsyncRemainder();
    return Task.CompletedTask;
}
async Task MethodAsyncAwaited(Task task)
{
    await task;

    MethodAsyncRemainder();
}

void MethodAsyncRemainder()
{
    // some code
}

Similar patterns apply with ValueTask and Task<T>: postpone the async machinery until a task is incomplete by splitting the functions. The code gets really messy when splitting it manually, though (see https://github.com/benaadams/HttpAbstractions/blob/3307eebcc9f3f3700dfe2ea9610253358f7fff15/src/Microsoft.AspNetCore.WebUtilities/FormReader.cs#L93-L332)

benaadams (Contributor) commented Apr 9, 2016

Re: ValueTask vs Task<T>: results for the splitting pattern with completed tasks are as follows for 1M ops

Arch 64 bit - Cores 4
1 Thread - Sync: 5.756ms
1 Thread - Async: 146.579ms
1 Thread - ValueTask Async: 16.917ms
Parallel - Async: 86.191ms
Parallel - ValueTask Async: 4.988ms
Parallel - Sync: 1.661ms

See atemerev/skynet#40

benaadams (Contributor) commented Apr 9, 2016

ValueTask by itself saves on allocations, but you don't get the full throughput until you also delay the state machine creation (the actual async part of the function) until the first task that is not completed.
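A minimal sketch of that split (all names here are invented for illustration, not the actual FormReader code): the public method returns a ValueTask<int> and stays non-async, so the hot path builds no state machine and allocates no Task; the async helper, and hence the state machine, is only entered when data isn't already available.

```csharp
// Hypothetical buffering type: fast synchronous path vs. slow async path.
ValueTask<int> ReadNextAsync()
{
    if (_buffered > 0)
        return new ValueTask<int>(ConsumeBuffered());   // hot path: no state machine, no Task

    return new ValueTask<int>(ReadNextSlowAsync());     // async machinery only on this path
}

async Task<int> ReadNextSlowAsync()
{
    await FillBufferAsync();    // the first genuinely incomplete awaitable
    return ConsumeBuffered();
}
```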

bartdesmet commented Apr 9, 2016

"Tail await elimination" is definitely a promising pattern we manually optimize for quite regularly in our code base; it's easy enough to have a function that used to do multiple awaits and ultimately ends up being rewritten such that the async and await combo is no longer necessary.

There are some caveats though, such as getting rid of a single return await if it occurs in a try or using block etc. but nothing a compiler wouldn't be able to determine. Another more tricky one is when this takes away the side-effect of await capturing the SynchronizationContext, which is unknown to the compiler because ConfigureAwait is a library function rather than a language feature. It'd be strange if the compiler would start knowing about this, especially since it occurs in the await site where the language does not prescribe a fixed operand type but instead supports the awaiter pattern.

If we had the (much desired) flexibility on the return type for an async method, we would get two points of interaction with opaque user code: the async builder and the awaiter. When things get optimized away, any (potentially desirable) side-effects also vanish. For example, in Rx, the GetAwaiter extension method for observables has the effect of creating a subscription (unlike Task, an observable is typically born "cold"). If we were to support returning an observable from an async method through this proposal, we'd still be unable to perform a tail await elimination optimization because it'd take away the side-effect of await kicking off the subscription. A caller may simply store the observable returned by the async method, without immediately calling Subscribe, and it'd be unclear whether the observable has become "hot" or not.

ConfigureAwait just happens to be one such case of an extensibility point in today's world with the awaiter pattern. Maybe a Roslyn analyzer and code fix for some of these optimizations is a good first step, ultimately putting the user in control of such rewrites? I'm not sure though whether there's such a thing as a "suggested potential fix" where the lightbulb in the IDE shows links to more information for the user to digest prior to committing to making a code change that could potentially introduce subtle changes in behavior.
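One way to see the try/finally caveat above (a sketch with made-up method names, not code from the thread): replacing await with a plain return changes when the finally block runs relative to the inner task's completion.

```csharp
// With await: Cleanup() runs after InnerAsync()'s task completes,
// and an exception from that task is observed inside the try block.
async Task SafeAsync()
{
    try { await InnerAsync(); }
    finally { Cleanup(); }
}

// "Tail await eliminated": Cleanup() runs as soon as InnerAsync() returns
// its (possibly still-running) task, and a faulted task no longer hits
// the try block at all -- a subtle behavior change.
Task UnsafeAsync()
{
    try { return InnerAsync(); }
    finally { Cleanup(); }
}
```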

benaadams (Contributor) commented Apr 9, 2016

Removed the .ConfigureAwait from the example; however that could be lifted if the existing could has one, e.g. rather than taking a Task the second function would take a ConfiguratedAwaitable or whatever type was awaited.

try/catch/finally/using would probably have to block the rewrites, at least as a first pass; often this can be moved up a function, but that potentially also causes a behaviour change, so perhaps highlight it as a caveat.

The issue with an analyzer rewrite for splitting the functions into pre-completed and async parts is that it generates horribly unmaintainable code; though it is a pattern used in some places, e.g. https://github.com/dotnet/corefx/blob/master/src/System.IO/src/System/IO/MemoryStream.cs#L468-L531


@ljw1004

ljw1004 Apr 10, 2016

Contributor

@ufcpp - this isn't yet listed on the "language feature status" page because so far I've only done the easy bits (write the proposal and implement a prototype). I haven't yet started on the difficult bit of presenting it to LDM and trying to persuade people that it's a good idea and modifying its design in response to feedback!

@benaadams - those are interesting ideas. I need to think more deeply about them. I appreciate that you linked to real-world cases of complexity that's (1) forced by the compiler not doing the optimizations, (2) has real-world perf numbers to back it up. But let's move the compiler-optimizations-of-async-methods into a different discussion topic.


@benaadams

benaadams Apr 10, 2016

Contributor

@ljw1004 no probs will create an issue for them, with deeper analysis.


@i3arnon

i3arnon Apr 10, 2016

@benaadams automagically removing async from these methods changes exception semantics. It would be extremely confusing for someone using this code:

async Task MainAsync()
{
    var task1 = FooAsync("hamster");
    var task2 = FooAsync(null);
    try
    {
        await Task.WhenAll(task1 , task2);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}

async Task FooAsync(string value)
{
    if (value == null) throw new ArgumentNullException();

    await SendAsync(value);
}

Their app will crash without any obvious reason.


@benaadams

benaadams Apr 10, 2016

Contributor

@i3arnon it should work? The first function's conversion would be blocked due to the await inside try/catch; the second function would become:

Task FooAsync(string value)
{
    if (value == null) return Task.FromException(new ArgumentNullException());

    return SendAsync(value);
}
@i3arnon

i3arnon Apr 10, 2016

@benaadams no.
We now changed where the exception is thrown.
With async the exception is stored in the returned task and so is rethrown when the task is awaited (i.e. inside the try clause of the calling method).
When async is removed this becomes a simple standard method which means the exception is thrown synchronously when FooAsync is called (i.e. just before entering the try clause). So the handled exception is now unhandled because the compiler changed things implicitly.
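A minimal sketch of that timing difference (AsyncThrow/SyncThrow are illustrative names, not from the thread): the async version hands back a faulted task, while the plain version throws at the call site, before any await.

```csharp
using System;
using System.Threading.Tasks;

class Demo
{
    // async version: the throw is captured in the returned task.
    static async Task AsyncThrow()
    {
        await Task.CompletedTask; // already completed, so the rest runs synchronously
        throw new InvalidOperationException();
    }

    // non-async version: the throw happens synchronously at the call site.
    static Task SyncThrow()
    {
        throw new InvalidOperationException();
    }

    static void Main()
    {
        var t = AsyncThrow();            // no throw here
        Console.WriteLine(t.IsFaulted);  // True: fault stored in the task

        try { SyncThrow(); }
        catch (InvalidOperationException)
        {
            Console.WriteLine("thrown synchronously"); // before any await could observe it
        }
    }
}
```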


@benaadams

benaadams Apr 10, 2016

Contributor

@i3arnon not in the changed function I showed, where the exception is not thrown but Task.FromException is returned instead; behaviour should remain the same?


@i3arnon

i3arnon Apr 10, 2016

@benaadams Oh.. right, I missed that. However, it doesn't really matter, as exceptions aren't necessarily raised explicitly; for example:

async Task FooAsync(object value)
{
    var message = value.ToString();
    await SendAsync(message);
}

You could add try-catches everywhere to emulate the behavior of an async method, but I'm not sure it would improve performance.
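As a sketch of that emulation (SendAsync is a stand-in stub here, not the real method from the thread): wrapping the synchronous prefix in try/catch keeps the fault inside the returned task, matching async semantics.

```csharp
using System;
using System.Threading.Tasks;

class Example
{
    static Task SendAsync(string message) => Task.CompletedTask; // stand-in stub

    // Non-async rewrite that preserves the async method's exception semantics:
    // a synchronous fault is captured in the returned task instead of thrown.
    static Task FooAsync(object value)
    {
        try
        {
            var message = value.ToString(); // may throw NullReferenceException
            return SendAsync(message);
        }
        catch (Exception e)
        {
            return Task.FromException(e);
        }
    }

    static void Main()
    {
        var task = FooAsync(null);         // does not throw here
        Console.WriteLine(task.IsFaulted); // True: exception stored in the task
    }
}
```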


@benaadams

benaadams Apr 10, 2016

Contributor

Forked async rewriting discussion to #10449


@ljw1004

ljw1004 Apr 11, 2016

Contributor

Now that I've finished the prototype, I've encountered an obstacle in language-design that I don't know how to solve...

We already allow type-inference to infer a Task<T> return type. Thus, for instance,

void Run<T>(Func<T> lambda) // [0]
Run(async () => { return 1; } );  // uses [0] with T=Task<int>

We also have overload resolution rules to help with cases like this:

void Run<T>(Func<T> lambda)  // [1]
void Run<T>(Func<Task<T>> lambda) // [2]

Run(async () => { return 1; } );  // prefers [2] with T=int

Obviously we'd want to be able to write similar overloads for our own custom Tasklikes:

void Run<T>(Func<T> lambda)  // [3]
void Run<T>(Func<MyTask<T>> lambda) // [4]

Run(async () => { return 1; } );  // hopefully prefers [4] with T=int

But there's a problem. The way the C# language handles overload resolution between cases like [1] and [2] is that it first gathers together the fully-inferred candidates (spec: method invocations):

candidate1: Run(Func<Task<int>> lambda)  // [1] with T=Task<int>
candidate2: Run(Func<Task<T>> lambda) // [2] with T=int

Next it does overload resolution, which calls into the rules for better function members. In this case it looks to see whether the instantiated parameter types of candidate1 or candidate2 are better for the argument expression async () => { return 1; }. The parameter types of the two candidates are identical, so it falls back to tie-breakers. It uses the tie-breaker rule that the "smaller" T is better. That's why it prefers [2].

But overload resolution doesn't work at all for choosing between [3] and [4]. Here, I'll write out the two candidates in this case:

candidate3: Run(Func<Task<T>> lambda) // [3] with T=Task<int>
candidate4: Run(Func<MyTask<T>> lambda) // [4] with T=int

As before, it looks to see whether the instantiated parameter types of candidate3 or candidate4 are better for the argument expression. Neither is better. Both parameter types are reasonable; there's nothing in the parameter types to distinguish which one is better. However, the parameter types are not identical in this case, so it doesn't fall back to the tie-breakers. The compiler merely gives an error message that the overload is ambiguous.


I don't have any ideas at the moment on how to solve this problem. Overload resolution is a really delicate area of the spec.


@benaadams

benaadams Apr 11, 2016

Contributor

Can you coerce with a cast? (not ideal)


@ljw1004

ljw1004 Apr 12, 2016

Contributor

You can always coerce with a cast to force overload resolution to pick the right thing. But... ugly!


@bartdesmet

bartdesmet Apr 12, 2016

That's an interesting case indeed. We've run into similar ambiguities for the whole shebang of Create overloads for Observable in Rx with async variants of the passed delegates.

It seems to me that the insistence around the use of Task<T> in this whole binding procedure stems from the rules for the inferred return type of an async lambda. Would there need to be additions to cater for tasklikes? It seems it'd be quite invasive to return an open-ended set of possible inferred return types here, so it feels like the bias for Task and Task<T> in this section would have to remain as-is.

One thing that always seems to come back when designing APIs with a variety of different delegate types on overloads is the cliff one falls off when overload resolution falls short, requiring the insertion of ugly casts at the use site, or an overhaul of the API design to have different methods to avoid the problem altogether. I've always felt it strange there's no way to specify the return type of a lambda, unlike the ability to specify parameter types fully. It'd be an odd asymmetry compared to named methods. Some languages such as C++ allow for an explicit return type to be specified on lambdas.

With the introduction of tuples and identifiers for components of a tuple, it seems this may be useful as well; currently, AFAIK, there'd be no place to stick those identifiers (even if it's just for their documentation value):

Func<int> f1_1 = () => 42;
Func<int> f1_2 = int () => 42;

Func<int, int> f2_1 = x => x + 1;
Func<int, int> f2_2 = (int x) => x + 1;
Func<int, int> f2_3 = int (int x) => x + 1;

Func<Task<int>> f3_1 = async () => 42;
Func<Task<int>> f3_2 = async Task<int> () => 42;

Func<(int a, int b)> f4_1 = () => (1, 2);
Func<(int a, int b)> f4_2 = () => (a: 1, b: 2);
Func<(int a, int b)> f4_3 = (int a, int b) () => (1, 2);
Func<(int a, int b)> f4_4 = (int a, int b) () => (a: 1, b: 2);

I haven't given it much thought about possible syntactic ambiguities that may arise (C++ has a suffix notation with a thin arrow to specify a return type, but that'd be a totally new token; VB also has a suffix notation with As for multi-line lambdas). Same for any rules around conversions with the specified return type etc. but it seems it could be quite similar to regular methods.

With the Run examples above one would hypothetically be able to guide the picking of an overload as follows:

Run(async Task<int> () => { return 1; }); // [3] with T=Task<int>
Run(async MyTask<int> () => { return 1; }); // [4] with T=int

@benaadams

benaadams Apr 12, 2016

Contributor

Related, if you do

private void LongRunningAction()
{
    // ...
}

Task.Run(LongRunningAction);

You'll get the error
The call is ambiguous between the following methods or properties: 'Task.Run(Action)' and 'Task.Run(Func<Task>)'
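One way to resolve that ambiguity at the call site is to cast the method group (or a lambda) to the intended delegate type; a small self-contained sketch:

```csharp
using System;
using System.Threading.Tasks;

class Demo
{
    static void LongRunningAction()
    {
        // ...
    }

    static void Main()
    {
        // Task.Run(LongRunningAction) can be ambiguous between
        // Task.Run(Action) and Task.Run(Func<Task>).
        // Casting picks the overload explicitly:
        var t1 = Task.Run((Action)LongRunningAction);              // Run(Action)
        var t2 = Task.Run((Func<Task>)(() => Task.CompletedTask)); // Run(Func<Task>)
        Task.WaitAll(t1, t2);
        Console.WriteLine("done");
    }
}
```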


@ljw1004

ljw1004 Apr 15, 2016

Contributor

Okay, here's my detailed idea on what to do about it. I'll do precisely what @bartdesmet wrote and Mads Torgersen also suggested -- "return an open-ended set of possible inferred return types here" (even though, as Bart says, "it seems it'd be quite invasive"). Put another way, I'll make it so that the "inferred return type" algorithm, when it sees an async lambda, is able to infer the magical return type InferredTask or InferredTask<T> from it. These InferredTasks aren't real types. They're a sort of quantum superposition of all possible tasklike return types. Later on, if you observe them, only then do they collapse down to Task or Task<T>.

I've written out examples at the bottom, so I can explain them in specifics rather than in generalities.

Note: this post is all just theory, born from a day's worth of careful spec-reading, without actually having written any code. It might all be completely wrong! Next step is to implement it and see! What I'll do before coding, though, is write up this plan into a fork of the C# spec over here: https://github.com/ljw1004/csharpspec/tree/async-return

More detailed plan

(1) The inferred return type algorithm will be modified to return the pseudo-type InferredTask or InferredTask<T>. The folks who call into the inferred-return-type algorithm and have to be aware that it might return this pseudo-type are type inference, and compile-time checking of dynamic overload resolution, which I'll ignore for now.

(2) The type inference algorithm will be modified to become aware of InferredTask...

The algorithm for fixing will say that if every bound (lower, upper, exact) is to an identical InferredTask or InferredTask<T>, then fix to that exact type; otherwise, collapse every occurrence to Task or Task<T> and proceed with the fixing algorithm as normal. This is to preserve backwards compatibility in cases which currently obtain a lower bound on an inferred return type Task and also have other bounds relating to types that Task inherits from, or similar. (This is conjecture at the moment... I haven't yet come up with examples. I bet there are examples and they're trivial and ubiquitous.)

The algorithm for lower bound inference will say that if V is a generic tasklike Tasklike<V1> and U is InferredTask<U1> then make an exact inference from U1 to V1. Also, if V is InferredTask<V1> and U is a generic tasklike Tasklike<U1> then make the same exact inference. Likewise, the algorithms for exact bound inference and upper bound inference will be changed in the same way.

(Note: I'm concerned that this isn't quite right. Suppose we have a single InferredTask<int> from a single lambda, which later gets involved in one lower bound inference and binds to MyTask<T> to give T=int, and later gets involved in another lower bound inference where it binds to YourTask<U> to give U=int as well. Isn't it wrong that the single lambda, which must have a single return type, is apparently both MyTask<int> and YourTask<int> at the same time? It's possible this doesn't matter, and the job of type inference is merely to come up with suggestions, and those suggestions are checked fully at a later time and not solely for constraints. But if type inference is supposed to come up with something that's properly correct, then I'm not sure. I'm not sure if the situation ever arises. Maybe I have to make each InferredTask<int> be tagged with the lambda it came from, so that once a lambda's InferredTask is collapsed, every use of that one sees the collapse...)

The type inference algorithm performs certain tests upon its types, and so will continue to perform those tests upon the pseudo-types InferredTask and InferredTask<T>. The tests: neither pseudo-type is a delegate type, a nullable, an array type, one of IEnumerable<T> / ICollection<T> / IList<T>, nothing inherits from them, they don't implement anything, and finally InferredTask<T> is a constructed type where T is invariant.

The folks who call into type inference are method invocations, conversion of method groups, and best common type for a set of expressions. As a thought exercise, for a scenario where InferredTask participates in the algorithm for common type of a set of expressions, consider this example:

void f<T,U>(Func<T> lambda, Func<T,U> lambda2)
f(async () => {}, x => {if (true) return x; else return x;});

(3) The Overload resolution algorithm is changed. In particular, when computing the Better function member algorithm, it considers two parameter types Pi and Qi to be equivalent if there is an identity conversion from Pi to Qi up to equivalence of InferredTask and Tasklikes. Immediately after this equivalence judgment, all InferredTasks are collapsed down to concrete Tasks -- whether they're used in the tie-breakers or in the test for better function members.

Note that if OverloadResolution has no InferredTask coming in, then it will have no InferredTask going out. Note also that there are only four places that invoke overload resolution: method-invocation (as discussed), object-creation-expressions (but constructors are never generic so this won't feed InferredTask into overload resolution), element-access (but indexers are again never generic), and operator overload resolution (but operator overloads are again never generic).

I do not propose to change conversion-of-method-group. Actually, to be honest, I haven't been able to come up with an example in my head where it would arise. After all, whenever you convert a method group, every method in that group has a concrete return type. (Even if I did come up with an example, as @benaadams notes above, the return type is still typically ignored for method-group conversion.)

Example 1

void f<T>(Func<T> lambda)   // [1]
void f<T>(Func<Task<T>> lambda)  // [2]
f(async () => { return 1; });

For [1] the inferred return type of the argument lambda expression is InferredTask<int>. This is a lower bound for T, and its sole bound. Therefore T is inferred to be InferredTask<int> and the parameter type of [1] is Func<InferredTask<int>>.

For [2] the inferred return type of the argument is InferredTask<int>. This makes a lower bound inference from InferredTask<int> to Task<T>. Because Task is tasklike, it digs in and makes an exact inference from int to T. Therefore T is inferred to be int and the parameter type of [2] is Func<Task<int>>.

For sake of overload resolution judging parameter types, the two parameter types Func<InferredTask<int>> and Func<Task<int>> are considered identical up to equivalence of InferredTask and any tasklike. Therefore, the tie-breakers are considered. Before they're considered, the two candidates are collapsed down to Func<Task<int>> and Func<Task<int>>.

In the tie-breaker for more specific, [2] wins, because Task<T> is more specific than T.

Example 2

void g<T>(Func<T> lambda)  // [3]
void g(Func<MyTask> lambda)   // [4]
g(async () => { });

For [3] the inferred return type of the argument lambda expression is InferredTask. This is a lower bound for T, and its sole bound, so it infers T = InferredTask. So the parameter type for [3] is Func<InferredTask>.

For [4], type inference doesn't come into play at all. So the parameter type for [4] is just Func<MyTask>.

For sake of overload resolution judging parameter types, the two parameter types Func<InferredTask> and Func<MyTask> are considered identical up to equivalence of InferredTask and any tasklike. Therefore the tie-breakers are considered. Before they're considered, the two candidates are collapsed to Func<Task> and Func<MyTask>.

In the first tie-breaker rule, [4] wins because it's not generic.

Contributor

ljw1004 commented Apr 15, 2016

Okay, here's my detailed idea on what to do about it. I'll do precisely what @bartdeset wrote and Mads Torgersen also suggested -- "return an open-ended set of possible inferred return types here" (even though, as Bart says, "it seems it'd be quite invasive"). Put another way, I'll make it so that "inferred return type" algorithm when it sees an async lambda is able to infer the magical return type InferredTask or InferredTask<T> from it. These InferredTasks aren't real types. They're a sort of quantum superposition of all possible tasklike return types. Later on, if you observe them, only then do they collapse down to Task or Task<T>.

I've written out examples at the bottom, so I can explain them in specifics rather than in generalities.

Note: this post is all just theory, born from a day's worth of careful spec-reading, without actually having written any code. It might all be completely wrong! Next step is to implement it and see! What I'll do before coding, though, is write up this plan into a fork of the C# spec over here: https://github.com/ljw1004/csharpspec/tree/async-return

More detailed plan

(1) The inferred return type algorithm will be modified to return the pseudo-type InferredTask or InferredTask<T>. The folks who call into the inferred-return-type algorithm and have to be aware that it might return this pseudo-type are are type inference, and compile time checking of dynamic overload resolution which I'll ignore for now.

(2) The type inference algorithm will be modified to become aware of InferredTask...

The algorithm for fixing will say that if every bound (lower, upper, exact) is to an identical InferredTask or InferredTask<T>, then fix to that exact type; otherwise, collapse every occurrence to Task or Task<T> and proceed with the fixing algorithm as normal. This is to preserve backwards compatibility in cases which currently obtain a lower bound on an inferred return type Task and also have other bounds relating to types that Task inherits from, or similar. (This is conjecture at the moment... I haven't yet come up with examples. I bet there are examples and they're trivial and ubiquitous.)

The algorithm for lower bound inference will say that if V is a generic tasklike Tasklike<V1> and U is InferredTask<U1> then make an exact inference from U1 to V1. Also, if V is InferredTask<V1> and U is a generic tasklike Tasklike<U1> then make the same exact inference. Likewise, the algorithms for exact bound inference and upper bound inference will be changed in the same way.

(Note: I'm concerned that this isn't quite right. What we have a single InferredTask<int> from a single lambda, which later gets involved in one lower bound inference and binds to MyTask<T> to give T=int, and later gets involved in another lower bound inference where it binds to YourTask<U> to give U=int as well. Isn't it wrong that the single lambda, which must have a single return type, is apparently being both MyTask<int> and YourTask<int> at the same time? It's possible this doesn't matter, and the job of type inference is merely to come up with suggestions, and those suggestions are checked fully at a later time and not solely for constraints. But if type inference is supposed to come up with something that's properly correct, then I'm not sure. I'm not sure if the situation ever arises. Maybe I have to make each InferredTask<int> be tagged with the lambda it came from, so that once a lambda's InferredTask is collapsed, then every use of that one sees the collapse...)

The type inference algorithm performs certain tests upon its types, and so will continue to perform those tests upon the pseudo-types InferredTask and InferredTask<T>. The tests: neither pseudo-type is a delegate type, a nullable, an array type, one of IEnumerable<T> / ICollection<T> / IList<T>, nothing inherits from them, they don't implement anything, and finally InferredTask<T> is a constructed type where T is invariant.

The folks who call into type inference are method invocations, conversion of method groups, and best common type for a set of expressions. As a thought exercise, for a scenario where InferredTask participates in the algorithm for common type of a set of expressions, consider this example:

void f<T,U>(Func<T> lambda, Func<T,U> lambda2)
f(async () => {}, x => {if (true) return x; else return x;});

(3) The Overload resolution algorithm is changed. In particular, when computing the Better function member algorithm, it considers two parameter types Pi and Qi to be equivalent if there is an identity conversion from Pi to Qi up to equivalence of InferredTask and Tasklikes. Immediately after this equivalence judgment, all InferredTasks are collapsed down to concrete Tasks immediately -- whether they're used in the tie-breakers or in the test for better function members.

Note that if OverloadResolution has no InferredTask coming in, then it will have no InferredTask going out. Note also that there are only four places that invoke overload resolution: method-invocation (as disussed), object-creation-expressions (but constructors are never generic so this won't feed InferredTask into overload resolution), element-access (but indexers are again never generic), and operator overload resolution (but operator overloads are again never generic).

I do not propose to change conversion-of-method-group. To be honest, I haven't been able to come up with an example where it would even arise: whenever you convert a method group, every method in that group has a concrete return type. (Even if I did come up with an example, as @benaadams notes above, return type is still typically ignored for method-group conversion.)

Example 1

void f<T>(Func<T> lambda)   // [1]
void f<T>(Func<Task<T>> lambda)  // [2]
f(async () => { return 1; });

For [1] the inferred return type of the argument lambda expression is InferredTask<int>. This is a lower bound for T, and its sole bound. Therefore T is inferred to be InferredTask<int> and the parameter type of [1] is Func<InferredTask<int>>.

For [2] the inferred return type of the argument is InferredTask<int>. This makes a lower bound inference from InferredTask<int> to Task<T>. Because Task is tasklike, it digs in and makes an exact inference from int to T. Therefore T is inferred to be int and the parameter type of [2] is Func<Task<int>>.

For sake of overload resolution judging parameter types, the two parameter types Func<InferredTask<int>> and Func<Task<int>> are considered identical up to equivalence of InferredTask and any tasklike. Therefore, the tie-breakers are considered. Before they're considered, the two candidates are collapsed down to Func<Task<int>> and Func<Task<int>>.

In the tie-breaker for more specific, [2] wins, because Task<T> is more specific than T.

Example 2

void g<T>(Func<T> lambda)  // [3]
void g(Func<MyTask> lambda)   // [4]
g(async () => { });

For [3] the inferred return type of the argument lambda expression is InferredTask. This is a lower bound for T, and its sole bound, so it infers T = InferredTask. So the parameter type for [3] is Func<InferredTask>.

For [4], type inference doesn't come into play at all. So the parameter type for [4] is just Func<MyTask>.

For sake of overload resolution judging parameter types, the two parameter types Func<InferredTask> and Func<MyTask> are considered identical up to equivalence of InferredTask and any tasklike. Therefore the tie-breakers are considered. Before they're considered, the two candidates are collapsed to Func<Task> and Func<MyTask>.

In the tie-breaker first rule, [4] wins because it's not generic.

@ljw1004 (Contributor) commented Apr 15, 2016

This type inference stuff is complicated and makes my head hurt. I think the issue I raised above, about MyTask<T> vs YourTask<T> both binding to the same InferredTask<int>, is a real problem. I wonder if it's possible to handle it by making a new type variable Xi for the InferredTask that comes out of each async lambda. That way, when one digging-in collapses it, everyone else will see it collapsed too.

I also think that async debugging in Visual Studio might need a hook into the tasklike things. The way async debugging works is that

  1. given a MoveNext delegate, it can determine the async-state-machine object that it's part of;
  2. given an async state machine object, it can determine the Builder;
  3. given the builder, it can find the Task;
  4. given the Task, it can find all the delegate continuations that will be executed when the Task eventually completes;
  5. given a delegate continuation, it can figure out the MoveNext delegate that it will trigger

I think we'll have to allow for user-provided hooks for step [4] for the user-provided tasklikes.


@abreslav referenced this issue in Kotlin/kotlin-coroutines on Apr 19, 2016: "Coroutine-like constructs in other languages" #2 (open)

@ljw1004 (Contributor) commented Apr 21, 2016

Oh boy, type inference is difficult. Here are some worked examples.

Example 1: inferring InferredTask<int>

void f<T>(Func<T> lambda) {}
f(async () => 3)
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed.
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>

Example 2: matching InferredTask<int> to Task<int>

void f<T>(Func<Task<T>> lambda) {}
f(async () => 3)
  • Start with T unfixed
  • The first parameter's output type Task<T> contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from async()=>3 to Task<T>
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to Task<T>
      • DIG IN SOMEHOW SINCE THE TWO TYPES MATCH
      • Exact inference from int to T
  • T does not depend on anything, so fix it
    • Exact bound int is the only bound, so T = int

Example 3: two conflicting bounds for T cause a type inference failure

void f<T>(Func<T> lambda1, Func<T> lambda2) {}
f(async()=>3, ()=>"hello")
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's output type T contains unfixed T, and input (being empty) contains no unfixed
  • So do output type inference from ()=>"hello" to T
    • Inferred return type string
    • Lower bound inference from string to T
      • string is a lower bound for T - informally, string must implicitly convert to T
  • T does not depend on anything, so fix it
    • There are two lower bounds, InferredTask<int> and string - informally, both must implicitly convert to T
    • Neither lower bound has an implicit conversion to the other, so inference fails

Example 4: a chain of type inferences from one lambda to another

void f<T,U>(Func<T> lambda1, Func<T,U> lambda2) {}
f(async()=>3, (x)=>x)
  • Start with T,U unfixed
  • The first parameter's output type T contains unfixed T, and input is empty so contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's input contains T, which is currently unfixed, so don't do anything with it for now
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>
  • The second parameter's output type U contains unfixed U, and input type contains only T which has been fixed
  • So do output type inference from (x)=>x to U
    • Inferred return type InferredTask<int>
      • because the lambda's parameter type T is known to be InferredTask<int>
    • Lower bound inference from InferredTask<int> to U
      • InferredTask<int> is a lower bound for U - informally, InferredTask<int> must implicitly convert to U
  • U does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so U = InferredTask<int>

Example 5: type inference just ignores some of the lambda arguments

What's striking in this example is that type inference completely ignores the second lambda.

void f<T>(Func<T> lambda1, Func<T,T> lambda2) {}
f(async()=>3, (x)=>x)
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input is empty so contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>
  • No unfixed type variables remain, so the second phase completes and inference succeeds.

Example 6: type inference succeeds thanks to an ignored argument, but subsequent compilation fails

As before, type inference ignores the second lambda and succeeds - but this time the second lambda is incompatible with the inference!

void f<T>(Func<T> lambda1, Func<T,T> lambda2) {}
f(async()=>3, (x)=>x.ToString())
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input is empty so contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's input contains T which isn't yet fixed, so don't do anything with it for now
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>
  • No unfixed type variables remain, so the second phase completes and inference succeeds.
  • Later on in compilation, conversion of (x)=>x.ToString() to Func<InferredTask<int>,InferredTask<int>> fails.

Example 7: two lower bounds, including both InferredTask and Task

void f<T>(Func<T> lambda, Func<T> lambda2) {}
f(async()=>3, ()=>Task.FromResult<int>(1))
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from ()=>Task.FromResult<int>(1) to T
    • Inferred return type Task<int>
    • Lower bound inference from Task<int> to T
      • Task<int> is a lower bound for T - informally, Task<int> must implicitly convert to T
  • T does not depend on anything, so fix it
    • There are two lower bounds, InferredTask<int> and Task<int> - informally, both must implicitly convert to T.
    • SOMEHOW, THESE MUST BE RESOLVED. Maybe one way is to say that InferredTask<T> has an implicit conversion to Task<T> but not the other way round?
    • Whatever the mechanism, infer T = Task<int>.
  • No unfixed type variables remain, so the second phase completes and inference succeeds.

Example 8: two lower bounds, including both InferredTask and a non-tasklike

void f<T>(Func<T> lambda, Func<T> lambda2) {}
f(async()=>3, ()=>default(IAsyncResult))
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from ()=>default(IAsyncResult) to T
    • Inferred return type IAsyncResult
    • Lower bound inference from IAsyncResult to T
      • IAsyncResult is a lower bound for T - informally, IAsyncResult must implicitly convert to T
  • T does not depend on anything, so fix it
    • There are two lower bounds, InferredTask<int> and IAsyncResult - informally, both must implicitly convert to T
    • SOMEHOW, those have to be resolved. Maybe we say if Task<T> has an implicit conversion to something, then InferredTask<T> has an implicit conversion to it as well?
    • Whatever the mechanism, infer T = IAsyncResult.
  • No unfixed type variables remain, so the second phase completes and inference succeeds.
  • However, subsequent compilation fails because Func<IAsyncResult> isn't a valid target type for an async lambda in C#
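The "SOMEHOW" steps in Examples 7 and 8 both resolve if we say that InferredTask<T> implicitly converts to Task<T>, and to anything Task<T> converts to. Here is a tiny Python model of that candidate-fixing rule; the type names and the hand-written conversion table are assumptions for illustration, not Roslyn code.

```python
# Hypothetical model of the proposed rule: InferredTask<T> implicitly
# converts to Task<T>, and to anything Task<T> converts to
# (e.g. IAsyncResult, which Task implements). Not Roslyn code.
IMPLICIT = {
    ("Task<int>", "IAsyncResult"),            # Task implements IAsyncResult
    ("InferredTask<int>", "Task<int>"),       # proposed: pseudo-type -> concrete Task
    ("InferredTask<int>", "IAsyncResult"),    # proposed: inherit Task's conversions
}

def convertible(a, b):
    return a == b or (a, b) in IMPLICIT

def fix_lower(lower_bounds):
    # fix to the unique bound that every lower bound converts to, else fail
    candidates = [c for c in lower_bounds
                  if all(convertible(l, c) for l in lower_bounds)]
    return candidates[0] if len(candidates) == 1 else None

print(fix_lower(["InferredTask<int>", "Task<int>"]))    # Example 7: Task<int>
print(fix_lower(["InferredTask<int>", "IAsyncResult"])) # Example 8: IAsyncResult
```

As Example 8 notes, fixing T = IAsyncResult still fails later in compilation, because Func<IAsyncResult> isn't a valid target type for an async lambda; the model only covers the inference step.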

Example 9: simple case of two lower bounds

void f<T>(T x, T y) {}
f(default(int), default(long));
  • Start with T unfixed
  • In Phase1, the type of default(int) is int, so make a lower bound inference from int to T
    • int is a lower bound for T - informally, int must implicitly convert to T
  • In Phase1, the type of default(long) is long, so make a lower bound inference from long to T
    • long is a lower bound for T - informally, long must implicitly convert to T
  • T does not depend upon anything, so fix it
    • Candidate types are int and long - informally, both must implicitly convert to T
    • For lower bound int, there is an implicit conversion from lower bound int to candidate long, so keep candidate long
    • For lower bound long, there is no implicit conversion from lower bound long to candidate int, so remove candidate int
    • The sole remaining candidate type is long, so fix T = long
  • No unfixed type variables remain, so the second phase completes and inference succeeds.
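The fixing rule walked through in Example 9 (and the failure in Example 3) can be sketched in a few lines of Python. This is a model of the spec's fixing step, not the compiler's implementation, and the conversion table is hand-written to cover just the examples here.

```python
# Hypothetical model of the C# spec's "fixing" step: collect exact and
# lower bounds, eliminate candidates some bound can't reach, then pick
# the unique candidate every remaining candidate converts to.
IMPLICIT = {("int", "long"), ("string", "object"),
            ("int", "object"), ("long", "object")}  # assumed conversion table

def convertible(a, b):
    return a == b or (a, b) in IMPLICIT

def fix(exact_bounds, lower_bounds):
    """Return the fixed type, or None if inference fails."""
    candidates = set(exact_bounds) | set(lower_bounds)
    # an exact bound removes any candidate it is not identical to
    for e in exact_bounds:
        candidates = {c for c in candidates if c == e}
    # a lower bound removes any candidate it cannot implicitly convert to
    for l in lower_bounds:
        candidates = {c for c in candidates if convertible(l, c)}
    # succeed only with a unique candidate that all others convert to
    best = [c for c in candidates if all(convertible(o, c) for o in candidates)]
    return best[0] if len(best) == 1 else None

print(fix([], ["int", "long"]))                  # Example 9: long
print(fix([], ["InferredTask<int>", "string"]))  # Example 3: None (fails)
print(fix(["int"], ["long"]))                    # Example 11: None (fails)
```

The same three calls reproduce Examples 9, 3, and 11 respectively: two compatible lower bounds fix to the larger one, two unrelated lower bounds fail, and an exact bound plus an inconvertible lower bound fails.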

Example 10: covariant case with two lower bounds where the implicit conversions work

interface I<out T> { }
void f<T>(I<T> x, T y) { }
f(default(I<string>), default(object));
  • Start with T unfixed
  • In Phase1, the type of default(I<string>) is I<string>, so make a lower bound inference from I<string> to I<T>
    • The type parameter of I is covariant, and string is known to be a reference type, so make a lower bound inference from string to T
      • string is a lower bound for T - informally, string must implicitly convert to T
  • In Phase1, the type of default(object) is object, so make a lower bound inference from object to T
    • object is a lower bound for T - informally, object must implicitly convert to T
  • T does not depend upon anything, so fix it
    • Candidate types are string and object - informally, both must implicitly convert to T
    • For lower bound string, there is an implicit conversion from lower bound string to candidate object, so keep candidate object
    • For lower bound object, there is no implicit conversion from lower bound object to candidate string, so remove candidate string
    • The sole remaining candidate type is object, so fix T = object
  • No unfixed type variables remain, so the second phase completes and inference succeeds.

Example 11: covariant case which treats as invariant because not a reference type

interface I<out T> { }
void f<T>(I<T> x, T y) { }
f(default(I<int>), default(long));
  • Start with T unfixed
  • In Phase1, the type of default(I<int>) is I<int>, so make a lower bound inference from I<int> to I<T>
    • The type parameter of I is covariant, but int isn't known to be a reference type, so make an exact inference from int to T
      • int is an exact bound for T - informally, int must be identical to T
  • In Phase1, the type of default(long) is long, so make a lower bound inference from long to T
    • long is a lower bound for T - informally, long must implicitly convert to T
  • T does not depend upon anything, so fix it
    • Candidate types are int and long
    • For exact bound int, it is not identical to candidate long, so remove candidate long
    • For lower bound long, there is no implicit conversion from lower bound long to candidate int, so remove candidate int
    • There are no candidates remaining, so type inference fails

Example 12: showing that once we've fixed a type parameter, there's no going back to correct it

If we fix T in one round, then even when there's a next round that does further inference on other types, it never goes back to affect the decision on T.

void f<T,U>(Func<T> lambda1, Func<T,Tuple<T,U>> lambda2) { }
f(async()=>3, (x)=>default(Tuple<Task<string>,int>));
  • Start with T,U unfixed
  • The first parameter's output type T contains unfixed T, and input is empty so contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's input contains T which isn't yet fixed, so don't do anything with it for now
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>
  • The second parameter's inputs are fixed, and it has unfixed outputs
  • So do output type inference from (x)=>default(Tuple<Task<string>,int>) to Tuple<T,U>
    • Inferred return type Tuple<Task<string>,int>
    • Lower bound inference from Tuple<Task<string>,int> to Tuple<T,U>. This is after T has already been fixed to T = InferredTask<int>, so it's really a lower bound inference from Tuple<Task<string>,int> to Tuple<InferredTask<int>,U>.
      • Digging in we get two subsidiary inferences:
        • Exact inference from Task<string> to InferredTask<int>
          • We do allow digging in. That gives an exact inference from string to int, which doesn't lead to any inference
        • Exact inference from int to U.
          • int is an exact bound for U - informally, int must be identical to U
  • U does not depend on any unfixed type variables, so fix it
    • Exact bound int is the only bound, so U = int
  • No unfixed type variables remain, so the second phase completes and inference succeeds.
  • Later on in compilation, conversion of (x)=>default(Tuple<Task<string>,int>) to Func<InferredTask<int>,Tuple<InferredTask<int>,int>> fails.

Example 13: upper bound inference and subtypes of Task<T>

This example is subtle and involved. Upper-bound inference lets us construct a case where InferredTask must behave like Task to preserve backwards compatibility.

interface I<in T> {}
class Subtask<T> : Task<T> { public Subtask() : base(null) { } }
I<T> helper<T>(T x) => null;
void f<T,U>(Func<T> lambda1, Func<T,I<Subtask<U>>> lambda2) {}

f(async()=>3, (x)=>helper(x)); // infers T=InferredTask<int>, U=int
  • Start with T,U unfixed
  • The first parameter's output type T contains unfixed T, and input is empty so contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's input contains T which isn't yet fixed, so don't do anything with it for now
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>
  • The second parameter's inputs are fixed, and it has unfixed outputs
  • So do output type inference from (x)=>helper(x) to Func<T,I<Subtask<U>>>
    • Inferred return type I<InferredTask<int>>
    • Lower bound inference from I<InferredTask<int>> to I<Subtask<U>>.
      • I has a contravariant type parameter, so do upper bound inference from InferredTask<int> to Subtask<U>
        • SOMEHOW, FOR BACKWARDS COMPATIBILITY: prior to this proposal there was no InferredTask<int>, and C# did an upper bound inference from Task<int> to Subtask<U>. Since Subtask<U> inherits from the unique class Task<U>, it went on to do an exact inference from int to U (exact because Task<T> is invariant). That leads to int being an exact bound for U
  • U does not depend on any unfixed type variables, so fix it
    • Exact bound int is the only bound, so U = int
  • No unfixed type variables remain, so the second phase completes and inference succeeds.
Contributor

ljw1004 commented Apr 21, 2016

Oh boy, type inference is difficult. Here are some worked examples.

Example 1: inferring InferredTask<int>

void f<T>(Func<T> lambda) {}
f(async () => 3)
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed.
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>

Example 2: matching InferredTask<int> to Task<int>

void f<T>(Func<Task<T>> lambda) {}
f(async () => 3)
  • Start with T unfixed
  • The first parameter's output type Task<T> contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from async()=>3 to Task<T>
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to Task<T>
      • DIG IN SOMEHOW SINCE THE TWO TYPES MATCH
      • Exact inference from int to T
  • T does not depend on anything, so fix it
    • Exact bound int is the only bound, so T = int

Example 3: two conflicting bounds for T cause a type inference failure

void f<T>(Func<T> lambda1, Func<T> lambda2) {}
f(async()=>3, ()=>"hello")
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's output type T contains unfixed T, and input (being empty) contains no unfixed
  • So do output type inference from ()=>"hello" to T
    • Inferred return type string
    • Lower bound inference from string to T
      • string is a lower bound for T - informally, string must implicitly convert to T
  • T does not depend on anything, so fix it
    • There are two lower bounds, InferredTask<int> and string - informally, both must implicitly convert to T
    • Neither lower bound has an implicit conversion to the other, so inference fails

Example 4: a chain of type inferences from one lambda to another

void f<T,U>(Func<T> lambda1, Func<T,U> lambda2) {}
f(async()=>3, (x)=>x)
  • Start with T,U unfixed
  • The first parameter's output type T contains unfixed T, and input is empty so contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's input contains T, which is currently unfixed, so don't do anything with it for now
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>
  • The second parameter's output type U contains unfixed U, and input type contains only T which has been fixed
  • So do output type inference from (x)=>x to U
    • Inferred return type InferredTask<int>
      • because the lambda's parameter type T is known to be InferredTask<int>
    • Lower bound inference from InferredTask<int> to U
      • InferredTask<int> is a lower bound for U - informally, InferredTask<int> must implicitly convert to U
  • U does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so U = InferredTask<int>

Example 5: type inference just ignores some of the lambda arguments

What's striking in this example is that type inference completely ignores the second lambda.

void f<T>(Func<T> lambda1, Func<T,T> lambda2) {}
f(async()=>3, (x)=>x)
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input is empty so contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>
  • No unfixed type variables remain, so the second phase completes and inference succeeds.

Example 6: type inference succeeds thanks to an ignored argument, but subsequent compilation fails

As before type inference ignores the second lambda and succeeds - but this time the second lambda is incompatible with with the inference!

void f<T>(Func<T> lambda1, Func<T,T> lambda2) {}
f(async()=>3, (x)=>x.ToString())
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input is empty so contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's input contains T which isn't yet fixed, so don't do anything with it for now
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>
  • No unfixed type variables remain, so the second phase completes and inference succeeds.
  • Later on in compilation, conversion of (x)=>x.ToString() to Func<InferredTask<int>,InferredTask<int>> fails.

Example 7: two lower bounds, including both InferredTask and Task

void f<T>(Func<T> lambda, Func<T> lambda2) {}
f(async()=>3, ()=>Task.FromResult<int>(1))
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from ()=>Task.FromResult<int>(1) to T
    • Inferred return type Task<int>
    • Lower bound inference from Task<int> to T
      • Task<int> is a lower bound for T - informally, Task<int> must implicitly convert to T
  • T does not depend on anything, so fix it
    • There are two lower bounds, InferredTask<int> and Task<int> - informally, both must implicitly convert to T.
    • SOMEHOW, THESE MUST BE RESOLVED. Maybe one way is to say that InferredTask<T> has an implicit conversion to Task<T> but not the other way round?
    • Whatever the mechanism, infer T = Task<int>.
  • No unfixed type variables remain, so the second phase completes and inference succeeds.

Example 8: two lower bounds, including both InferredTask and a non-tasklike

void f<T>(Func<T> lambda, Func<T> lambda2) {}
f(async()=>3, ()=>default(IAsyncResult))
  • Start with T unfixed
  • The first parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's output type T contains unfixed T, and input type (being empty) contains no unfixed
  • So do output type inference from ()=>default(IAsyncResult) to T
    • Inferred return type IAsyncResult
    • Lower bound inference from IAsyncResult to T
      • IAsyncResult is a lower bound for T - informally, IAsyncResult must implicitly convert to T
  • T does not depend on anything, so fix it
    • There are two lower bounds, InferredTask<int> and IAsyncResult - informally, both must implicitly convert to T
    • SOMEHOW, those have to be resolved. Maybe we say if Task<T> has an implicit conversion to something, then InferredTask<T> has an implicit conversion to it as well?
    • Whatever the mechanism, infer T = IAsyncResult.
  • No unfixed type variables remain, so the second phase completes and inference succeeds.
  • However, subsequent compilation fails because Func<IAsyncResult> isn't a valid target type for an async lambda in C#

Example 9: simple case of two lower bounds

void f<T>(T x, T y) {}
f(default(int), default(long));
  • Start with T unfixed
  • In Phase1, the type of default(int) is int, so make a lower bound inference from int to T
    • int is a lower bound for T - informally, int must implicitly convert to T
  • In Phase1, the type of default(long) is long, so make a lower bound inference from long to T
    • long is a lower bound for T - informally, long must implicitly convert to T
  • T does not depend upon anything, so fix it
    • Candidate types are int and long - informally, both must implicitly convert to T
    • For lower bound int, there is an implicit conversion from lower bound int to candidate long, so keep candidate long
    • For lower bound long, there is no implicit conversion from lower bound long to candidate int, so remove candidate int
    • The sole remaining candidate type is long, so fix T = long
  • No unfixed type variables remain, so the second phase completes and inference succeeds.

Example 10: covariant case with two lower bounds where the implicit conversions work

interface I<out T> { }
void f<T>(I<T> x, T y) { }
f(default(I<string>), default(object));
  • Start with T unfixed
  • In Phase1, the type of default(I<string>) is I<string>, so make a lower bound inference from I<string> to I<T>
    • The type parameter of I is covariant, and string is known to be a reference type, so make a lower bound inference from string to T
      • string is a lower bound for T - informally, string must implicitly convert to T
  • In Phase1, the type of default(object) is object, so make a lower bound inference from object to T
    • object is a lower bound for T - informally, object must implicitly convert to T
  • T does not depend upon anything, so fix it
    • Candidate types are string and object - informally, both must implicitly convert to T
    • For lower bound string, there is an implicit conversion from lower bound string to candidate object, so keep candidate object
    • For lower bound object, there is no implicit conversion from lower bound object to candidate string, so remove candidate string
    • The sole remaining candidate type is object, so fix T = object
  • No unfixed type variables remain, so the second phase completes and inference succeeds.

Example 11: covariant case treated as invariant because the type argument is not a reference type

interface I<out T> { }
void f<T>(I<T> x, T y) { }
f(default(I<int>), default(long));
  • Start with T unfixed
  • In Phase1, the type of default(I<int>) is I<int>, so make a lower bound inference from I<int> to I<T>
    • The type parameter of I is covariant, but int isn't known to be a reference type, so make an exact inference from int to T
      • int is an exact bound for T - informally, int must be identical to T
  • In Phase1, the type of default(long) is long, so make a lower bound inference from long to T
    • long is a lower bound for T - informally, long must implicitly convert to T
  • T does not depend upon anything, so fix it
    • Candidate types are int and long
    • For exact bound int, it is not identical to candidate long, so remove candidate long
    • For lower bound long, there is no implicit conversion from lower bound long to candidate int, so remove candidate int
    • There are no candidates remaining, so type inference fails
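The invariant treatment of int in Example 11 mirrors how variance behaves at runtime: covariant conversions only exist between reference types, so no choice of T other than int itself could make I&lt;int&gt; convert to I&lt;T&gt;. A runnable sketch of that underlying rule, using only standard C# (nothing from this proposal):

```csharp
using System;
using System.Collections.Generic;

class VarianceDemo
{
    public static void Main()
    {
        // string -> object is a reference conversion, so covariant
        // IEnumerable<out T> gives List<string> an IEnumerable<object> identity...
        Console.WriteLine(new List<string>() is IEnumerable<object>); // True

        // ...but int is a value type, so variance does not apply: List<int>
        // is neither an IEnumerable<object> nor an IEnumerable<long>.
        Console.WriteLine(new List<int>() is IEnumerable<object>);    // False
        Console.WriteLine(new List<int>() is IEnumerable<long>);      // False
    }
}
```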

Example 12: showing that once we've fixed a type parameter, there's no going back to correct it

If we fix T in one round, then even when there's a next round that does further inference on other types, it never goes back to affect the decision on T.

void f<T,U>(Func<T> lambda1, Func<T,Tuple<T,U>> lambda2) { }
f(async()=>3, (x)=>default(Tuple<Task<string>,int>));
  • Start with T,U unfixed
  • The first parameter's output type T contains unfixed T, and input is empty so contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's input contains T which isn't yet fixed, so don't do anything with it for now
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>
  • The second parameter's inputs are fixed, and it has unfixed outputs
  • So do output type inference from (x)=>default(Tuple<Task<string>,int>) to Tuple<T,U>
    • Inferred return type Tuple<Task<string>,int>
    • Lower bound inference from Tuple<Task<string>,int> to Tuple<T,U>. This is after T has already been fixed to T = InferredTask<int>, so it's really a lower bound inference from Tuple<Task<string>,int> to Tuple<InferredTask<int>,U>.
      • Digging in we get two subsidiary inferences:
        • Exact inference from Task<string> to InferredTask<int>
          • We do allow digging in. That gives an exact inference from string to int, which doesn't lead to any inference
        • Exact inference from int to U.
          • int is an exact bound for U - informally, int must be identical to U
  • U does not depend on any unfixed type variables, so fix it
    • Exact bound int is the only bound, so U = int
  • No unfixed type variables remain, so the second phase completes and inference succeeds.
  • Later on in compilation, conversion of (x)=>default(Tuple<Task<string>,int>) to Func<InferredTask<int>,Tuple<InferredTask<int>,int>> fails.
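The same "no going back" behaviour is observable today without any InferredTask. In the sketch below (F is a hypothetical method, standard C#), T is fixed from the first lambda in the first round, so the second lambda can only ever contribute to U:

```csharp
using System;

class FixingDemo
{
    static void F<T, U>(Func<T> lambda1, Func<T, Tuple<T, U>> lambda2)
        => Console.WriteLine(typeof(T) + " " + typeof(U));

    public static void Main()
    {
        // Round 1 fixes T = int from the first lambda; round 2 then infers
        // U = int from the second lambda's inferred return type Tuple<int,int>.
        F(() => 0, x => Tuple.Create(x, 1)); // prints System.Int32 System.Int32

        // Inference never revisits T, so a second lambda that "wants" T = long
        // changes nothing; analogous to Example 12, this call would fail to compile:
        //   F(() => 0, x => default(Tuple<long, int>));
    }
}
```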

Example 13: upper bound inference and subtypes of Task<T>

This example is subtle and involved. Upper-bound inference lets us construct a case where InferredTask must fall back to Task's behaviour to preserve backwards compatibility.

interface I<in T> {}
class Subtask<T> : Task<T> { public Subtask() : base(null) { } }
I<T> helper<T>(T x) => null;
void f<T,U>(Func<T> lambda1, Func<T,I<Subtask<U>>> lambda2) {}

f(async()=>3, (x)=>helper(x)); // infers T=InferredTask<int>, U=int
  • Start with T,U unfixed
  • The first parameter's output type T contains unfixed T, and input is empty so contains no unfixed
  • So do output type inference from async()=>3 to T
    • Inferred return type InferredTask<int>
    • Lower bound inference from InferredTask<int> to T
      • InferredTask<int> is a lower bound for T - informally, InferredTask<int> must implicitly convert to T
  • The second parameter's input contains T which isn't yet fixed, so don't do anything with it for now
  • T does not depend on anything, so fix it
    • Lower bound InferredTask<int> is the only bound, so T = InferredTask<int>
  • The second parameter's inputs are fixed, and it has unfixed outputs
  • So do output type inference from (x)=>helper(x) to Func<T,I<Subtask<U>>>
    • Inferred return type I<InferredTask<int>>
    • Lower bound inference from I<InferredTask<int>> to I<Subtask<U>>.
      • I has a contravariant type parameter, so do upper bound inference from InferredTask<int> to Subtask<U>
        • For backwards compatibility: prior to this proposal (i.e. with Task<int> in place of InferredTask<int>), C# would do an upper bound inference from Task<int> to Subtask<U>. Subtask<U> inherits from the unique class Task<U>, so it went on to do an exact inference from int to U (exact because Task<T> is invariant). That leaves int as an exact bound for U
  • U does not depend on any unfixed type variables, so fix it
    • Exact bound int is the only bound, so U = int
  • No unfixed type variables remain, so the second phase completes and inference succeeds.
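The back-compat behaviour Example 13 relies on can be checked in today's C# by replacing the async lambda with one that returns Task&lt;int&gt; directly (helper and f above become the hypothetical Helper and F here; nothing from the proposal is needed):

```csharp
using System;
using System.Threading.Tasks;

interface I<in T> { }
class Subtask<T> : Task<T> { public Subtask() : base(() => default(T)) { } }

class UpperBoundDemo
{
    static I<T> Helper<T>(T x) => null;

    static string F<T, U>(Func<T> lambda1, Func<T, I<Subtask<U>>> lambda2)
        => typeof(T).Name + " / " + typeof(U).Name;

    public static void Main()
    {
        // T is fixed to Task<int> from the first lambda. The second lambda's
        // inferred return type I<Task<int>> is matched against I<Subtask<U>>;
        // I is contravariant, so that becomes an upper bound inference from
        // Task<int> to Subtask<U>, which walks up Subtask's unique base class
        // Task<U> and makes an exact inference int -> U.
        Console.WriteLine(F(() => default(Task<int>), x => Helper(x)));
        // prints: Task`1 / Int32
    }
}
```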
@ljw1004 (Contributor) commented Apr 21, 2016

Proposal for type inference

  1. The "inferred return type" of an async lambda will be either the non-generic InferredTask, or the generic InferredTask<T> for some T. It will never be Task or Task<T>. (of course, the inferred return type of a non-async lambda might still be.)

    (Note: InferredTask<T> can never have an unfixed type variable within its type argument.)

  2. A new exact inference case: an exact inference from InferredTask<U1> to any generic tasklike Tasklike<V1> causes an exact inference to be made from U1 to V1.

    (Note: we don't consider InferredTask<T> to be an array, nullable or constructed type in any of exact, lower or upper bound inference.)

    (Note: there's no need to handle the case of exact, lower or upper bound inference from any U to InferredTask<V1> since such a V1 can never contain any unfixed type variables and hence no inferences will ever be made from it.)

  3. A new lower bound inference case: a lower bound inference from InferredTask<U1> to any generic tasklike Tasklike<V1> causes an exact inference to be made from U1 to V1.

    (Note: You'd then think we need an additional rule that a lower bound inference from InferredTask<U1> to some non-generic-tasklike V causes a lower bound inference from Task<U1> to V for back-compat. But it's not needed, because Task<T> doesn't inherit or implement anything generic.)

  4. Two new upper bound inference cases:

    • an upper bound inference from InferredTask<U1> to any generic tasklike Tasklike<V1> causes an exact inference to be made from U1 to V1.
    • an upper bound inference from InferredTask<U1> to some V that isn't a generic tasklike causes an upper bound inference from Task<U1> to V.

    (Example 13 above shows that this second bullet is needed for upper bound inference, even though it wasn't needed for exact or lower bound inference).

  5. For purposes of fixing, we consider a few additional implicit conversions

    • InferredTask has an implicit conversion to Task, and to anything that Task in turn has an implicit conversion to
    • InferredTask<T> likewise has an implicit conversion to Task<T>, and likewise anything that Task<T> in turn has an implicit conversion to

    TODO some examples with a variety of InferredTasks and exact/upper/lower bounds to check that it makes sense.


@ljw1004 (Contributor) commented Apr 22, 2016

I've been exploring the idea of InferredTask coming out of type inference. But it's time to step back, take stock, and try to understand why.

The overall goal of InferredTask coming out of type inference

The idea is that an InferredTask is basically the same as a Task -- it will be treated as a Task for purposes of applicability, and if an InferredTask candidate wins overload resolution then it will be used as Task.

The sole difference has to do with two applicable overload candidates, one built specifically using tasklikes and the other picking up InferredTask from type inference. Normally there'd be no conversion between them so it would be an ambiguous overload, but InferredTask lets us bypass that fruitless conversion test and instead look at more useful tie-breakers.

Approach 1: radical equivalence of Task and Tasklikes.

Proposal: Leave type inference mostly as it is, returning Task as it does now (except in cases where it's specifically given a target delegate type with some other tasklike return type), but treat every single Task in an overload candidate as equivalent to a tasklike for purposes of judging whether two applicable candidates have identical parameter types.

This would throw away my work on InferredTask coming out of type inference. It'd be a lot easier! Here's a worked example:

void Run<T>(Func<T> lambda)  // [3], infers T=Task<int>
void Run<T>(Func<MyTask<T>> lambda) // [4], infers T=int
Run(async () => { return 1; } );
// the two have identical parameter types up to tasklike, so prefer [4] in tie-breakers

Will there be problems? If so, the potential problems could only arise in cases where there are two applicable candidates for the same argument list. It would mean tie-breakers being used rather than betterness.

I think this idea has legs. I'm going to try it.

Approach 2: allow lots of InferredTask to come out of type inference

Proposal: We should feel free to let InferredTask come out of type inference wherever any of the bounds mentions it. It will never change applicability. All it will ever do is help tie-break between two candidates that would otherwise not have had a chance.

Approach 3: be restrictive in whether InferredTask can come out of type inference

Proposal: InferredTask has a very specific meaning out of type inference. It means that any tasklike can be substituted for the InferredTask placeholder, and the method call will be "valid" for all of them equally (up to the limits of what type inference does). It should come out of type inference *only in those situations.*

Thoughts on approaches 2+3

These are the principal motives:

  • We like inferring InferredTask rather than Task. Doing so will never make an overload candidate non-applicable. All it will ever do is increase the circumstances in which two applicable members will be considered equivalent and amenable to tie-breakers. (If they can't win on tie-breakers, they generally won't have been able to win on betterness)
  • We like inferring InferredTask rather than Task. If T has lower bounds InferredTask and Task, well, it means that whatever T we pick, both must implicitly convert to it. InferredTask stands for a quantum superposition of every possible tasklike, able to collapse to any of them. It only makes sense to pick Task here, because picking "every possible tasklike" would be tantamount to saying that every possible tasklike has a conversion to Task.
  • If T has lower bounds InferredTask and IAsyncResult, well, we can't say that every possible tasklike will implement IAsyncResult. So we have to assume that the tasklike was really Task, and hence pick IAsyncResult as the winner.
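The last bullet matches what fixing already does today when Task and IAsyncResult are both lower bounds, which can be demonstrated with a hypothetical helper (PickType is a made-up name):

```csharp
using System;
using System.Threading.Tasks;

class FixDemo
{
    static Type PickType<T>(T a, T b) => typeof(T);

    public static void Main()
    {
        // Lower bounds for T are Task and IAsyncResult. Task implicitly
        // converts to IAsyncResult (it implements the interface) but not
        // vice versa, so fixing removes the Task candidate: T = IAsyncResult.
        Console.WriteLine(PickType(default(Task), default(IAsyncResult)));
        // prints: System.IAsyncResult
    }
}
```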

Here are some examples that might distinguish approaches 2 and 3.

(I had a hunch that the rule should be: "For purposes of fixing, treat it as though InferredTask has an implicit conversion to Task and to anything that Task in turn has an implicit conversion to." The hunch wasn't particularly strongly motivated, and its weakness is what led me to re-examine the whole reason behind InferredTask.)

Example F0: T has lower bounds Task<int> and InferredTask<int>

void f<T>(Func<T> lambda1, Func<T> lambda2)
f(async()=>3, ()=>default(Task<int>))
  • lambda1 provides a lower bound InferredTask<int> for T
  • lambda2 provides a lower bound Task<int> for T
  • My hunch would infer T = Task<int>.

In the presence of a second overload

void f<T>(Func<MyTask<T>> lambda1, Func<MyTask<T>> lambda2)

then the hunch inference T = Task<int>, well, that doesn't change applicability: the original f is applicable, and the second f is not, so it will pick the first. That's good!

Example F1: U has upper bounds Task<int> and InferredTask<int>

interface I<in T> {}
I<T> helper<T>(T x) => null;
void f<T,U>(Func<T> lambda1, I<U> arg2, Func<T,I<U>> arg3) {}

f(async()=>3, default(I<Task<int>>), (x)=>helper(x));
  • lambda1 provides a lower bound InferredTask<int> for T, and fixing T gives T = InferredTask<int>
  • arg2 provides an upper bound Task<int> for U
  • arg3 provides an upper bound InferredTask<int> for U
  • My hunch would infer U = InferredTask<int>.

In the presence of a second overload

void f<T,U>(Func<T> lambda1, I<MyTask<U>> arg2, Func<T,I<MyTask<U>>> arg3) {}

then the hunch inference U = InferredTask<int>, well, that doesn't change applicability: the original f is applicable, and the second f is not, so it will pick the first and then collapse it down to Task<int>. That's good!

Other examples for fixing

  • Example F2: U has lower bounds IEnumerable<InferredTask<int>> and IEnumerable<Task<int>>
  • Example F3: U has lower bounds Action<InferredTask<int>> and Action<Task<int>>
  • Example F4: U has lower bounds Task<int> and MyTask<int> and InferredTask<int>
  • Example F5: U has lower bounds InferredTask<int> and object
  • Example F6: Fixing it to MyTask<int> from one lambda argument conflicts with YourTask<int> that's needed elsewhere
@ljw1004 (Contributor) commented Apr 22, 2016

Here are some completely blank task-builders, as a starting point for experiment:

[Tasklike(typeof(MyTaskBuilder))]
class MyTask { }

[Tasklike(typeof(MyTaskBuilder<>))]
class MyTask<T> { }

class MyTaskBuilder
{
    public static MyTaskBuilder Create() => null;
    public void SetStateMachine(IAsyncStateMachine stateMachine) { }
    public void Start<TSM>(ref TSM stateMachine) where TSM : IAsyncStateMachine { }
    public void AwaitOnCompleted<TA, TSM>(ref TA awaiter, ref TSM stateMachine) where TA : INotifyCompletion where TSM : IAsyncStateMachine { }
    public void AwaitUnsafeOnCompleted<TA, TSM>(ref TA awaiter, ref TSM stateMachine) where TA : ICriticalNotifyCompletion where TSM : IAsyncStateMachine { }
    public void SetResult() { }
    public void SetException(Exception ex) { }
    public MyTask Task => null;
}

class MyTaskBuilder<T>
{
    public static MyTaskBuilder<T> Create() => null;
    public void SetStateMachine(IAsyncStateMachine stateMachine) { }
    public void Start<TSM>(ref TSM stateMachine) { }
    public void AwaitOnCompleted<TA, TSM>(ref TA awaiter, ref TSM stateMachine) where TA : INotifyCompletion where TSM : IAsyncStateMachine { }
    public void AwaitUnsafeOnCompleted<TA, TSM>(ref TA awaiter, ref TSM stateMachine) where TA : ICriticalNotifyCompletion where TSM : IAsyncStateMachine { }
    public void SetResult(T result) { }
    public void SetException(Exception ex) { }
    public MyTask<T> Task => null;
}

namespace System.Runtime.CompilerServices
{
    public class TasklikeAttribute : Attribute
    {
        public TasklikeAttribute(Type builder) { }
    }
}

edit: Thanks to @bbarry in the comment below for reminding me about generic constraints; I'd forgotten about them. What would happen if the user provided a builder that is missing the appropriate generic constraints? (That's theoretically acceptable.) Or if they provided a builder with too-strict generic constraints? (Then codegen, if allowed to proceed, would produce a binary that failed PEVerify.) In the end, I decided I might as well just require the builder to have the EXACT correct constraints.
ljw1004@fc64b23
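For a feel of how the compiler drives such a builder, here is a hand-written expansion of a trivial async method against the real AsyncTaskMethodBuilder&lt;T&gt;; the same sequence of calls would be made on MyTaskBuilder&lt;T&gt; above. (TrivialStateMachine is a made-up name, and a real expansion has more fields and states.)

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

// Roughly the state machine the compiler emits for:
//   async Task<int> M() { return 42; }
struct TrivialStateMachine : IAsyncStateMachine
{
    public AsyncTaskMethodBuilder<int> Builder;

    // No awaits, so MoveNext runs to completion synchronously.
    public void MoveNext() => Builder.SetResult(42);
    public void SetStateMachine(IAsyncStateMachine sm) => Builder.SetStateMachine(sm);
}

class BuilderDemo
{
    public static void Main()
    {
        var sm = new TrivialStateMachine { Builder = AsyncTaskMethodBuilder<int>.Create() };
        sm.Builder.Start(ref sm);                  // calls sm.MoveNext()
        Console.WriteLine(sm.Builder.Task.Result); // 42
    }
}
```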


@bbarry commented Apr 22, 2016

Assuming the generic constraints

where TA : INotifyCompletion 
where TSM : IAsyncStateMachine 

are allowed on the relevant methods


@ljw1004 (Contributor) commented Apr 22, 2016

I've updated the download link (at top of this issue) with a new version that works with overload resolution. Also I've written some more samples, including IObservable.

What kind of Observable do you think an async method should produce? (also once we add async iterators?)

async IObservable<string> Option1()
{
    // This behaves a lot like Task<string>.ToObservable() – the async method starts the moment
    // you execute Option1, and if you subscribe while it's in-flight then you'll get an OnNext+OnCompleted
    // as soon as the return statement executes, and if you subscribe later then you'll get OnNext+OnCompleted
    // immediately the moment you subscribe (with a saved value).
    // Presumably there's no way for Subscription.Dispose() to abort the async method...
    await Task.Delay(100);
    return "hello";
}

async IObservable<string> Option2()
{
    // This behaves a lot like Observable.Create() – a fresh instance of the async method starts
    // up for each subscriber at the moment of subscription, and you'll get an OnNext the moment
    // each yield executes, and an OnCompleted when the method ends.
    // Perhaps if the subscription has been disposed then yield could throw OperationCancelledException?
    await Task.Delay(100);
    yield return "hello";
    await Task.Delay(200);
    yield return "world";
    await Task.Delay(300);
}

It’s conceivable that both could exist, keyed off whether there’s a yield statement inside the body, but that seems altogether too subtle…
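As a point of comparison, Option 1's semantics can be approximated today with a tiny hand-rolled adapter (TaskObservable and LatchObserver are made-up names, not Rx types): the task runs once, eagerly, and every subscriber, whether in-flight or late, gets a single OnNext followed by OnCompleted.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class TaskObservable<T> : IObservable<T>
{
    private readonly Task<T> _task;
    public TaskObservable(Task<T> task) { _task = task; }

    public IDisposable Subscribe(IObserver<T> observer)
    {
        // ContinueWith fires immediately if the task has already completed,
        // which gives late subscribers OnNext+OnCompleted right away.
        _task.ContinueWith(t =>
        {
            if (t.IsFaulted) observer.OnError(t.Exception);
            else { observer.OnNext(t.Result); observer.OnCompleted(); }
        });
        // Note there is no hook here for Dispose() to abort the computation,
        // matching the caveat in Option1 above.
        return new Noop();
    }
    private sealed class Noop : IDisposable { public void Dispose() { } }
}

class LatchObserver<T> : IObserver<T>
{
    public T Last;
    public readonly ManualResetEventSlim Completed = new ManualResetEventSlim();
    public void OnNext(T value) => Last = value;
    public void OnCompleted() => Completed.Set();
    public void OnError(Exception error) => Completed.Set();
}

class ObservableDemo
{
    public static void Main()
    {
        // A "late" subscriber to an already-finished async computation:
        var source = new TaskObservable<string>(Task.FromResult("hello"));
        var observer = new LatchObserver<string>();
        source.Subscribe(observer);
        observer.Completed.Wait();
        Console.WriteLine(observer.Last); // hello
    }
}
```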


@benaadams (Contributor) commented Apr 23, 2016

Would this work with structs?

Assuming
SetStateMachine(IAsyncStateMachine stateMachine) { }
was changed to
SetStateMachine<TSM>(ref TSM stateMachine) where TSM : IAsyncStateMachine { }


@ljw1004 (Contributor) commented Apr 23, 2016

@benaadams could you explain a bit further?

The way SetStateMachine works is that it's only ever invoked when an async method encounters its first cold await. At that moment the state machine struct gets boxed on the heap. The state machine has a field for the builder, and the builder is typically a struct, so the builder is included within that box. And then the builder's SetStateMachine method is called so that the builder gets a reference to the boxed state machine and can subsequently call stateMachine.MoveNext.

Given that SetStateMachine is only ever called after the state machine struct has been boxed, I think there's no benefit to what you describe.


@benaadams


benaadams Apr 23, 2016

Contributor

@ljw1004 makes sense

One of the issues with tasks is that they aren't very good for fine-grained awaits, which are often sync/hot awaits, since at each step you allocate a Task. I'll do the installs and see if I can understand more.


@tpetricek


tpetricek Apr 23, 2016

[Sorry, I did not read the whole thread - it is way too long! I saw @ljw1004's tweet asking about semantics of computations over IObservable<T>, so I'm adding my thoughts on that - it might be out of context here.]

There are a couple of things that do something similar:

They all have similar syntax with some sort of computation that can asynchronously await things, yield values and iterate over other async streams using for. The fun fact is that they all work differently.

The tricky case for the semantics is if you write something like this:

async IObservable<string> Foo(IObservable<string> another) {
    async for(var it in another) {
        await Task.Delay(1000);
        yield "Hello: " + it;
    }
}

Or the same thing written using asyncSeq in F# for reference:

let foo (another:AsyncSeq<string>) = asyncSeq {
    for it in another do
        do! Async.Sleep(1000)
        yield "Hello: " + it }

This iterates over the another observable (I made up the async for syntax for C#; F# overloads the meaning of for in computation expressions), waits 1 second and yields a "Hello" message.

  • F# AsyncSeq implements an asynchronous pull model, which means that items from another are requested one by one (though this is done asynchronously). So, the for loop requests the first value, sleeps 1 second, produces a value, requests another value and so on.

    This works great for pull based sources (say, download next 1kb of data from internet). When you have push based source (say, tweets feed), you have to decide what to do with events you receive when the consumer does not want them (i.e. is processing the body of the loop). You can either throw them away (e.g. ignore MouseMove events) or queue them (hoping that you'll process them later), or some mix of the two.

  • The Dart extension from Erik uses Dart's version of IObservable<T>, but with an important addition! Rather than having IDisposable Subscribe(IObserver<T>), the Subscribe method returns some ISubscription which has Dispose (stop) but also Pause and Resume. This basically lets you use the type for both push model and asynchronous pull model (where you explicitly request next value by un-pausing the source).

    So, in Dart, the for loop requests next value, pauses the another source immediately, runs the body of the loop and then calls resume to get the next value from the stream. Although it is based on types more akin to .NET IObservable<T>, it actually behaves very much like F#'s asynchronous pull model.

  • In the Scala's Ray library, the streams are push model. The question is what to do with values that are produced while the body of the for loop is processing previous items.

    You can either (1) drop them, (2) queue them or (3) start the body multiple times in parallel. I think that doing (3) is highly counter-intuitive, because it would be breaking the intuition about the sequentiality of the code. The Ray library does (2) and queues all events, which is problematic (you cannot control the queue and it may overflow). Dropping them is equally bad. (For the record, with F# you can add combinators to implement (1) and (2).)

I think the sequential nature of the code makes it really hard to write computations that consume push-based sources. It becomes apparent once you want to write something like async for loops - there is no good way of doing that for push source, which is why F# only supports it for async pull source.

So, having this over IAsyncEnumerable (or AsyncSeq<T> that exists in F# libraries) would be great, but I don't think it can be done in any reasonable way over IObservable (unless you add pause/resume to the subscription object).
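The asynchronous pull model described above can be sketched with a deliberately minimal, hypothetical interface (not the eventual IAsyncEnumerable<T> shape): the consumer explicitly requests each item, so a slow loop body naturally throttles the producer and the push-model question of what to do with unwanted events never arises.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical minimal async-pull source; null signals end of stream.
interface IAsyncPullSource<T> where T : class
{
    Task<T> TryNextAsync();
}

class ArraySource : IAsyncPullSource<string>
{
    readonly string[] items;
    int i;
    public ArraySource(params string[] items) => this.items = items;
    public async Task<string> TryNextAsync()
    {
        await Task.Delay(1);                  // pretend each pull is asynchronous
        return i < items.Length ? items[i++] : null;
    }
}

class Program
{
    static async Task Main()
    {
        var src = new ArraySource("a", "b");
        for (string item; (item = await src.TryNextAsync()) != null; )
        {
            await Task.Delay(10);             // slow body: the source simply waits
            Console.WriteLine("Hello: " + item);
        }
    }
}
```

Nothing is produced while the body runs, which is exactly the back-pressure property a push-based IObservable<T> lacks without pause/resume on the subscription.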


@benaadams


benaadams Apr 23, 2016

Contributor

@ljw1004 I like this a lot! For https://gist.github.com/benaadams/e23dabec441a7fdf613557aba4688b33

Timing 100M ops - Only the Task branch allocates (this is all for sync completes)

ValueTask    Await    Async  3128.6621ms x  3.2 inc vs Task 
ValueTask No-Await    Async  2638.6003ms x  3.8 inc vs Task
ValueTask    Await No-Async  2403.1469ms x  4.2 inc vs Task
ValueTask No-Await No-Async   960.3686ms x 10.4 inc vs Task
     Task    Await    Async 10070.9579ms

So full async and await add some overhead, as expected since it's doing more, but it's greatly improved over Task<T> for a mostly-sync operation, and has no allocations.


@benaadams


benaadams Apr 23, 2016

Contributor

The extensions seem to make the non-async paths heavier (the async path obviously can't compile without them)

Uninstalling the extensions and commenting out the async ValueTask bits

ValueTask    Await No-Async 1376.0959ms x  7 inc vs Task
ValueTask No-Await No-Async  220.5036ms x 44 inc vs Task
     Task    Await    Async 9714.2687ms

But it would still be better for the common case which would be async+await; and cuts all the allocations.


@ljw1004


ljw1004 Apr 24, 2016

Contributor

@benaadams, thanks for the perf analysis. I've taken your gist, augmented it a bit, and checked it into source control: https://github.com/ljw1004/roslyn/blob/features/async-return/ArbitraryAsyncReturns-sample/Performance.cs

Your last post "the extensions seem to make the non-async paths heavier" is just a case of experimental error, I'm sure. Repeat runs of the test give about 10% variation, and machines are always susceptible to hiccups. Looking at the output in ILDASM, my extensions produce exactly the same IL output for the three methods in question.

But I think your tests might be a bit misleading... even your async Task case has no allocations! That's because it ends up taking only hot paths, and returns a Task<int> with value 0, and the BCL keeps singleton instances of about 15 common Task values (including -1, 0, 1...8, true, false, null). If you'd made your async Task method return 10 rather than return delay, then you'd have seen heap allocations.
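The caching described here can be observed directly. This is a sketch against current .NET runtimes; the exact set of cached values is a BCL implementation detail, not a guarantee:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    // Completes synchronously: Task.Delay(0) is an already-completed task,
    // so the builder's SetResult runs on the hot path.
    static async Task<int> ReturnAsync(int n)
    {
        await Task.Delay(0);
        return n;
    }

    static void Main()
    {
        // Small values hit the builder's singleton cache: same Task<int> instance.
        Console.WriteLine(ReferenceEquals(ReturnAsync(0), ReturnAsync(0)));
        // Values outside the cache allocate a fresh Task<int> per call.
        Console.WriteLine(ReferenceEquals(ReturnAsync(100), ReturnAsync(100)));
    }
}
```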

I've written up the performance results with some more explanation:

Performance analysis

Naive Task-returning async method

This is the naive code you might write with async/await if you weren't concerned about performance. This takes about 120ns on JIT, about 90ns on .NET Native, when called with delay=0.

async Task<int> TestAsync(int delay)
{
    await Task.Delay(delay);
    return delay;
}

var x = await TestAsync(0);

Switch it to naive ValueTask-returning async method

The feature we're discussing lets you easily change it as below.
This improves perf to about 65ns on JIT, about 55ns on .NET Native:

async ValueTask<int> TestAsync(int delay)
{
    await Task.Delay(delay);
    return delay;
}

var x = await TestAsync(0);

Manual optimization: avoid awaiting on the hot path

It's okay to await an already-completed task such as Task.Delay(0) - the compiler does so with very little overhead.
But we can shave off even that overhead if we want,
to about 50ns on JIT and 30ns on .NET Native:

async ValueTask<int> TestAsync(int delay)
{
    if (delay > 0) await Task.Delay(delay);
    return delay;
}

var t = TestAsync(0);
var x = t.IsCompletedSuccessfully ? t.Result : await t;

Ultimate manual optimization: async-elimination

We can build upon the previous optimization with the ultimate optimization: eliminating the async method entirely on the hot path.
This cuts it down to the bare bones, about 10ns on JIT, and also 10ns on .NET Native:

ValueTask<int> TestAsync(int delay)
{
    if (delay > 0) return ValueTask<int>.FromTask(TestAsyncInner(delay));
    else return ValueTask<int>.FromValue(delay);
}

async Task<int> TestAsyncInner(int delay)
{
    await Task.Delay(delay);
    return delay;
}

var t = TestAsync(0);
var x = t.IsCompletedSuccessfully ? t.Result : await t;

Ballpark figures? I think that all of these numbers (10ns up to 150ns overhead for calling into an async method) are pretty negligible so long as they're not done in the innermost inner loop of your code. And you should never do that anyway, e.g. because locals in an async method tend to be implemented as fields in a struct. Even the 150ns will quickly be dominated by everything else.

The real win of this feature and ValueTask isn't in shaving off CPU cycles as exercised in these tests. Instead it's in avoiding heap allocations for cases where you have to return an already-completed Task<T> where the value of your task isn't one of the 15 hard-coded already-known-about ones in the BCL. I think real-world benchmarks are the best way to demonstrate an advantage in reducing GC.


@benaadams


benaadams Apr 24, 2016

Contributor

I should have looked at the counts; it did allocate, just in a smaller way - probably the cached Tasks :)

Changing to returning 10 and dropping the iters to 1M (so the profile completes in a sane time)

[task-allocs: allocation profile screenshot]

The Task path allocates 1M Task<T> at 80MB, whereas ValueTask<T> allocates nothing

I think that all of these numbers (10ns up to 150ns overhead for calling into an async method) are pretty negligible so long as they're not done in the innermost inner loop of your code. They'll quickly become dominated by everything else.

The allocations are definitely the big player here.

However, there are areas where the overheads are a significant factor, if you don't write convoluted code. Parsing and messaging on async data are the two that come to mind: whether parsing XML, parsing form data, or reading messages from a WebSocket - though that has other allocation issues.

It would also likely be the next issue for async observables/enumerables after allocations, if they were doing small transformations like a Where clause in LINQ.


@omariom


omariom Apr 24, 2016

However, there are areas where the overheads are a significant factor, if you don't write convoluted code.

Agree. Otherwise we will get "Avoid async/await on the fast(est) path" as it is now with LINQ.


@benaadams


benaadams Apr 24, 2016

Contributor

Performance aside, since I don't think it has relevance to the API surface: I'm wondering if/how cancellation tokens fit into the IObservable? e.g.

async IObservable<string> Option1or2(CancellationToken cancellationToken)

or even

o.Subscribe((msg) => {}, (ex) => {}, () => {}, cancellationToken) 
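A third option, sketched below with entirely hypothetical types, is to leave the Subscribe surface alone and map cancellation onto the existing unsubscribe semantics via CancellationToken.Register:

```csharp
using System;
using System.Threading;

// Trivial observable whose subscription records its own disposal,
// so we can observe that cancellation triggered the unsubscribe.
class Trivial : IObservable<string>
{
    public bool Disposed;
    public IDisposable Subscribe(IObserver<string> o) => new Sub(this);
    class Sub : IDisposable
    {
        readonly Trivial owner;
        public Sub(Trivial t) => owner = t;
        public void Dispose() => owner.Disposed = true;
    }
}

class Program
{
    static IDisposable SubscribeWithCancellation<T>(
        IObservable<T> source, IObserver<T> observer, CancellationToken token)
    {
        var sub = source.Subscribe(observer);
        token.Register(sub.Dispose);  // cancel => unsubscribe
        return sub;
    }

    static void Main()
    {
        var src = new Trivial();
        using var cts = new CancellationTokenSource();
        SubscribeWithCancellation(src, observer: null, cts.Token);
        Console.WriteLine(src.Disposed);  // False: still subscribed
        cts.Cancel();
        Console.WriteLine(src.Disposed);  // True: token fired the Dispose
    }
}
```

This keeps cancellation orthogonal to the observable's own API, at the cost of not being able to distinguish cancellation from an ordinary unsubscribe.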
@ljw1004


ljw1004 Apr 24, 2016

Contributor

@bbarry just to say thanks for the reminder about generic constraints. I was interested to learn that the C# compiler doesn't ever validate constraints on the well-known types and methods it uses. If someone provided a well-known type/method themselves, and placed too-strict constraints on it, the compiler would generate code that doesn't PEVerify. For this case I changed the compiler to make sure that the async method builders have exactly the expected constraints, neither more nor fewer.


@ljw1004


ljw1004 Apr 26, 2016

Contributor

I'm closing this thread. I've summarized the results of this thread in the spec and design rationale documents.

I have started a new discussion thread #10902 for further comments.

