
Async parallel foreach #1946

Closed
RoccoDevs opened this issue Dec 24, 2018 · 91 comments · Fixed by #46943
Assignees
Labels
api-approved API was approved in API review, it can be implemented area-System.Threading.Tasks
Milestone

Comments

@RoccoDevs

RoccoDevs commented Dec 24, 2018

There are situations where an asynchronous parallel foreach is needed instead of the existing Parallel.ForEach(), and while it's not rocket science to implement yourself, I think it would be a nice feature to have in corefx. This should be seen as the asynchronous alternative to the existing Parallel.ForEach(), and it should accept IEnumerable<T> as well as IAsyncEnumerable<T>.

There are already examples of this; here is one: https://devblogs.microsoft.com/pfxteam/implementing-a-simple-foreachasync-part-2/

        /// <summary>
        ///     Executes a foreach asynchronously.
        /// </summary>
        /// <typeparam name="T">The element type.</typeparam>
        /// <param name="source">The source.</param>
        /// <param name="dop">The degree of parallelism.</param>
        /// <param name="body">The body.</param>
        /// <returns>A task that completes when all partitions have finished.</returns>
        public static Task ForEachAsync<T>(this IEnumerable<T> source, int dop, Func<T, Task> body)
        {
            return Task.WhenAll(
                from partition in System.Collections.Concurrent.Partitioner.Create(source).GetPartitions(dop)
                select Task.Run(async delegate
                {
                    using (partition)
                    {
                        while (partition.MoveNext())
                            await body(partition.Current);
                    }
                }));
        }

Proposed API

namespace System.Threading.Tasks
{
    public static class Parallel
    {
        public static Task ForEachAsync<T>(IAsyncEnumerable<T> source, Func<T, ValueTask> body);
        public static Task ForEachAsync<T>(IAsyncEnumerable<T> source, ParallelOptions parallelOptions, Func<T, CancellationToken, ValueTask> body);

        public static Task ForEachAsync<T>(IEnumerable<T> source, Func<T, ValueTask> body);
        public static Task ForEachAsync<T>(IEnumerable<T> source, ParallelOptions parallelOptions, Func<T, CancellationToken, ValueTask> body);
    }
}

Closed questions

While I consider these questions to be closed, feel free to continue discussing them.

  • Should DOP be automatic or manual at all times?
    DOP will have a default, just like Parallel LINQ. However, documentation should make it clear that this default can be wrong in various cases, as detailed by @GSPP in several comments.
  • Should it accept IEnumerable?
    Yes, it should.
  • Should the partitioner concept be supported at all?
    No; we should ensure the specified DOP is adhered to exactly. Creating batches can lead to a smaller-than-expected effective DOP.
  • Should we process the source items sequentially or make no attempt?
    By default we should try to process source items sequentially to improve data locality: HDD read performance increases drastically and cache usage improves. It should, however, be made very clear in the docs that ordering cannot be guaranteed.
  • Should there be an alternative to Parallel.For?
    There should be one, but I'll leave that outside of this issue.
  • Should the funcs return ValueTask instead of Task?
    Yes. ValueTask is more accommodating than Task (it's cheaper to convert a Task to a ValueTask than the other way around), and it should result in a performance increase.
  • Should TaskCreationOptions.DenyChildAttach be specified?
    Yes, this should be specified; see this comment for more info.
  • Should there be an overload without CancellationToken in the Func?
    No, there shouldn't be. It would have little to no benefit but would cause a lot of complications (ambiguity, harder analysis).
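A minimal sketch of how the ParallelOptions overload proposed above might be used (the names follow the proposal; the Task.Delay is just a stand-in for real async IO):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class Sketch
{
    public static async Task<int> SumAsync()
    {
        int total = 0;
        var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };

        // At most four bodies run at a time; 'ct' is the loop's token, which
        // would also be signaled if another iteration faulted.
        await Parallel.ForEachAsync(Enumerable.Range(1, 100), options, async (i, ct) =>
        {
            await Task.Delay(1, ct);        // stand-in for real async IO
            Interlocked.Add(ref total, i);  // thread-safe accumulation
        });

        return total; // 5050
    }

    static async Task Main() => Console.WriteLine(await SumAsync());
}
```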
@ichensky
Contributor

A ForEachAsync method can have too many different implementations. Some users would like to create their own Partitioner<TSource> class, others don't need one in a ForEachAsync-like method, and some users would like to run their own absolutely specific logic in the Task.Run(async delegate ..... block.

@svick
Contributor

svick commented Dec 24, 2018

@ichensky Compare this with the synchronous Parallel.ForEach:

  • Some users would like to use Partitioner, and there's an overload for that.
  • Some users will need completely custom logic, and so they can't use Parallel.ForEach.

But Parallel.ForEach still exists and is useful. I think the situation with Parallel.ForEachAsync is similar: we could create a set of overloads that are useful in the common cases. And that would still be useful, even if it wouldn't cover all cases.

@RoccoDevs
Author

RoccoDevs commented Dec 24, 2018

@ichensky I fully agree with the way @svick explains it.
I would simply see this as the asynchronous version of Parallel.ForEach.

A set of overloads for the most common cases is needed for sure!

@Clockwork-Muse
Contributor

Hmm. Probably wants to be based on dotnet/corefx#32640 ?

@svick
Contributor

svick commented Dec 24, 2018

@Clockwork-Muse Possibly. I think Parallel.ForEachAsync that somehow accepts IEnumerable<T> is important. One that accepts IAsyncEnumerable<T> would be useful too. If there was a simple built-in way to convert from IEnumerable<T> to IAsyncEnumerable<T>, then overloads that directly accept IEnumerable<T> might not be necessary. Does something like that exist or is it proposed?
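If no built-in conversion existed, a minimal proxy could look like the following sketch (the ToAsyncEnumerable name here is hypothetical; the System.Linq.Async package later shipped an equivalent extension):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public static class EnumerableBridge
{
    // Wraps a synchronous sequence so it can be consumed as IAsyncEnumerable<T>.
    // Every MoveNextAsync completes synchronously; the wrapper only adapts the
    // interface, it adds no real asynchrony.
    public static async IAsyncEnumerable<T> ToAsyncEnumerable<T>(this IEnumerable<T> source)
    {
        foreach (T item in source)
        {
            yield return item;
        }
        await Task.CompletedTask; // async iterators require an await somewhere
    }
}
```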

@RoccoDevs
Author

@svick Yes accepting IAsyncEnumerable<T> sounds good!

@ichensky
Contributor

Usually when using ForEachAsync, for example over a list of files where each task downloads/processes those files, it is useful to pass a custom parameter such as the maximum number of tasks that may execute at one time (because of internet bandwidth, dependencies on other services, and so on). From time to time it is also needed that ForEachAsync not just waits until all tasks have finished but also returns their results.

@WOLVIE97

Kool..

@GSPP

GSPP commented Jan 15, 2019

This API is urgently needed. In particular, there is currently no way to process a set of "items" in parallel and in an async way. Parallel and PLINQ (.AsParallel()) cannot be used for this. I believe Dataflow can be used, but it's awkward for something this simple.

The source for the code in the opening post is https://blogs.msdn.microsoft.com/pfxteam/2012/03/05/implementing-a-simple-foreachasync-part-2/.

I am very active on Stack Overflow and I see people needing this all the time. People then use very bad workarounds, such as starting all items in parallel at the same time and then WhenAll-ing them. So they start 10,000 HTTP calls and wonder why it performs so poorly. Or they execute items in batches, which can be much slower because as each batch completes item by item the effective DOP decreases. Or they write very awkward looping code with collections of tasks and weird waiting schemes.

I needed this API multiple times myself. So I developed a solution based on the code in this article. I made the following enhancements which the framework should also have:

  1. A CancellationToken should be taken as an argument. When it is signaled no further loop iterations should be started.
  2. When an exception is thrown by one item the loop should be cancelled. When all items are done executing all exceptions should be thrown together in the form of an AggregateException.
  3. The worker delegate should receive a CancellationToken. This is a token that combines the externally passed token and cancellation triggered internally by an exception.
  4. The partitioner should not do any batching. It must be ensured that the specified degree of parallelism is adhered to exactly. For IO, it is important to use the DOP specified by the caller. Batching can lead to the effective DOP being smaller than expected.
  5. Since we are starting tasks we should think about whether a TaskScheduler should be taken as an argument. The caller might want to do a lot of CPU work so he might want to schedule that. Alternatively, the caller can switch to a different scheduler as part of the delegate he passes in. So we might not want to natively support TaskScheduler.
  6. In my implementation I specified TaskCreationOptions.DenyChildAttach for those tasks. I'm not entirely sure about this.
  7. Do we need to think about the SynchronizationContext? As the code stands user code will always be called on the thread pool with no sync context.

Should the DOP be automatic? In my opinion, we have no choice but to force the caller to specify an explicit DOP. Usually, user code will perform IO as part of the loop (that's likely the reason for using async). The system has no way of knowing the right DOP for IO. Different devices (disk, SSD, web service) have vastly different characteristics. Also, the DOP might be intentionally low in order to not overload the system being called. IMO, no auto-tuning is possible. We cannot even make the parameter optional and default to the CPU count. The CPU count is unrelated to the ideal IO DOP! This is why IO on the thread pool can explode so radically. The auto-tuning heuristics can get into a state where they add unbounded amounts of threads (see these issues: https://github.com/dotnet/coreclr/issues/1754, https://github.com/dotnet/coreclr/issues/2184).
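The enhancements above (points 1-4) can be sketched roughly as follows. This is a simplified illustration, not the framework implementation: it materializes the source into a queue up front, which a real implementation would avoid, and the name is hypothetical.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public static class ForEachAsyncSketch
{
    // 'dop' workers each pull single items (no batching, so the requested DOP
    // is adhered to exactly). A linked token stops further iterations on
    // external cancellation or after the first failure, and the combined task
    // from Task.WhenAll carries every worker's exception.
    public static async Task ForEachAsync<T>(
        IEnumerable<T> source,
        int dop,
        CancellationToken externalToken,
        Func<T, CancellationToken, Task> body)
    {
        using var cts = CancellationTokenSource.CreateLinkedTokenSource(externalToken);
        var queue = new ConcurrentQueue<T>(source); // simplification: eager materialization

        async Task WorkerAsync()
        {
            while (!cts.IsCancellationRequested && queue.TryDequeue(out T item))
            {
                try
                {
                    await body(item, cts.Token);
                }
                catch
                {
                    cts.Cancel(); // no further iterations start after a failure
                    throw;
                }
            }
        }

        var workers = new Task[dop];
        for (int i = 0; i < dop; i++)
            workers[i] = Task.Run(WorkerAsync);

        await Task.WhenAll(workers);
    }
}
```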

@tarekgh
Member

tarekgh commented Jan 17, 2019

@tomesendam @GSPP could you please add the full proposal according to the first step in the doc https://github.com/dotnet/corefx/blob/master/Documentation/project-docs/api-review-process.md. Thanks.

@RoccoDevs
Author

@tarekgh I will try when I find the time to work on this!

@MgSam

MgSam commented Feb 27, 2019

I agree this is urgently needed for the reasons @GSPP lists.

I'd also add it's urgently needed because Parallel.ForEach() is a pit of failure when writing async code. It's more than happy to accept an async lambda that returns a Task. Only it doesn't do what the user expects (which is to continue only once the async functions within have all fully executed). Instead it completes immediately, leaving the user's code silently broken with a potentially hard-to-track-down issue at runtime.

I'd go so far as to say that Parallel.ForEach should maybe throw an exception when passed a Task-returning method, or at the very least VS should ship with an analyzer that flags this as an anti-pattern.

Even then, this only solves half the problem. Users need a proper async API to call as an alternative.

@yaakov-h
Member

yaakov-h commented Feb 27, 2019

Parallel.ForEach should maybe throw an exception when passed a Task returning method

It's not Task-returning though. It's async void as a lambda.

@tarekgh
Member

tarekgh commented Feb 28, 2019

This is unlikely to make v3.0. Please apply the request I mentioned in my comment https://github.com/dotnet/corefx/issues/34233#issuecomment-455354106 so we can look at it as a whole and move forward.

@MihaZupan
Member

@tomesendam if you want I can open the api proposal issue

I think that a base parallel foreach should accept a Func that takes a CancellationToken.

Func<T, CancellationToken, Task> asyncBody;

The same CT that is passed in ParallelOptions can then be passed along.

There could be other overloads, not accepting a CT for the body that would call the "base" overload, discarding the CT.

So in essence:

Task<ParallelLoopResult> ForEachAsync<T>(IAsyncEnumerable<T> source, ParallelOptions parallelOptions, Func<T, Task> asyncBody)
{
    return ForEachAsync(source, parallelOptions, (work, ct) => asyncBody(work));
}

Task<ParallelLoopResult> ForEachAsync<T>(IAsyncEnumerable<T> source, ParallelOptions parallelOptions, Func<T, CancellationToken, Task> asyncBody);

@RoccoDevs
Author

@MihaZupan Thanks! @tarekgh I'll look into everything next week as I have a week off then. But maybe I can find time this weekend!

@RoccoDevs
Author

@tarekgh The reason I'm putting this in a comment is to check whether this is what you expect! If this is something you can work with, I'll edit the original issue.

There are situations where an asynchronous parallel foreach is needed instead of the existing Parallel.ForEach(), and while it's not rocket science to implement yourself, I think it would be a nice feature to have in corefx. This should be seen as the asynchronous alternative to the existing Parallel.ForEach(), and it should accept IEnumerable<T> as well as IAsyncEnumerable<T>.

There are already examples of this; here is one: https://devblogs.microsoft.com/pfxteam/implementing-a-simple-foreachasync-part-2/

        /// <summary>
        ///     Executes a foreach asynchronously.
        /// </summary>
        /// <typeparam name="T">The element type.</typeparam>
        /// <param name="source">The source.</param>
        /// <param name="dop">The degree of parallelism.</param>
        /// <param name="body">The body.</param>
        /// <returns>A task that completes when all partitions have finished.</returns>
        public static Task ForEachAsync<T>(this IEnumerable<T> source, int dop, Func<T, Task> body)
        {
            return Task.WhenAll(
                from partition in System.Collections.Concurrent.Partitioner.Create(source).GetPartitions(dop)
                select Task.Run(async delegate
                {
                    using (partition)
                    {
                        while (partition.MoveNext())
                            await body(partition.Current);
                    }
                }));
        }

Proposed API

Task<ParallelLoopResult> ForEachAsync<T>(IAsyncEnumerable<T> source, ParallelOptions parallelOptions, Func<T, Task> asyncBody)
{
    return ForEachAsync(source, parallelOptions, (work, ct) => asyncBody(work));
}

Task<ParallelLoopResult> ForEachAsync<T>(IAsyncEnumerable<T> source, ParallelOptions parallelOptions, Func<T, CancellationToken, Task> asyncBody);

Task<ParallelLoopResult> ForEachAsync<T>(IEnumerable<T> source, ParallelOptions parallelOptions, Func<T, Task> asyncBody)
{
    return ForEachAsync(source, parallelOptions, (work, ct) => asyncBody(work));
}

Task<ParallelLoopResult> ForEachAsync<T>(IEnumerable<T> source, ParallelOptions parallelOptions, Func<T, CancellationToken, Task> asyncBody)
{
    // possibly wrap the IEnumerable<T> in an IAsyncEnumerable<T> adapter
}

Open questions

  • Should DOP be automatic or manual at all times? (@GSPP )
  • Should it accept IEnumerable?
  • Should TaskCreationOptions.DenyChildAttach be specified? (@GSPP)

@MihaZupan
Member

MihaZupan commented Mar 12, 2019

As far as I understand it, conversion between IEnumerable and IAsyncEnumerable could be done quite easily (though not necessarily efficiently) via a simple proxy class.

A Parallel.ForEachAsync that had an IAsyncEnumerable as the source would pretty much be a running-until-canceled load balancer.

From what I gathered from the source code, implementing this "properly" is a non-trivial change, requiring changes to the Partitioner, (I assume also Task replicator), and of course the Parallel class itself.

I think input from @stephentoub would be valuable here.

@GSPP

GSPP commented Mar 14, 2019

Should DOP be automatic or manual at all times?

I feel very strongly that it must be manual and a required parameter. See my comment above (at the bottom). I'm speaking both from experience and from theory there.

Should it accept IEnumerable?

If you mean the non-generic version, no. But I think you meant the generic version. Definitely yes. Often, the input is a list of work items (not IO). E.g. URLs, file names, DTOs, ...

Of course, IAsyncEnumerable should be accepted as well.

From what I gathered from the source code, implementing this "properly" is a non-trivial change, requiring changes to the Partitioner, (I assume also Task replicator), and of course the Parallel class itself.

New open question: should the partitioner concept be supported at all? I personally have never needed anything but a single-item partitioner. For IO work, batch partitioning would likely not increase efficiency much.

Also note, that the default behavior must be no batching or chunking. I'm quoting my point 4:

The partitioner should not do any batching. It must be ensured that the specified degree of parallelism is adhered to exactly. For IO, it is important to use the DOP specified by the caller. Batching can lead to the effective DOP being smaller than expected.

The TL;DR is that IO needs exact DOP control. Anything in the way of that is a non-starter.

@MihaZupan
Member

MihaZupan commented Mar 14, 2019

I agree on setting the DOP explicitly.

I too have never used the Partitioner directly. I don't think it would be needed for IAsyncEnumerable; I was just pointing it out as a part that would need changes during implementation.

@GSPP

GSPP commented Apr 18, 2019

We should think about the order that source items are processed. We cannot be fully deterministic here but we have choices:

  1. Make no attempt at ordering. Process IList and arrays using range partitioning for example.
  2. Try to process items sequentially.

Often, it is desirable to process items in approximate sequential order. For example, if you have a list of database keys you want to retrieve you likely want to send the queries to the database in order. That way data locality is increased and cache usage is better.

Or, if you want to read data blocks from a file you want to issue the IOs in ascending file position order so that the disk head sweeps sequentially over the disk (elevator algorithm).

The performance gains from this can be large. Data locality is important.

This is an argument for (2). An argument against (1) is that, since this feature is mainly used for IO, any small CPU optimization coming from (1) would not matter much.

I believe that (2) must be supported or else it can be a performance dealbreaker in some scenarios. We can independently decide what the default should be. My feeling is that only (2) should be supported.
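Option (2) can be sketched like this (hypothetical names; a single shared enumerator hands items out under a lock, in contrast to the range partitioning of option (1)):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public static class OrderedForEach
{
    // Workers pull items from one shared enumerator, so work is *started* in
    // source order, preserving approximate sequential access for data
    // locality. Completion order is still nondeterministic.
    public static async Task ForEachAsync<T>(IEnumerable<T> source, int dop, Func<T, Task> body)
    {
        using IEnumerator<T> e = source.GetEnumerator();
        var gate = new object();

        bool TryTake(out T item)
        {
            lock (gate) // the enumerator itself is not thread-safe
            {
                if (e.MoveNext()) { item = e.Current; return true; }
                item = default!;
                return false;
            }
        }

        async Task WorkerAsync()
        {
            while (TryTake(out T item))
                await body(item);
        }

        await Task.WhenAll(Enumerable.Range(0, dop).Select(_ => Task.Run(WorkerAsync)));
    }
}
```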

@Misiu

Misiu commented Apr 18, 2019

I just wanted to create an issue for the exact same request.
There should be an async version for both Parallel.For and Parallel.ForEach.

@tomesendam do you think that adding one overload that supports IProgress would be fine?
Currently I use this:

/// <summary>
/// Executes a foreach asynchronously.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="source">The source.</param>
/// <param name="dop">The degrees of parallelism.</param>
/// <param name="body">The body.</param>
/// <param name="progress">Use for progress updates</param>
/// <returns></returns>
public static Task ForEachAsync<T>(this IEnumerable<T> source, int dop, Func<T, Task> body, IProgress<T> progress = null)
{
	return Task.WhenAll(
		from partition in System.Collections.Concurrent.Partitioner.Create(source).GetPartitions(dop)
		select Task.Run(async delegate
		{
			using (partition)
			{
				while (partition.MoveNext()) {
					await body(partition.Current);
					progress?.Report(partition.Current);
				}
			}
		}));
}

This single overload would be useful for progress notification (for example we want to know how many files are downloaded and notify the user on progress)

What do you guys think?

@GSPP

GSPP commented Apr 18, 2019

@Misiu you can send the progress from inside the body, right?

@Misiu

Misiu commented Apr 19, 2019

@GSPP I forgot about that :)
I guess there isn't any difference between passing IProgress to the function call and using it inside the body.
So the extra overload isn't needed; let's keep it simple.

@RoccoDevs
Author

@Misiu Sounds good to me

@GSPP do you have any tips on finalizing the suggestion? Or is the comment I posted earlier alright? I'm rather new to this, that's why I'm asking!

@GSPP

GSPP commented Apr 22, 2019

@tomesendam I am not a team member, but I have contributed multiple times by writing up a comprehensive treatment of an issue that I think the framework should solve. This is how I would approach it: I think you should now write down the full API surface in C#. Also, all relevant design points must be discussed and decided. I contributed some in my posts in this issue. You'd need to read through everything that was said here and state in your proposal how you would decide. I'd create a numbered list of all design aspects: say what the design question is, the possible decisions, their pros and cons, and your tentative decision.

This enables the team to then take your comprehensive summary into an internal meeting and produce feedback. This can lead to a final decision about this API. After that it can be implemented with no surprises left.

I think it's awesome that we are now making progress on this API! This is a very important feature that will come to be used a lot. We need to get the design exactly right. There is no second chance because compatibility forces design mistakes to be carried forever.

@Misiu

Misiu commented May 21, 2019

@tarekgh any chance this might get added with .NET Core 3.0?

@tarekgh
Member

tarekgh commented May 21, 2019

@Misiu we are very late for adding such a feature in 3.0. Meanwhile, you can use the workaround mentioned in the issue description.

@PureKrome
Contributor

Any update on this issue, btw?

@RoccoDevs
Author

@PureKrome honestly I've forgotten about this issue. I'll create the complete list of design decisions I can come up with by the end of today.

This most likely will not be enough so others should feel free to comment more design decisions and I'll add them to the list.

@RoccoDevs
Author

@YohanSciubukgian I don't think this API is the correct place for such a feature. That's better suited for the devs using this API to do on their own.

@Symbai

Symbai commented Jan 7, 2021

Can we revisit this for .NET 6?

Could we get an update on this please?

@terrajobst
Member

@tarekgh @stephentoub how can we move this forward? Can someone summarize the contentious points and proposed options?

@tarekgh tarekgh modified the milestones: Future, 6.0.0 Jan 7, 2021
@tarekgh
Member

tarekgh commented Jan 7, 2021

I think the only open issue here is whether DOP should be forced or can be defaulted. I think we should allow the default, and we can clarify that in the docs with some different use-case examples. But I'll let @tomesendam and @stephentoub continue their discussion to reach a conclusion here. I moved this to the 6.0 release.

@RoccoDevs
Author

I think the only open issue here is whether DOP should be forced or can be defaulted. I think we should allow the default, and we can clarify that in the docs with some different use-case examples.

This sounds good to me. If @stephentoub has other ideas I'll gladly hear them out!

@stephentoub
Member

I think we should allow the default, and we can clarify that in the docs with some different use-case examples.

Yup.

So... what is the proposed set of APIs now?

@tarekgh
Member

tarekgh commented Jan 8, 2021

So... what is the proposed set of APIs now?

If I understand correctly, the proposed APIs at the top still hold; the detailed behavior in the description may need to change. @tomesendam please correct me if I got anything wrong here.

@RoccoDevs
Author

So... what is the proposed set of APIs now?

If I understand correctly, the proposed APIs at the top still hold; the detailed behavior in the description may need to change. @tomesendam please correct me if I got anything wrong here.

Correct, I'll change the description to fit the default DOP.

@tarekgh
Member

tarekgh commented Jan 9, 2021

One last question I have: as we are going to have a default anyway, does it make sense to make the ParallelOptions parameter nullable, to avoid allocations if the caller is happy with whatever default we'll have?

@RoccoDevs
Author

One last question I have: as we are going to have a default anyway, does it make sense to make the ParallelOptions parameter nullable, to avoid allocations if the caller is happy with whatever default we'll have?

Yes, I don't see a problem with that. I'll update it for now. Can always revert if others in this issue, have problems with that.

@stephentoub
Member

as we are going to have a default anyway, does it make sense to make the ParallelOptions parameter nullable, to avoid allocations if the caller is happy with whatever default we'll have?

Personally, rather than making it nullable, I'd prefer an additional overload that just doesn't take one. Since the options object is also how a CancellationToken is specified, that overload's delegate also wouldn't take a CancellationToken, e.g.

public static Task ForEachAsync<T>(IEnumerable<T> source, Func<T, ValueTask> asyncBody)

Then a user isn't forced into unnecessary ceremony to provide things for the default common case.

There are other aspects of Parallel.ForEach that aren't represented in this proposal, e.g. support for ParallelLoopState, support for Partitioner, task state, etc. Is the plan to just leave those out for now and add overloads for them only if needed later?

@tarekgh
Member

tarekgh commented Jan 11, 2021

Personally, rather than making it nullable, I'd prefer an additional overload that just didn't take one.

I think this is a good idea. @tomesendam could you please update the proposal by adding the overloads?

There are other aspects of Parallel.ForEach that aren't represented in this proposal,

I think we can wait to see if there is a demand on that and then add the overloads.

@RoccoDevs
Author

@stephentoub I agree with @tarekgh; I would wait and see if there's demand for those overloads. And I've changed the description to reflect your comment.

@tarekgh tarekgh added api-ready-for-review API is ready for review, it is NOT ready for implementation and removed api-needs-work API needs work before it is approved, it is NOT ready for implementation labels Jan 11, 2021
@tarekgh
Member

tarekgh commented Jan 11, 2021

And now the issue is marked ready for design review :-)

@GSPP

GSPP commented Jan 12, 2021

support for ParallelLoopState, support for Partitioner, task state

I'll post my opinions on these: this API is meant to be used with async IO, and async IO carries far more overhead than typical parallel-loop bookkeeping or allocations do. I do not think CPU efficiency should be a significant concern here.

  1. ParallelLoopState: I see little use for these features except for breaking the loop entirely, which is supported through the normal cancellation mechanisms.
  2. Partitioner: I have described above that single-item partitioning seems to be the only reasonable choice. This is needed to achieve reliable and predictable scheduling of IO.
  3. Task state: I'm not sure what is meant by this. If it is the common pattern of threading a state object through, then I think it is not needed, because of the efficiency argument above.

@terrajobst
Member

  • The bodies should take a cancellation token so that the method can construct a token that stops work when one work item fails.
  • Let's make it easier to pass in a cancellation token without ParallelOptions. Since we don't want the token as the last parameter (for consistency), let's add overloads with and without a cancellation token.
namespace System.Threading.Tasks
{
    public static class Parallel
    {
        public static Task ForEachAsync<T>(IAsyncEnumerable<T> source, Func<T, CancellationToken, ValueTask> body);
        public static Task ForEachAsync<T>(IAsyncEnumerable<T> source, CancellationToken cancellationToken, Func<T, CancellationToken, ValueTask> body);
        public static Task ForEachAsync<T>(IAsyncEnumerable<T> source, ParallelOptions parallelOptions, Func<T, CancellationToken, ValueTask> body);

        public static Task ForEachAsync<T>(IEnumerable<T> source, Func<T, CancellationToken, ValueTask> body);
        public static Task ForEachAsync<T>(IEnumerable<T> source, CancellationToken cancellationToken, Func<T, CancellationToken, ValueTask> body);
        public static Task ForEachAsync<T>(IEnumerable<T> source, ParallelOptions parallelOptions, Func<T, CancellationToken, ValueTask> body);
    }
}
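The approved surface maps onto call sites like the following sketch, using the token-taking overload; the body's work here is purely illustrative:

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ApprovedUsage
{
    public static async Task<int> SumOfSquaresAsync(CancellationToken token)
    {
        var results = new ConcurrentBag<int>();

        // Token-only overload: no ParallelOptions ceremony. The body still
        // receives a token ('ct') so it can observe cancellation, per the
        // review notes above.
        await Parallel.ForEachAsync(Enumerable.Range(1, 10), token, async (i, ct) =>
        {
            await Task.Yield();   // stand-in for real async work
            results.Add(i * i);
        });

        return results.Sum(); // 385
    }

    static async Task Main() =>
        Console.WriteLine(await SumOfSquaresAsync(CancellationToken.None));
}
```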

@stephentoub stephentoub added api-approved API was approved in API review, it can be implemented and removed api-ready-for-review API is ready for review, it is NOT ready for implementation labels Jan 12, 2021
@stephentoub stephentoub self-assigned this Jan 12, 2021
@ghost ghost added the in-pr There is an active PR which will close this issue when it is merged label Jan 13, 2021
@ghost ghost removed the in-pr There is an active PR which will close this issue when it is merged label Jan 14, 2021
@RoccoDevs
Author

@stephentoub great work! Excited to see what will be done with this.

@GSPP

GSPP commented Jan 15, 2021

A small step for a developer, a big step for mankind. Awesome!

@ghost ghost locked as resolved and limited conversation to collaborators Feb 14, 2021