
Add ability to specify scheduler that coroutine should be resumed on when awaiting various synchronisation operations #20

Closed
lewissbaker opened this issue Jul 7, 2017 · 3 comments

task<> f(some_scheduler& scheduler, async_mutex& mutex)
{
  auto lock = co_await mutex.scoped_lock_async();

  // The coroutine is now potentially executing on whatever execution context
  // the prior mutex.unlock() call that released the mutex happened to run on.
  // We don't have any control over this here.

  // We can manually re-schedule ourselves for execution on a particular execution context.
  // This means that the mutex.unlock() call has resumed this coroutine only to immediately
  // suspend it again.
  co_await scheduler.schedule();

  // Also, when the lock goes out of scope here and mutex.unlock() is called,
  // we will implicitly resume the next coroutine that is waiting to acquire
  // the mutex. If that coroutine then unlocks the mutex without suspending,
  // it will recursively resume the next waiting coroutine, and so on,
  // blocking further execution of this coroutine until one of the
  // lock-holding coroutines suspends.
}

Some issues:

  1. It could be more efficient to directly schedule the coroutine for resumption on the scheduler rather than resuming it and suspending it again.
  2. This unconditionally re-schedules the coroutine, which may not be necessary if we were already executing on the right execution context before acquiring the lock and we acquired the lock synchronously.

You could do something like this now to (mostly) solve (2):

task<> f(some_scheduler& scheduler, async_mutex& mutex)
{
  if (!mutex.try_lock())
  {
    // This might still complete synchronously if the lock was released
    // between the calls to try_lock() and lock_async().
    co_await mutex.lock_async();

    // Only reschedule if we (probably) didn't acquire the lock synchronously.
    // NOTE: schedule() needs to be noexcept for this to be exception-safe;
    // if it throws here the mutex would be left locked with no owner.
    co_await scheduler.schedule();
  }

  async_mutex_lock lock(mutex, std::adopt_lock);
}
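
One way to sidestep that noexcept requirement (just a sketch, not part of the proposal) is to adopt the lock before rescheduling, so the mutex is still released during unwinding if schedule() happens to throw:

task<> f(some_scheduler& scheduler, async_mutex& mutex)
{
  const bool lockedSynchronously = mutex.try_lock();
  if (!lockedSynchronously)
  {
    co_await mutex.lock_async();
  }

  // Adopt the lock before any further suspension points so that it is
  // released if scheduler.schedule() throws below.
  async_mutex_lock lock(mutex, std::adopt_lock);

  if (!lockedSynchronously)
  {
    // Only reschedule if we didn't acquire the lock synchronously.
    co_await scheduler.schedule();
  }
}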

For solving (1) I'm thinking of something like this:

task<> f(some_scheduler& scheduler, async_mutex& mutex)
{
  auto lock = co_await mutex.scoped_lock_async().resume_on(scheduler);

  // Or possibly more simply
  auto lock2 = co_await mutex.scoped_lock_async(scheduler);
}

It may be possible to make this a general facility that is applicable to other awaitables:

auto lock = co_await mutex.scoped_lock_async() | resume_on(scheduler);
// or
auto lock = co_await resume_on(scheduler, mutex.scoped_lock_async());
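
A rough sketch of how the pipe syntax might be wired up (the resume_on_transform type is purely hypothetical); it just captures the scheduler and forwards to the two-argument resume_on(scheduler, awaitable) form shown above:

// Hypothetical adaptor type returned by the single-argument resume_on().
template<typename SCHEDULER>
struct resume_on_transform
{
  SCHEDULER& scheduler;
};

template<typename SCHEDULER>
resume_on_transform<SCHEDULER> resume_on(SCHEDULER& scheduler)
{
  return { scheduler };
}

// 'awaitable | resume_on(scheduler)' just forwards to the two-argument
// resume_on(scheduler, awaitable) overload.
template<typename AWAITABLE, typename SCHEDULER>
auto operator|(AWAITABLE&& awaitable, resume_on_transform<SCHEDULER> transform)
{
  return resume_on(transform.scheduler, std::forward<AWAITABLE>(awaitable));
}
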
lewissbaker commented:

Possible implementation of resume_on():

template<typename SCHEDULER, typename T>
lazy_task<T> resume_on(SCHEDULER& scheduler, lazy_task<T> task)
{
  co_await task.when_ready();
  co_await scheduler.schedule();
  co_return co_await std::move(task);
}
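
Usage would then look something like this (count_lines() is a hypothetical function returning lazy_task<int>):

task<> consumer(some_scheduler& scheduler)
{
  // count_lines() may complete on whatever thread performed the I/O,
  // but this co_await resumes the consumer on 'scheduler'.
  const int lineCount = co_await resume_on(scheduler, count_lines("foo.txt"));

  // ... use lineCount, now running on 'scheduler' ...
}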

lewissbaker commented:

Also, a possible implementation of schedule_on() that starts execution of the task on a given scheduler:

template<typename SCHEDULER, typename T>
lazy_task<T> schedule_on(SCHEDULER& scheduler, lazy_task<T> task)
{
  co_await scheduler.schedule();
  co_return co_await std::move(task);
}
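
The difference between the two is worth spelling out (do_work() below is a hypothetical lazy_task-returning function):

task<> consumer(some_scheduler& scheduler)
{
  // schedule_on(): do_work() starts executing on 'scheduler', but this
  // coroutine resumes on whatever context do_work() completes on.
  co_await schedule_on(scheduler, do_work());

  // resume_on(): do_work() starts wherever it otherwise would, but this
  // coroutine is resumed on 'scheduler' once it completes.
  co_await resume_on(scheduler, do_work());
}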

lewissbaker self-assigned this Aug 17, 2017

lewissbaker commented Aug 20, 2018

The schedule_on() and resume_on() operators now support arbitrary awaitables.
This means you can now do the following to ensure you resume on the specified scheduler:

co_await (mutex.scoped_lock_async() | resume_on(scheduler));
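
So the original example can be written as (same signature as the sketch at the top of this issue):

task<> f(some_scheduler& scheduler, async_mutex& mutex)
{
  auto lock = co_await (mutex.scoped_lock_async() | resume_on(scheduler));

  // Regardless of which execution context released the mutex, this
  // coroutine always continues here on 'scheduler'.
}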
