Using asio::io_context in single threaded applications #15

Closed
ashtum opened this issue Mar 15, 2022 · 10 comments

ashtum commented Mar 15, 2022

Hi,
Thank you for your awesome library.
It is a very convenient way to use asio for writing single-threaded (but concurrent) applications without worrying about the problems of multi-threaded code.
If I'm not mistaken, right now the only way to use agrpc is to instantiate a GrpcContext and run it on its own thread, which means we need to run asio::io_context on a separate thread and deal with concurrency problems between the two.
Are there any plans to make it possible to reuse an asio::io_context for agrpc services?

ashtum changed the title from "Using asio::io_context in single thread application" to "Using asio::io_context in single threaded applications" on Mar 15, 2022

Tradias commented Mar 16, 2022

Hi, I am glad to hear that my library is helpful.

When I started writing the library I tried doing something like:

while(!io_context.stopped() && !grpc_context.stopped()) {
  io_context.run_one();
  grpc_context.run_one();
}

but the performance was very bad, presumably because each run_one call can block waiting on its own context's events and thereby stall the other context. I could try it again and see whether I can speed it up. I will put that on the agenda for v1.5.0.


If you have multiple execution contexts you could try to declare one of them as the "main" context where all your business logic runs. Let's assume you have created a tcp::socket with an io_context:

asio::co_spawn(
  grpc_context,
  [&]() -> asio::awaitable<void> {
    // ... some business logic that will be performed in the thread of the grpc_context

    // Interaction with the io_context is thread-safe as long as you do not use one of the 
    // concurrency hints.
    co_await socket.async_wait(asio::ip::tcp::socket::wait_read, asio::use_awaitable);
    // async_wait will automatically dispatch back to the grpc_context when it completes

    // ... some more business logic that will be performed in the thread of the grpc_context

    // It is also possible to explicitly switch to the grpc_context. By using asio::dispatch 
    // it will be a no-op if we already are on the grpc_context.
    co_await asio::dispatch(asio::bind_executor(grpc_context, asio::use_awaitable));
  },
  asio::detached);

I have recently added an example that uses grpc_context and io_context, maybe that can give you some more ideas: file-transfer-client and file-transfer-server


ashtum commented Mar 16, 2022

Thanks for your response,
What is the reason for not using asio::io_context directly? Is it because it needs to work with libunifex too?


Tradias commented Mar 16, 2022

grpc::CompletionQueue is already an event loop. Some thread must repeatedly invoke its Next function or (for performance reasons) be suspended in a call to it. Yes, it can be invoked with an immediate deadline, in which case it behaves like a poll and might be suitable for integration with an io_context. I could imagine something like the following as well:

asio::defer(io_context, [&] {
  grpc_context.poll();
  asio::defer(io_context, [&] {
    grpc_context.poll();
    // ... and so on, each poll re-scheduling the next one
  });
});

Like I said, I will try it out and see what can be done.


Tradias commented Mar 17, 2022

I have implemented the above mentioned asio::defer style code on a separate branch: https://github.com/Tradias/asio-grpc/tree/grpc_context-poll

Still needs some more thought on the API design. Usage for now is:

#include <agrpc/pollContext.hpp>

// Must stay alive until grpc_context stops
agrpc::PollContext context{io_context.get_executor()};
context.poll(grpc_context);

io_context.run();

Performance on an otherwise idle io_context seems to be almost identical to running GrpcContext on its own thread, which is great news.

| name | req/s | avg. latency | 90% in | 95% in | 99% in | avg. cpu | avg. memory |
| --- | --- | --- | --- | --- | --- | --- | --- |
| cpp_asio_grpc_run | 38882 | 25.58 ms | 27.32 ms | 27.78 ms | 29.00 ms | 101.16% | 20.86 MiB |
| cpp_asio_grpc_poll | 38430 | 25.89 ms | 27.57 ms | 28.09 ms | 29.26 ms | 102.35% | 22.19 MiB |


ashtum commented Mar 17, 2022

Thanks for your work.
I was thinking about what it would take for multiple async libraries to share the same event loop (e.g. somebody else is working on a C++ async MySQL library); it seems we need some sort of standard I/O scheduler in the STL, which is far from happening anytime soon.
But I think it's necessary for writing efficient single-threaded concurrent programs where we can use all sorts of synchronization primitives like async_mutex, when_all, when_any, channels, and condition_variables without any locks or atomic operations.

CaptainTrunky commented

Hi, thanks for sharing your awesome solution with us!

It looks like the snippet below is exactly what I need. I'm working on a project that requires both gRPC and a REST-like API at the same time: imagine that I'm accepting REST for authorization and using the given credentials for executing gRPC calls. Am I correct that I may use this approach for running a coroutine that executes a gRPC call? At the moment I plan to use something like grpc-gateway, but I'm considering other options.

> When I started writing the library I tried doing something like: […]
>
> If you have multiple execution contexts you could try to declare one of them as the "main" context where all your business logic runs. […]


Tradias commented Mar 18, 2022

Correct. I assume in your case you would want the io_context to be the "main" context. In that case all you need to change is:

asio::co_spawn(
  io_context,
  [&]() -> asio::awaitable<void> {
    // Some REST API logic:
    co_await socket.async_wait(asio::ip::tcp::socket::wait_read, asio::use_awaitable);

    // A client streaming RPC, just an example
    grpc::ClientContext client_context;
    example::v1::Response response;
    std::unique_ptr<grpc::ClientAsyncWriter<example::v1::Request>> writer;
    // Must use asio::bind_executor because asio::this_coro::executor does not refer to a GrpcExecutor
    co_await agrpc::request(&example::v1::Example::Stub::AsyncClientStreaming, stub, client_context,
                            writer, response, asio::bind_executor(grpc_context, asio::use_awaitable));
    // Now executing in the thread that called grpc_context.run(). We can still interact with the asio IoObjects,
    // like the socket, from here since they are thread-safe (unless you have set certain concurrency hints).
  },
  asio::detached);

And then run the io_context and grpc_context:

agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};
auto guard = asio::make_work_guard(grpc_context);

asio::io_context io_context{1};

asio::co_spawn(io_context, ...);

std::thread grpc_context_thread{[&] { grpc_context.run(); }};
io_context.run();
guard.reset();
grpc_context_thread.join();

vangork commented Mar 20, 2022

> I have implemented the above mentioned asio::defer style code on a separate branch: https://github.com/Tradias/asio-grpc/tree/grpc_context-poll […]
>
> Performance on an otherwise idle io_context seems to be almost identical to running GrpcContext on its own thread, which is great news.

Will it support an io_context with a thread pool enabled (io_context.run() called from multiple threads)?


Tradias commented Mar 20, 2022

@vangork Yes, it should, although I would expect the performance to be slightly worse than running it on a single thread.


Tradias commented Mar 25, 2022

I pushed a client and server example showing how to run io_context and grpc_context in the same thread with the new PollContext. Here are some performance numbers from my machine:

  • Idle io_context, loaded grpc_context sharing one thread: ~2.5% slower RPC performance compared to grpc_context.run() on its own thread
  • Loaded io_context, idle grpc_context sharing one thread: Seemingly no slowdown of the io_context
  • Loaded io_context, loaded grpc_context sharing one thread: ~2.5% slower RPC performance compared to grpc_context.run() on its own thread and 88% slower io_context performance.

I used grpc_bench to load the grpc_context and repeated asio::post(io_context) calls to load the io_context, so real-world io_context usage may show different performance characteristics.

Another important thing to note is that the PollContext will bring CPU consumption of the shared thread to 100% even while io_context and grpc_context are idle.
