
New additional asynchronous mechanism #203

Closed

Conversation

@i94matjo (Contributor)
OK, I'll try to explain why this pull request is needed. It all boils down to the use of futures. We have an embedded system with limited resources that makes a lot of asynchronous calls (mostly provides, but this essentially accounts for everything). Using futures is not ideal here. We end up in two different scenarios:

  1. Using future continuations causes a thread creation explosion, with performance issues as a consequence.
  2. Not using future continuations makes the code extremely complex and/or non-robust.

This suggestion adds another kind of asynchronous mechanism, based simply on callback handlers, letting users decide for themselves how to handle the situation. This allows Autobahn to integrate almost seamlessly with the 3rd party library rxcpp.

This suggestion is only one way of dealing with the problem, and I came up with it rather quickly. There might be better solutions to the problem at hand.

Note that the future interface is intact, so this change is not breaking.

```cpp
on_exception_handler&& on_exception)
{
    m_connect = wamp_async<void>(std::move(on_success), std::move(on_exception));
    std::cout << m_connect.is_promise() << std::endl;
```
@i94matjo (Contributor, Author)
Remove! Debug printout

@oberstet (Contributor)
thanks for the PR! unfortunately: no, we don't want to expose/support a callback(-only) based API, and

> future continuations causes thread creation explosion

why? AutobahnC++ is mainly designed to allow single-threaded apps .. it all depends on the specific Boost/Asio reactor used, I guess ..

so I'd go for option 3: use futures + a single-threaded app

@i94matjo (Contributor, Author)

Because that's the way Boost continuations work; see for example boost::detail::future_async_continuation_shared_state::launch_continuation. So with a large number of futures this does not fly, and we must not use continuations. That leaves us with a couple of other options, though: we could use Boost's when_all to bundle up futures and then continue, not use continuations at all, or just skip the notification.

If we don't want continuations and can only wait for one future at a time per thread, we'll need to queue the futures and wait for them one by one in order, possibly with a timeout. Then it's not truly asynchronous, though, which is kind of sad since we already have the truly asynchronous boost::asio::io_service as the engine. But we'll make it work, somehow. It's only during initialization that this happens, so we may not need to be fully asynchronous here. Thanks!

@oberstet (Contributor)
Not sure we're on the same page? To my knowledge, the thread on which a future is scheduled is determined by the launch policy / executor, e.g.:

should run on the same thread as the future-submitting code.

this, combined with boost::wait_for_all, should then allow processing many futures on one thread.

decoupling the number of (not yet complete) futures from the number of threads (discounting the desire to parallelize workload) is one advantage of async programming IMO ...

@oberstet (Contributor)
probably, it would be good to have an example #204 ..

would that work for you? i.e. demonstrate that useful work, like multiple outstanding futures or waiting for a set of futures to complete, can be done single-threaded?

@i94matjo (Contributor, Author)
Perfect! Thanks a lot! We will use ideas from that one. Thanks for the quick reply!

@oberstet (Contributor)
great! please post here or on #204 about your findings / results if possible - this ("single threaded async mode") is definitely something we want to support, and actually encourage, for AutobahnC++
