
io_context needs to be reset in a connection pool use model #265

Closed
mvphilip opened this issue Nov 17, 2020 · 4 comments

@mvphilip

Hello, I am using a connection pool very similar to what Michael showed as an example in #195:

#195 (comment)

My use is also sporadic, and I also don't really care about the callback. I'm only writing data, and I don't particularly care if the write fails. Given this use scenario, I'm finding that I need to call ioc.restart()/ioc.run() to actually move jobs along after an ozo::execute. I can give some pseudo code as an example, but I'm wondering if you are aware of this situation?

I am not using a work_guard currently. I had thought the connection_pool would do the equivalent for me without having to burn CPU on an empty loop like a work_guard.

If you have any suggestions, would you please let me know?

thank you,

-mp

@thed636
Collaborator

thed636 commented Nov 18, 2020

Hi!

My use is also sporadic, and I also don't really care about the callback.

You may use an empty body lambda as a continuation of the operation in this case.
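
For instance, something like this (a minimal sketch: the connection string and table name are placeholders, and the exact include paths and ozo::execute overloads should be checked against the documentation):

```cpp
#include <boost/asio/io_context.hpp>
#include <ozo/connection_info.h>
#include <ozo/connection_pool.h>
#include <ozo/execute.h>
#include <ozo/query_builder.h>

int main() {
    boost::asio::io_context ioc;

    ozo::connection_info conn_info("host=... dbname=...");   // placeholder settings
    ozo::connection_pool_config config;
    auto pool = ozo::make_connection_pool(conn_info, config);

    using namespace ozo::literals;
    ozo::execute(pool[ioc], "UPDATE jobs SET done = true"_SQL,
        [](ozo::error_code, auto) {
            // empty-body continuation: the result and any error are ignored
        });

    ioc.run();
}
```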

Given this use scenario, I'm finding that I need to call ioc.restart()/ioc.run() to actually move jobs along after an ozo::execute.

Well, that is how Boost.Asio works. A user runs an event loop to execute asynchronous operations and handle events, so without any operations in its queue the io_context stops. An asynchronous program usually runs the event loop once and works in terms of asynchronous operations and continuations. Sometimes a synchronous program needs to use an asynchronous API. In that case there are two choices. The first is to run the event loop in a separate thread and use synchronisation primitives like future. The second is to run the event loop in the same thread right after the asynchronous operation has been started. In the second case, restart is needed due to the io_context behaviour. For fire-and-forget from a synchronous application, the first choice, with a dedicated thread for io_context::run(), is the only option.
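
The second choice looks roughly like this with plain Boost.Asio (nothing ozo-specific): once run() returns because the queue is empty, the io_context is stopped and restart() is required before the next run().

```cpp
#include <boost/asio/io_context.hpp>
#include <boost/asio/post.hpp>
#include <iostream>

int main() {
    boost::asio::io_context ioc;

    boost::asio::post(ioc, [] { std::cout << "first batch\n"; });
    ioc.run();       // returns once the queue is empty; ioc is now stopped

    boost::asio::post(ioc, [] { std::cout << "second batch\n"; });
    ioc.restart();   // without this, run() would return immediately
    ioc.run();
}
```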

I am not using a work_guard currently. I had thought the connection_pool would do the equivalent for me without having to burn CPU on an empty loop like a work_guard.

Do you have any evidence of burning CPU on an empty cycle without any event or operation posted into the event loop?

If you have any suggestions would you please let me know?

I'd recommend learning a little bit more about Boost.Asio and its concepts, since it looks like there is some misunderstanding of the library API and how it works. So, first of all, just try to check the hypothesis about CPU burning.

Hope that helps.

@mvphilip
Author

Hello, thank you for the response. I am aware of the coroutine use model. My confusion was about the connection pool and ownership: I had thought the connection pool would keep the context alive. And regarding this:

Do you have any evidence of burning CPU on an empty cycle without any event or operation posted into the event loop?

Yes and no. I am not using threading in my code, so to keep an io_context non-blocking under a work_guard I would need to add a thread. It's not burning CPU, but it is using a thread. As I mentioned, I wasn't sure whether the resource pool was already doing something to keep the io_context alive.
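
For concreteness, this is the arrangement I mean (a minimal plain-Boost.Asio sketch, nothing ozo-specific); the guard keeps run() from returning, and the thread blocks rather than spins:

```cpp
#include <boost/asio/executor_work_guard.hpp>
#include <boost/asio/io_context.hpp>
#include <thread>

int main() {
    boost::asio::io_context ioc;
    auto guard = boost::asio::make_work_guard(ioc);   // keeps run() from returning
    std::thread runner{[&ioc] { ioc.run(); }};        // blocks on the event loop, no busy-wait

    // ... initiate asynchronous operations (e.g. ozo::execute) against ioc here ...

    guard.reset();   // let run() return once the queue drains
    runner.join();
}
```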

But you've answered my question. We need to fully manage the life cycle of the io_context outside of ozo.

Thank you again

@thed636
Collaborator

thed636 commented Nov 20, 2020

Hi!

I had thought the connection pool would keep the context alive.

The connection pool is designed as a context-free entity because we had a negative experience with a context-bound connection pool, especially in multithreaded asynchronous applications with a one-io_context-per-thread configuration. So the present design is the most universal and gives the user freedom of choice.

It's not burning CPU, but it is using a thread.

For an asynchronous application, this is the only way to execute several control flows simultaneously. So if you care about resources, I'd suggest using the asynchronous model for the application.

I am aware of the coroutine use model.

You may use the most generic callback-continuation model. For sophisticated operations, I'd suggest using some syntactic sugar such as boost::asio::coroutine, which models a stackless coroutine and allows a linear representation of an asynchronous operation without any additional context-switch mechanism such as boost::coroutine.
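
A self-contained illustration of that style, using a timer instead of ozo to keep it short (the same reenter/yield structure applies to chaining database operations; treat it as a sketch):

```cpp
#include <boost/asio/coroutine.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/steady_timer.hpp>
#include <chrono>
#include <iostream>
#include <memory>
#include <boost/asio/yield.hpp>   // lowercase reenter/yield pseudo-keywords

struct ticker : boost::asio::coroutine {
    std::shared_ptr<boost::asio::steady_timer> timer;

    void operator()(boost::system::error_code ec = {}) {
        if (ec) return;                       // stop the chain on error
        reenter (*this) {
            std::cout << "step 1\n";
            timer->expires_after(std::chrono::milliseconds(10));
            yield timer->async_wait(*this);   // suspend; the completion re-enters here
            std::cout << "step 2\n";
            timer->expires_after(std::chrono::milliseconds(10));
            yield timer->async_wait(*this);
            std::cout << "done\n";
        }
    }
};

int main() {
    boost::asio::io_context ioc;
    ticker t{{}, std::make_shared<boost::asio::steady_timer>(ioc)};
    t();        // first call enters the coroutine; completions re-enter it
    ioc.run();
}
```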

Best regards.

@mvphilip
Author

Thank you again, Sergei.

-mp
