
rgw: add async Throttle for beast frontend #20990

Closed (wanted to merge 6 commits)

Conversation


@cbodley (Contributor) commented Mar 21, 2018

Adds a ceph::async::Completion template for storing boost::asio completion handlers, and uses it to implement ceph::async::Throttle.

The beast frontend uses this throttle to limit the number of concurrent calls to process_request(). It also uses the throttle to implement pause: throttle.async_set_maximum(0) waits for all outstanding requests to finish.

Adds a new config variable, rgw_max_concurrent_requests, with a config observer to apply changes at runtime. Perf counters for this throttle are exposed as "throttle-frontend-requests".

@cbodley force-pushed the wip-rgw-beast-pause branch 2 times, most recently from e1e48cb to 85473d0 on March 21, 2018 at 16:48.
@@ -128,6 +128,8 @@ set(rgw_a_srcs
rgw_crypt_sanitize.cc
rgw_iam_policy.cc)

list(APPEND rgw_a_srcs async/throttle.cc)

cbodley (author) commented:

Still not sure where in the tree this stuff should live. None of it is specific to rgw, so maybe it belongs under common instead? I'd eventually like to use async::Completion for the librados bindings in librados/librados_asio.h.

A project member replied:

The throttler code as such doesn't seem to have anything rgw-specific, so it should be fine to move it under common::async as well.

Commits (each Signed-off-by: Casey Bodley <cbodley@redhat.com>):

  • a generic Completion for storing asio completion handlers for deferred execution
  • adds a config variable rgw_max_concurrent_requests, and uses rgw::async::Throttle to enforce it in the beast frontend
  • Throttle::async_set_maximum() gives us a good way to wait for all outstanding requests to complete before pausing, without having to stop the whole io_context. This will make it easier to share the io_context and its worker threads outside of the frontend.
@cbodley (author) commented Mar 22, 2018

Moved the Completion and Throttle classes from rgw/async/ to common/async/. The throttle builds as an object library called common-async-objs.
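A sketch of how such an object library might be declared, following the CMake style of the diff above. The target name common-async-objs comes from the comment; the source path and the rgw_a consumer target are assumptions based on this PR's diff, not the merged code.

```cmake
# OBJECT libraries compile sources once for reuse by multiple targets
# without producing a standalone archive
add_library(common-async-objs OBJECT common/async/throttle.cc)

# consumers pull the compiled objects in directly (rgw_a assumed here)
target_sources(rgw_a PRIVATE $<TARGET_OBJECTS:common-async-objs>)
```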

GetRequest(size_t count, Clock::time_point started) noexcept
: count(count), started(started) {}
};
using GetCompletion = Completion<void(boost::system::error_code),
A contributor asked:

Out of curiosity, any reason to use boost::system::error_code and not std::error_code?

Another contributor replied:

Boost.Asio and Beast force that on us, unfortunately.

cbodley (author) replied:

One of the main reasons is that boost::system::error_code is part of the interface for stackful coroutines, which is how rgw uses this throttle: request_throttle.async_get(1, yield[ec]);

You can read more about error handling with stackful coroutines here: http://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/overview/core/spawn.html

At a higher level, if you're writing an 'asynchronous initiating function' like async_get() and want to support any kind of CompletionToken that follows the extensible asynchronous model, your Signature has to take boost::system::error_code as its first parameter. Asio and Beast both do this throughout.

I'd love to find a way out and switch to std::error_code, but for now I'm resigned to waiting for the Networking TS to save us.

The contributor replied:

That all sounds perfectly sensible! Thanks.

* should be moved/copied out of the Completion first.
*/
template <typename Signature, typename T = void>
class Completion;
cbodley (author) commented:

@adamemerson this is an evolution of your ceph::invoker template class from #16715, with some extra features:

  • ability to include user data in the memory allocated by the handler allocator. Async operations often require some extra state, so it's nice to combine that allocation with the handler's (and take advantage of handler allocators that recycle memory)
  • ability to select between defer(), dispatch() and post() when invoking, instead of going through a single operator(). As seen in BaseThrottle::put(), there are cases where we have to post() instead of dispatch(), depending on which code path we're in
  • ability to release and reclaim ownership, as Throttle does when it stores Completions in an intrusive list
  • use of std::unique_ptr without a custom deleter helps with dmclock integration: dmclock's priority queue takes either a std::unique_ptr<R>&& or an R&& (which it then wraps in a unique_ptr), so being able to pass Completion this way saves dmclock that extra allocation

I'd love feedback on the design choices I made to implement these.

@cbodley (author) commented Apr 5, 2018

Closing for now. This throttle will be moved into a dmclock scheduler in a separate PR. For pause, I'm using an async shared mutex instead, but this branch is now based on #20464, so I'll reopen once that merges.

@cbodley cbodley closed this Apr 5, 2018
@cbodley (author) commented Apr 6, 2018

Reopened as #21271.
