block: Allow AIO_WAIT_WHILE with NULL ctx
bdrv_drain_all() wants to have a single polling loop for draining the
in-flight requests of all nodes. This means that the AIO_WAIT_WHILE()
condition relies on activity in multiple AioContexts, which is polled
from the mainloop context. We must therefore call AIO_WAIT_WHILE() from
the mainloop thread and use the AioWait notification mechanism.

Just randomly picking the AioContext of any non-mainloop thread would
work, but instead of bothering to find such a context in the caller, we
can just as well accept NULL for ctx.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
kevmw committed Jun 18, 2018
1 parent 57320ca commit 4d22bbf
1 changed file: include/block/aio-wait.h (9 additions, 4 deletions)
@@ -57,7 +57,8 @@ typedef struct {
 /**
  * AIO_WAIT_WHILE:
  * @wait: the aio wait object
- * @ctx: the aio context
+ * @ctx: the aio context, or NULL if multiple aio contexts (for which the
+ *       caller does not hold a lock) are involved in the polling condition.
  * @cond: wait while this conditional expression is true
  *
  * Wait while a condition is true. Use this to implement synchronous
@@ -75,7 +76,7 @@ typedef struct {
     bool waited_ = false;                                          \
     AioWait *wait_ = (wait);                                       \
     AioContext *ctx_ = (ctx);                                      \
-    if (in_aio_context_home_thread(ctx_)) {                        \
+    if (ctx_ && in_aio_context_home_thread(ctx_)) {                \
         while ((cond)) {                                           \
             aio_poll(ctx_, true);                                  \
             waited_ = true;                                        \
@@ -86,9 +87,13 @@ typedef struct {
         /* Increment wait_->num_waiters before evaluating cond. */ \
         atomic_inc(&wait_->num_waiters);                           \
         while ((cond)) {                                           \
-            aio_context_release(ctx_);                             \
+            if (ctx_) {                                            \
+                aio_context_release(ctx_);                         \
+            }                                                      \
             aio_poll(qemu_get_aio_context(), true);                \
-            aio_context_acquire(ctx_);                             \
+            if (ctx_) {                                            \
+                aio_context_acquire(ctx_);                         \
+            }                                                      \
             waited_ = true;                                        \
         }                                                          \
         atomic_dec(&wait_->num_waiters);                           \
