Rendezvous after receiving "exit" notification #159

Merged
merged 1 commit into from Dec 14, 2018

Conversation

MaskRay (Owner) commented Dec 13, 2018

hotpxl left a comment

Much respect, master!

@@ -91,6 +92,10 @@ struct Index_Request {
int64_t ts = tick++;
};

std::mutex thread_mtx;
std::condition_variable no_pending_threads;
int pending_threads;

hotpxl Dec 13, 2018

Make this atomic as well?

MaskRay (Owner) Dec 13, 2018

pending_threads is protected by thread_mtx, so it doesn't need to be atomic.

manager.Quit();

{ std::lock_guard lock(index_request->mutex_); }
indexer_waiter->cv.notify_all();

hotpxl Dec 13, 2018

It's better to protect the notify with the mutex; if the waiting thread is woken up for any other reason at the wrong moment, the signal could be lost.

MaskRay (Owner) Dec 13, 2018

The only execution sequence that can lead to a lost signal is:

waiter lock
waiter test
signaler change pipeline::quit to true
signaler notify
waiter unlock

By having a lock_guard between pipeline::quit=true; and indexer_waiter->cv.notify_all();, we guarantee that this scenario will not happen.

lhmouse Dec 14, 2018

The code is correct. It is a bit obscure, however.

@@ -432,6 +436,8 @@ void *CompletionMain(void *manager_) {
set_thread_name("comp");
while (true) {
std::unique_ptr<SemaManager::CompTask> task = manager->comp_tasks.Dequeue();
if (pipeline::quit)

hotpxl Dec 13, 2018

A full memory barrier is probably not necessary here?

MaskRay (Owner) Dec 13, 2018

ThreadedQueue::{Dequeue,PushBack} use a mutex internally, so accesses to pipeline::quit are guaranteed to be synchronized.

lhmouse Dec 14, 2018

Yes, a seq_cst barrier is overkill. Consider the following:

if (pipeline::quit.load(std::memory_order_relaxed))
  ...

(or memory_order_acquire if it weren't protected by a mutex).

MaskRay (Owner) Dec 14, 2018

It seems everything is protected by a mutex, so I can change every load of quit to use std::memory_order_relaxed.
