Mpmcqueue #19358
Conversation
Thank you for your pull request. Before we can look at your contribution, we need to ensure all contributors are covered by a Contributor License Agreement. After the following items are addressed, please respond with a new comment here, and the automated system will re-verify.
Regards,
Re-verify CLA
namespace grpc_core {

DebugOnlyTraceFlag thread_pool(false, "thread_pool_trace");
Trace name: "thread_pool"
Also, thread_pool is too generic as the name of a trace flag. Consistent with the existing naming convention, grpc_thread_pool_trace may be more appropriate.
DebugOnlyTraceFlag thread_pool(false, "thread_pool_trace");

inline void* InfLenFIFOQueue::PopFront() {
  // Caller should already checked queue is not empty and has already hold the
checked -> check
hold -> held
MutexLock l(&mu_);

Node* new_node = New<Node>(elem);
if (count_.Load(MemoryOrder::RELAXED) == 0) {
Why not directly call FetchAdd(), which returns the previous value? Even though the Load will be cheap here, it seems redundant to have an extra Load.
// Creates a new MPMC Queue. The queue created will have infinite length.
InfLenFIFOQueue();

// Releases all resources hold by the queue. The queue must be empty, and no
hold -> held
Note in general that you'd likely see better performance with a chunked queue or a flat vector resizing when 100% or 25% full, although both of those add extra complexity that isn't merited in a first implementation of this API.
Node* head_to_remove = queue_head_;
queue_head_ = queue_head_->next;

count_.FetchSub(1, MemoryOrder::RELAXED);
As mentioned offline, assuming count_ is not modified outside of the lock, this wants to be load, subtract 1, store. FetchAdd/FetchSub are more expensive (even relaxed) than load-add-store because they guarantee an atomic read-modify-write.
MutexLock l(&mu_);

Node* new_node = New<Node>(elem);
if (count_.FetchAdd(1, MemoryOrder::RELAXED) == 0) {
Ditto that this should be load+add+store
Adds MPMC queue implementations and unit tests.
Modifies BUILD and build.yaml so the new files are picked up by the Bazel and Make builds.