
Mpmcqueue #19358

Merged
merged 56 commits into grpc:master on Jul 3, 2019

Conversation

yunjiaw26
Contributor

Add MPMCQ implementations and unit tests.
Modified BUILD and build.yaml for build and make.

@yunjiaw26 yunjiaw26 requested a review from guantaol June 14, 2019 23:13
@thelinuxfoundation

Thank you for your pull request. Before we can look at your contribution, we need to ensure all contributors are covered by a Contributor License Agreement.

After the following items are addressed, please respond with a new comment here, and the automated system will re-verify.

Regards,
CLA GitHub bot

@yunjiaw26
Contributor Author

Re-verify CLA

@guantaol guantaol added the labels lang/core and "release notes: yes" (indicates if PR needs to be in release notes) on Jun 17, 2019
Resolved review threads (outdated): BUILD ×2, CMakeLists.txt, Makefile ×2, tools/run_tests/generated/tests.json, test/core/iomgr/mpmcqueue_test.cc ×4

namespace grpc_core {

DebugOnlyTraceFlag thread_pool(false, "thread_pool_trace");
Contributor

Trace name should be "thread_pool" rather than "thread_pool_trace".

Also, thread_pool is too generic as the name of a trace flag variable. Consistent with the existing naming convention, grpc_thread_pool_trace may be more appropriate.
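
A minimal sketch of what that suggestion would look like, assuming the reviewer means both renaming the variable and dropping the "_trace" suffix from the trace string; neither name is taken from the merged code:

namespace grpc_core {

// Variable follows the grpc_*_trace convention; the registered trace name
// itself is just "thread_pool" (both names are the reviewer's proposal).
DebugOnlyTraceFlag grpc_thread_pool_trace(false, "thread_pool");

}  // namespace grpc_core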


inline void* InfLenFIFOQueue::PopFront() {
// Caller should already checked queue is not empty and has already hold the
Contributor

checked -> check
hold -> held

Resolved review thread (outdated): src/core/lib/iomgr/executor/mpmcqueue.cc
MutexLock l(&mu_);

Node* new_node = New<Node>(elem);
if (count_.Load(MemoryOrder::RELAXED) == 0) {
Contributor

Why not directly call FetchAdd(), which will return the previous value? Even though the Load will be cheap here, it seems redundant to have an extra Load.
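
A minimal before/after sketch of that suggestion, reusing the names from the snippet above; the placement of the separate increment in the "before" version is an assumption, not the PR's exact code:

// Before (as reviewed, assumed shape): a Load to test for emptiness plus a
// separate increment elsewhere in Put().
if (count_.Load(MemoryOrder::RELAXED) == 0) {
  // ... signal a consumer blocked on an empty queue ...
}
count_.FetchAdd(1, MemoryOrder::RELAXED);

// After: FetchAdd returns the pre-increment value, so a single atomic op
// both bumps the count and tells us whether the queue was empty.
if (count_.FetchAdd(1, MemoryOrder::RELAXED) == 0) {
  // ... signal a consumer blocked on an empty queue ...
}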

// Creates a new MPMC Queue. The queue created will have infinite length.
InfLenFIFOQueue();

// Releases all resources hold by the queue. The queue must be empty, and no
Contributor

hold -> held

Resolved review threads: test/core/iomgr/mpmcqueue_test.cc ×5 (four outdated)
@yunjiaw26 yunjiaw26 requested a review from dklempner June 28, 2019 01:17
Contributor

@dklempner dklempner left a comment

Note in general that you'd likely see better performance with a chunked queue, or a flat vector that resizes when it becomes 100% full (grow) or drops to 25% full (shrink), although both of those add extra complexity that isn't merited in a first implementation of this API.
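
A rough sketch of the flat-vector variant under those assumptions (grow at 100% occupancy, shrink at 25%); the class and member names are illustrative, not part of this PR, and locking is left to the caller just as mu_ guards the linked-list version:

#include <cstddef>
#include <vector>

// Illustrative only: a FIFO backed by a flat std::vector used as a ring
// buffer. It doubles its capacity when it becomes full and halves it once
// occupancy falls to a quarter.
class FlatFIFOQueue {
 public:
  explicit FlatFIFOQueue(size_t initial_capacity = 16)
      : buf_(initial_capacity) {}

  void Put(void* elem) {
    if (count_ == buf_.size()) Resize(buf_.size() * 2);  // 100% full: grow.
    buf_[(head_ + count_) % buf_.size()] = elem;
    ++count_;
  }

  // Caller is expected to check count() first, mirroring PopFront() above.
  void* Get() {
    void* elem = buf_[head_];
    head_ = (head_ + 1) % buf_.size();
    --count_;
    // Down to 25% occupancy (and not already small): shrink to free memory.
    if (buf_.size() > 16 && count_ <= buf_.size() / 4) Resize(buf_.size() / 2);
    return elem;
  }

  size_t count() const { return count_; }

 private:
  // Copies the live elements into a freshly sized buffer starting at index 0.
  void Resize(size_t new_capacity) {
    std::vector<void*> next(new_capacity);
    for (size_t i = 0; i < count_; ++i) {
      next[i] = buf_[(head_ + i) % buf_.size()];
    }
    buf_.swap(next);
    head_ = 0;
  }

  std::vector<void*> buf_;
  size_t head_ = 0;
  size_t count_ = 0;
};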

Node* head_to_remove = queue_head_;
queue_head_ = queue_head_->next;

count_.FetchSub(1, MemoryOrder::RELAXED);
Contributor

As mentioned offline, assuming count_ is not modified outside of the lock, this wants to be load, subtract 1, store. FetchAdd/FetchSub are more expensive (even relaxed) than load-add-store because they guarantee atomicity of the read-modify-write.
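
A minimal sketch of that pattern on the pop side, assuming count_ is never touched without mu_ held and that the Atomic wrapper used above exposes plain Load and Store:

// Inside PopFront(), with mu_ already held by the caller.
Node* head_to_remove = queue_head_;
queue_head_ = queue_head_->next;

// Plain load, subtract 1, store: the mutex already serializes every access
// to count_, so the atomic read-modify-write of FetchSub buys nothing here.
count_.Store(count_.Load(MemoryOrder::RELAXED) - 1, MemoryOrder::RELAXED);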

MutexLock l(&mu_);

Node* new_node = New<Node>(elem);
if (count_.FetchAdd(1, MemoryOrder::RELAXED) == 0) {
Contributor

Ditto that this should be load+add+store
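
And the corresponding sketch for the push side, under the same assumption that mu_ guards count_; the pre-increment value still gates the "queue was empty" branch:

// Inside Put(), with mu_ held via MutexLock l(&mu_).
Node* new_node = New<Node>(elem);
auto curr_count = count_.Load(MemoryOrder::RELAXED);
count_.Store(curr_count + 1, MemoryOrder::RELAXED);
if (curr_count == 0) {
  // ... wake a consumer that was waiting on an empty queue ...
}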

@yunjiaw26
Contributor Author

#19422

@yunjiaw26 yunjiaw26 merged commit aec511e into grpc:master Jul 3, 2019
@lock lock bot locked as resolved and limited conversation to collaborators Oct 1, 2019