
Fluid channels should match the semantics of Go Channels #9265

Merged — 9 commits into PaddlePaddle:develop, Mar 27, 2018

Conversation

abhinavarora
Contributor

Fixes #8813

@abhinavarora abhinavarora self-assigned this Mar 20, 2018
@abhinavarora abhinavarora added this to To Do in Concurrent Programming in Fluid via automation Mar 21, 2018
// We cannot do the data transfer because
// this QueueMessage was added by Select
// and some other case was executed.
// So call the Send function again.
// We do not care about notifying other
// because they would have been notified
// by the executed select case.
-  return send_return(Send(item));
+  Send(item);
Contributor


Do you need to call "lock.unlock();" here?

Contributor Author


I think you are right; it makes sense to release the lock there. With the new semantics, if the nested method call throws an exception, the outer lock will be held forever.

-  // TODO(abhinavarora) Should panic on closed channel
-  return send_return(!m->chan_closed);
+  if (m->chan_closed) {
+    send_return();
Contributor


Should unlock before throwing exception

+    send_return();
+    PADDLE_THROW("Cannot send on closed channel");
+  }
  send_return();
Contributor


Do you need to unlock here?

Contributor Author


Good catch, the lock needs to be unlocked here. Thank you for pointing this out.

@@ -118,15 +117,15 @@ bool ChannelImpl<T>::CanReceive() {
}

template <typename T>
-bool ChannelImpl<T>::Send(T *item) {
+void ChannelImpl<T>::Send(T *item) {
send_ctr++;
std::unique_lock<std::recursive_mutex> lock{mu_};
Contributor


Do you need to explicitly lock after the constructor, i.e. call lock.lock()?

Contributor Author


No, we don't need to do that; the std::unique_lock constructor acquires the mutex automatically.

-  bool Send(T* data) {
-    if (!IsInitialized()) return false;
+  void Send(T* data) {
+    PADDLE_ENFORCE_EQ(IsInitialized(), true);
Contributor


It might be better to add an error message to PADDLE_ENFORCE_EQ, e.g.
PADDLE_ENFORCE_EQ(IsInitialized(), true, "The channel hasn't been initialized.");

Contributor Author


Done

Contributor

@chengduoZH left a comment


LGTM!

@abhinavarora abhinavarora merged commit 65534c4 into PaddlePaddle:develop Mar 27, 2018
Concurrent Programming in Fluid automation moved this from To Do to Done Mar 27, 2018
@abhinavarora abhinavarora deleted the issue_8813 branch March 27, 2018 02:12