
Support both use_calc_stream and sync_op in send recv APIs #46023

Merged
5 commits merged into PaddlePaddle:develop on Sep 15, 2022

Conversation

HermitSun (Contributor)

PR types

New features

PR changes

APIs

Describe

In the new communication library, we designed ProcessGroup to manage different communication groups. Each process group owns its own stream, and all communications in that group are performed on it. For high-level APIs such as distributed.all_reduce, we use use_calc_stream to indicate whether the operation is synchronous. However, frequently adding unnecessary CUDA events can degrade performance on some models. To achieve high performance, this PR adds a new API, distributed.stream.all_reduce, which provides both use_calc_stream and sync_op.

  • sync_op: indicates whether the communication is synchronous.
  • use_calc_stream: performs the communication on the calculation stream, saving the cost of switching streams. Only takes effect when sync_op is true.
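
The semantics of the two flags can be sketched as a small decision function. This is a hypothetical Python illustration of the dispatch logic described above; the name dispatch_collective and the return strings are illustrative, not Paddle internals:

```python
def dispatch_collective(sync_op: bool, use_calc_stream: bool) -> str:
    """Illustrates how the two flags select an execution mode.

    - sync_op=False: run asynchronously on the communication stream and
      return a task the caller must wait on later.
    - sync_op=True, use_calc_stream=False: run on the communication
      stream, then record a CUDA event so the calculation stream waits
      (this stream switch is the overhead the PR wants to avoid).
    - sync_op=True, use_calc_stream=True: run directly on the
      calculation stream; no event and no stream switch are needed.
    """
    if not sync_op:
        if use_calc_stream:
            raise ValueError("use_calc_stream only works when sync_op is true")
        return "async on comm stream, returns a task"
    if use_calc_stream:
        return "sync on calc stream, no event or stream switch"
    return "sync on comm stream, event makes calc stream wait"

print(dispatch_collective(sync_op=True, use_calc_stream=True))
```

With the real API this corresponds to calls such as paddle.distributed.stream.all_reduce(tensor, sync_op=True, use_calc_stream=True); the exact signature may differ between Paddle versions.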

Comment on lines +248 to +250
int numel = (*dense).numel();
int send_numel = numel / nranks;
int offset = send_numel * rank_id;
Contributor

These should be int64; for a larger tensor this will overflow. We'll fix this uniformly in a later change.
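
The overflow the reviewer points out can be demonstrated with a short Python simulation of 32-bit C `int` arithmetic. partition_offsets is a hypothetical helper mirroring the C++ snippet above, not Paddle code:

```python
import ctypes

def partition_offsets(numel: int, nranks: int, rank_id: int):
    """Mirror of the C++ snippet above, computed in exact Python integers."""
    send_numel = numel // nranks
    offset = send_numel * rank_id
    return send_numel, offset

# A tensor with 3 billion elements split across 2 ranks.
numel = 3_000_000_000
send_numel, offset = partition_offsets(numel, nranks=2, rank_id=1)

# Storing numel in a 32-bit C `int` already wraps around, so every
# quantity derived from it would be wrong before the division happens.
print(ctypes.c_int32(numel).value)   # -1294967296 (wrapped, negative)
print(ctypes.c_int64(numel).value)   # 3000000000 (int64 holds it fine)
print(send_numel, offset)            # 1500000000 1500000000
```

This is why the reviewer suggests switching the snippet's `int` variables to `int64`.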


@gongweibao gongweibao left a comment


LGTM


@XieYunshen XieYunshen left a comment


LGTM (unit test LABEL settings)


@XiaoguangHu01 XiaoguangHu01 left a comment


LGTM

@FeixLiu FeixLiu merged commit ae00f42 into PaddlePaddle:develop Sep 15, 2022
@HermitSun HermitSun deleted the collective-stream-sendrecv branch September 16, 2022 00:28
JiabinYang pushed a commit that referenced this pull request Sep 26, 2022
* Support both use_calc_stream and sync_op in send recv APIs (#46023)

* add batch_norm prim2orig rule

Co-authored-by: Wen Sun <35923278+HermitSun@users.noreply.github.com>
HermitSun added a commit to HermitSun/Paddle that referenced this pull request Oct 12, 2022
XiaoguangHu01 pushed a commit that referenced this pull request Oct 17, 2022
* Support both use_calc_stream and sync_op in send recv APIs (#46023)

* Support both use_calc_stream and sync_op in allgather API (#46295)

* Support both use_calc_stream and sync_op in collective communication API (#46761)

* Move group and all reduce from collective to communication (#45848)

* Completes bfloat16 dtype for collective api in eager mode (#45844)

* Fix collective APIs cannot be recognized when building docs (#46962)

Co-authored-by: LiYuRio <63526175+LiYuRio@users.noreply.github.com>
7 participants