pipe engine eval_batch: add option to disable loss broadcast #4326

Merged
merged 2 commits into microsoft:master on Oct 6, 2023

Conversation

nelyahu
Contributor

@nelyahu commented Sep 13, 2023

It is sometimes unnecessary to broadcast the loss to all ranks after an evaluation cycle, since only some ranks need it, and the broadcast adds communication overhead between ranks.
By setting bcast_loss=False (the default is True, which retains the previous behavior), the loss will not be broadcast.
If the monitor is enabled, the loss will still be broadcast.
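
A minimal usage sketch (not part of the PR itself): it assumes a DeepSpeed pipeline engine named `engine` and an evaluation data iterator `eval_iter` already exist; only the `bcast_loss` flag comes from this PR.

```python
# Sketch only: `engine` is assumed to be a DeepSpeed pipeline engine built
# elsewhere, and `eval_iter` an evaluation data iterator.
# With bcast_loss=False the evaluation loss is not broadcast to all ranks,
# skipping the extra collective communication.
loss = engine.eval_batch(eval_iter, bcast_loss=False)

# Default (bcast_loss=True) keeps the previous behavior: every rank
# receives the broadcast loss after the evaluation cycle.
loss = engine.eval_batch(eval_iter)
```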

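For context, here is a rough sketch of what a conditional loss broadcast could look like. This is an illustration under stated assumptions, not DeepSpeed's actual implementation: the helper name `_maybe_broadcast_loss`, the `monitor_enabled` flag, and the `src_rank`/`group` parameters are hypothetical; only the `bcast_loss` flag and the "broadcast anyway when the monitor is enabled" rule come from the PR description.

```python
# Illustrative sketch, not DeepSpeed's code: conditionally broadcast the
# evaluation loss. Only `bcast_loss` and the monitor override come from the
# PR; the names, default source rank, and group handling are assumptions.
import torch
import torch.distributed as dist

def _maybe_broadcast_loss(loss, bcast_loss=True, monitor_enabled=False,
                          src_rank=0, group=None):
    """Broadcast `loss` from `src_rank` only when requested or required."""
    if not (bcast_loss or monitor_enabled):
        # Skip the collective entirely and avoid the communication overhead.
        return loss
    # Make sure the value is a tensor so it can take part in the collective.
    loss_tensor = loss if torch.is_tensor(loss) else torch.tensor(float(loss))
    dist.broadcast(loss_tensor, src=src_rank, group=group)
    return loss_tensor
```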
@tjruwase added this pull request to the merge queue Oct 6, 2023
Merged via the queue into microsoft:master with commit f9698c7 Oct 6, 2023
16 checks passed
mauryaavinash95 pushed a commit to mauryaavinash95/DeepSpeed that referenced this pull request Oct 9, 2023
…ft#4326)


Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>