
Conversation

@tchaton (Contributor) commented Jun 3, 2021

What does this PR do?

Fixes #<issue_number>

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or internal minor changes/refactorings)

PR review

Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

Did you have fun?

Make sure you had fun coding 🙃

codecov bot commented Jun 3, 2021

Codecov Report

Merging #7823 (4794194) into master (36770b2) will decrease coverage by 5%.
The diff coverage is 0%.

@@           Coverage Diff           @@
##           master   #7823    +/-   ##
=======================================
- Coverage      93%     88%    -5%     
=======================================
  Files         199     199            
  Lines       13016   13026    +10     
=======================================
- Hits        12051   11438   -613     
- Misses        965    1588   +623     

@tchaton tchaton marked this pull request as ready for review June 4, 2021 07:21
@tchaton tchaton changed the title from "[CI] Resolve DeepSpeed tests" to "[CI] Small fixes for DeepSpeed." Jun 4, 2021
  model_parallel_context = super().model_sharded_context()

- with model_parallel_context:
+ with torch.cuda.amp.autocast(), model_parallel_context:
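For orientation, a minimal sketch of the shape this override takes (the structure is assumed rather than the PR's exact code; `BasePlugin` is a hypothetical stand-in for the real parent plugin, and a CUDA-capable environment is assumed):

```python
from contextlib import contextmanager, nullcontext

import torch


class BasePlugin:
    def model_sharded_context(self):
        # Stand-in for the real parent plugin's context manager.
        return nullcontext()


class DeepSpeedPlugin(BasePlugin):
    @contextmanager
    def model_sharded_context(self):
        model_parallel_context = super().model_sharded_context()
        # The change under review: enter autocast together with the sharded
        # context, so the model is instantiated while autocast is active.
        with torch.cuda.amp.autocast(), model_parallel_context:
            yield
```

Whether autocast should be active while the weights are created is exactly what the review thread below discusses.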
Contributor
I think for ZeRO 2/3, autocast isn't used as we rely on a custom DeepSpeed FP16Optimizer to handle this, which converts the entire model to half iirc.
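As a side note, a minimal sketch of that flow (assuming a DeepSpeed version whose `deepspeed.initialize` accepts a `config` dict, run under the `deepspeed` launcher on a GPU): precision is driven entirely by the `fp16` section of the engine config, not by `torch.cuda.amp.autocast`.

```python
import deepspeed
import torch

model = torch.nn.Linear(32, 32)
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
}
# The engine's FP16 optimizer handles precision; no autocast is involved.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
# After initialization, the module's weights have been converted to half.
assert all(p.dtype == torch.float16 for p in engine.module.parameters())
```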

Contributor Author

With newer DeepSpeed versions, they check that the weights are created in half precision, and this currently fails in CI.
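Roughly, the kind of check being described (a hypothetical illustration with invented names, not DeepSpeed's actual code):

```python
import torch


def assert_half_precision(module: torch.nn.Module) -> None:
    # Fail the way a strict fp16 weight check would: every parameter must
    # already have been created in half precision.
    for name, param in module.named_parameters():
        if param.dtype != torch.float16:
            raise RuntimeError(f"expected torch.float16 for {name}, got {param.dtype}")
```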

Contributor

Oh, very interesting! I'll check myself when I get time. Do they throw an error if the weights are in half precision, or if they are not in half precision?

@SeanNaren (Contributor) commented Jun 5, 2021

If you go through the special tests (for some reason they pass even though a test fails), there is still a failing test. I've removed a line in #7841 and will debug there, as I'm unsure why the changes in this PR are necessary; I need more information, since the version of DeepSpeed shouldn't have changed.

@carmocca (Contributor) commented Jun 6, 2021

> If you go through the special tests (for some reason they pass even though a test fails), there is still a failing test.

@awaelchli you looked at this already somewhere else, right?

@awaelchli (Contributor)

Yes, here: #7790

@tchaton (Contributor Author) commented Jun 8, 2021

Hey @SeanNaren,

Closing this PR for now.

Best,
T.C

@tchaton tchaton closed this Jun 8, 2021
@Borda Borda deleted the deepspeed_tests branch June 17, 2021 16:54