
add memory parity for PL vs Vanilla #5170

Merged
merged 22 commits into master from test/parity-memory
Dec 23, 2020

Conversation

@Borda (Member) commented Dec 17, 2020

What does this PR do?

adding memory parity...
Resolves #2080

Before submitting

  • Was this discussed/approved via a GitHub issue? (not needed for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing make sure you have read Review guidelines. In short, see the following bullet-list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified; bugfixes should be included in bug-fix release milestones (m.f.X) and features in minor release milestones (m.X.b).

Did you have fun?

Make sure you had fun coding 🙃

@Borda Borda added ci Continuous Integration priority: 1 Medium priority task labels Dec 17, 2020
@Borda Borda added this to the 1.1.x milestone Dec 17, 2020
@pep8speaks commented Dec 17, 2020

Hello @Borda! Thanks for updating this PR.

Line 38:121: E501 line too long (121 > 120 characters)

Comment last updated at 2020-12-23 18:49:05 UTC

@Borda Borda marked this pull request as ready for review December 17, 2020 14:36
@codecov bot commented Dec 17, 2020

Codecov Report

Merging #5170 (d33dae9) into master (27f3f97) will increase coverage by 4%.
The diff coverage is n/a.

@@           Coverage Diff           @@
##           master   #5170    +/-   ##
=======================================
+ Coverage      89%     93%    +4%     
=======================================
  Files         134     134            
  Lines        9942    9942            
=======================================
+ Hits         8873    9256   +383     
+ Misses       1069     686   -383     

- (ParityModuleMNIST, 0.25),  # todo: lower this thr
+ @pytest.mark.parametrize('cls_model,max_diff_speed,max_diff_memory', [
+     (ParityModuleRNN, 0.05, 0.0),
+     (ParityModuleMNIST, 0.25, 0.0),  # todo: lower this thr
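Each tuple above drives one test invocation. Stripped of the pytest machinery, the check amounts to the following sketch (the `measure` helper, its return shape, and the numbers are illustrative assumptions, not the benchmark's real API):

```python
# illustrative stand-ins for the benchmark's model classes
class ParityModuleRNN: ...
class ParityModuleMNIST: ...

def measure(cls_model):
    # assumption: returns (vanilla_seconds, vanilla_mb), (lightning_seconds, lightning_mb)
    return (1.00, 50.0), (1.02, 50.0)

PARAMS = [
    (ParityModuleRNN, 0.05, 0.0),
    (ParityModuleMNIST, 0.25, 0.0),  # todo: lower this thr
]

for cls_model, max_diff_speed, max_diff_memory in PARAMS:
    (v_time, v_mem), (l_time, l_mem) = measure(cls_model)
    # Lightning may be at most max_diff_speed seconds slower ...
    assert l_time - v_time <= max_diff_speed, "Lightning slower than allowed"
    # ... and may not use more memory than the vanilla run plus the threshold
    assert l_mem - v_mem <= max_diff_memory, "Lightning used more memory"
```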
@SeanNaren (Contributor) commented Dec 20, 2020

Is it really still 0.0? I know you investigated a bit and I was curious. It doesn't seem correct, but maybe that's because the memory difference is so small (the memory usage is tiny). Maybe move to a significant figure like 1e-5?
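A tolerance-based check along these lines would accept measurement noise while still catching a real regression. This is a hedged sketch; `assert_memory_parity` and the MB figures are hypothetical, not taken from the benchmark code:

```python
def assert_memory_parity(vanilla_mb: float, lightning_mb: float, max_diff: float) -> None:
    """Hypothetical helper: fail when the Lightning run's peak memory (MB)
    exceeds the vanilla PyTorch run's by more than the allowed threshold."""
    diff = lightning_mb - vanilla_mb
    assert diff <= max_diff, f"memory diff {diff:.2e} MB exceeds threshold {max_diff:.2e} MB"

# identical peaks: even an exact 0.0 threshold passes
assert_memory_parity(12.5, 12.5, max_diff=0.0)
# a 1e-5 threshold tolerates tiny float noise without masking a real leak
assert_memory_parity(12.5, 12.5 + 1e-6, max_diff=1e-5)
```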

@Borda (Member, Author)

I think so; the model is super small and we run just 4 epochs.

Contributor

Not sure adding a memory check for such small models makes sense.

@awaelchli (Member) commented Dec 23, 2020

What difference does it make how big the models are? max_diff_memory here is the difference between the PyTorch run and the Lightning run with the SAME model. It's perfectly fine if Lightning uses the same amount of memory as PyTorch; in fact, how would you even explain any other number? There is no logging, no fancy Lightning features, nothing that should occupy extra memory on the GPU.
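The measurement the comment describes (run the same model through two training loops, then compare the peaks) can be sketched on CPU with `tracemalloc` standing in for `torch.cuda.max_memory_allocated`; the workloads and names here are illustrative, not the benchmark's actual code:

```python
import tracemalloc

def peak_bytes(run_fn) -> int:
    """Peak Python-heap usage of a callable; in the real benchmark this role
    is played by torch.cuda.max_memory_allocated after each run."""
    tracemalloc.start()
    run_fn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

# stand-in workloads: both "runs" allocate the same buffers, mirroring
# vanilla PyTorch vs. Lightning training the SAME model
def vanilla_run():
    buf = [0.0] * 100_000

def lightning_run():
    buf = [0.0] * 100_000

# identical workloads should yield near-identical peaks, hence a near-zero diff
diff = abs(peak_bytes(lightning_run) - peak_bytes(vanilla_run))
```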

@Borda Borda requested a review from SeanNaren December 20, 2020 21:06
benchmarks/generate_comparison.py (resolved)

benchmarks/test_basic_parity.py (resolved)
benchmarks/test_basic_parity.py (resolved)
@Borda Borda requested a review from tchaton December 21, 2020 13:29
tchaton and others added 2 commits December 23, 2020 09:40
@Borda Borda added the ready PRs ready to be merged label Dec 23, 2020
@SeanNaren (Contributor) left a comment

The memory check for a small model doesn't make sense, but I'd keep it for a future test where we might try a larger model (I think we should).

@Borda Borda enabled auto-merge (squash) December 23, 2020 13:08
@awaelchli (Member) left a comment

LGTM. See my comment above regarding model size (which does not matter, if I understand the tests correctly).

@Borda Borda merged commit 6adc1b3 into master Dec 23, 2020
@Borda Borda deleted the test/parity-memory branch December 23, 2020 22:13
Borda added a commit that referenced this pull request Jan 6, 2021
* refactor
* memory
* show
* clean
* clean
* try
* device
* reset
* fix
* fix
* mean
* hook
* format
* add todo

Co-authored-by: chaton <thomas@grid.ai>

(cherry picked from commit 6adc1b3)
Labels
ci Continuous Integration · priority: 1 Medium priority task · ready PRs ready to be merged
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[test] Add memory parity tests
5 participants