
Fully deprecate variadic inputs of checkpoint_sequential #25985

Closed

Conversation

@sublee (Contributor) commented Sep 11, 2019

Support for variadic inputs of checkpoint_sequential was deprecated in #21006. For PyTorch 1.2, this case emits a DeprecationWarning; since PyTorch 1.3, it should simply fail with a TypeError. This patch removes the PyTorch 1.2 DeprecationWarning.
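A minimal sketch of the kind of transition shim this PR removes. This is illustrative only, not the actual torch source: the 1.2-era compatibility path accepted variadic `*inputs` and warned, whereas dropping the shim makes extra positional inputs fail naturally.

```python
import warnings

# Hedged sketch, not the actual torch source: a 1.2-style shim that
# accepts *inputs for backward compatibility and warns on the
# deprecated variadic form. Removing this shim (as the PR does) means
# the signature takes a single `input` and extras raise TypeError.
def checkpoint_sequential(functions, segments, *inputs):
    if len(inputs) > 1:
        warnings.warn(
            "Passing multiple inputs to checkpoint_sequential is "
            "deprecated; pass a single input instead.",
            DeprecationWarning,
        )
    # Keep a tuple for the deprecated form, unwrap the single input
    # otherwise; the real segment-running logic is elided here.
    return inputs if len(inputs) > 1 else inputs[0]

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    checkpoint_sequential([], 2, "x", "y")  # deprecated variadic call
    assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

After the shim is gone, the same call site raises a TypeError at argument binding time, which is the post-1.3 behavior the PR description asks for.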

@pytorchbot pytorchbot added module: checkpoint module: tests labels Sep 11, 2019
@soumith (Member) commented Sep 24, 2019

@pytorchbot rebase this please

@sublee (Contributor, Author) commented Sep 24, 2019

@soumith Thank you for reviewing it.

@yf225 (Contributor) commented Sep 25, 2019

@sublee We can't land it yet because a test is failing:

Sep 24 20:52:17 ======================================================================
Sep 24 20:52:17 ERROR: test_checkpoint_module_list_multiple_args (__main__.TestCheckpoint)
Sep 24 20:52:17 ----------------------------------------------------------------------
Sep 24 20:52:17 Traceback (most recent call last):
Sep 24 20:52:17   File "test_utils.py", line 188, in test_checkpoint_module_list_multiple_args
Sep 24 20:52:17     torch.randn(1, 60, requires_grad=True)
Sep 24 20:52:17   File "test_utils.py", line 69, in _check_checkpoint_sequential
Sep 24 20:52:17     out = checkpoint_sequential(model_to_compare, num_chunks, *detached_inputs)
Sep 24 20:52:17   File "/opt/conda/lib/python3.6/site-packages/torch/utils/checkpoint.py", line 217, in checkpoint_sequential
Sep 24 20:52:17     preserve_rng_state=preserve_rng_state)
Sep 24 20:52:17   File "/opt/conda/lib/python3.6/site-packages/torch/utils/checkpoint.py", line 155, in checkpoint
Sep 24 20:52:17     return CheckpointFunction.apply(function, preserve, *args)
Sep 24 20:52:17   File "/opt/conda/lib/python3.6/site-packages/torch/utils/checkpoint.py", line 62, in forward
Sep 24 20:52:17     if preserve_rng_state:
Sep 24 20:52:17 RuntimeError: bool value of Tensor with more than one value is ambiguous
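The failure above is consistent with the test still using the old variadic call against a single-input signature. Here is a hypothetical pure-Python sketch (the `FakeTensor` class and the signature shown are illustrative, not the torch source) of how a stray second positional tensor can bind to a keyword parameter such as `preserve_rng_state`, whose truthiness check then raises exactly this error:

```python
class FakeTensor:
    # Stand-in mirroring torch.Tensor semantics: truth-testing a
    # multi-element tensor raises rather than returning True/False.
    def __bool__(self):
        raise RuntimeError(
            "bool value of Tensor with more than one value is ambiguous")

# Hypothetical post-deprecation signature: one positional input.
def checkpoint_sequential(functions, segments, input, preserve_rng_state=True):
    if preserve_rng_state:  # fails if a tensor was bound here by mistake
        pass
    return input

x, y = FakeTensor(), FakeTensor()
try:
    checkpoint_sequential([], 2, x, y)  # y binds to preserve_rng_state
except RuntimeError as exc:
    print(exc)  # bool value of Tensor with more than one value is ambiguous
```

Under this reading, the fix is simply to update the test call sites to pass a single input, which matches the follow-up commits on this PR.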

@soumith (Member) left a comment

please fix test

@sublee (Contributor, Author) commented Sep 26, 2019

@yf225, @soumith Sorry for my mistake. I ran only the test cases I added in #21006, but I should have run the full suite. I have fixed those failures; please review again.

@vincentqb vincentqb added the triaged label Sep 26, 2019
@sublee (Contributor, Author) commented Oct 9, 2019

@soumith I think you may have missed the last update. Would you please review this PR again?

@sublee force-pushed the deprecate-checkpoint-sequential-varargs branch from aa20b4a to a07700f (Oct 11, 2019)
@sublee (Contributor, Author) commented Oct 11, 2019

This patch was intended to ship with v1.3.0; however, PyTorch v1.3.0 was released today. I have updated the version to v1.4 in the deprecation documentation and rebased this branch onto the current master.

albanD approved these changes Dec 3, 2019
@albanD (Contributor) left a comment

A few details in the test to make it look slightly better; then I'll merge.

Thanks for the PR !

(Two review comments on test/test_utils.py, outdated and resolved)
@albanD force-pushed the deprecate-checkpoint-sequential-varargs branch from 01bf464 to e61b020 (Dec 4, 2019)
@albanD (Contributor) commented Dec 4, 2019

Just rebased on top of master for CI.

And again as I rebased on top of a broken master...

@facebook-github-bot (Contributor) left a comment

@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@albanD force-pushed the deprecate-checkpoint-sequential-varargs branch from e61b020 to 5829119 (Dec 4, 2019)
soumith approved these changes Dec 4, 2019

@facebook-github-bot (Contributor) left a comment

@albanD is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) commented Dec 10, 2019

@albanD merged this pull request in fa251cf.

wuhuikx pushed a commit to wuhuikx/pytorch that referenced this issue Jan 30, 2020
Summary:
Support for variadic inputs of `checkpoint_sequential` was deprecated in pytorch#21006. For PyTorch 1.2, this case emits a `DeprecationWarning`; since PyTorch 1.3, it should simply fail with a `TypeError`. This patch removes the PyTorch 1.2 `DeprecationWarning`.
Pull Request resolved: pytorch#25985

Differential Revision: D18809875

Pulled By: albanD

fbshipit-source-id: e84dd8629c04979c4b2dc63e8ada94292e8cedd0
@zou3519 zou3519 added the module: bc-breaking label Mar 31, 2020
@zou3519 (Contributor) commented Apr 8, 2020

Hmm, if I am understanding this correctly, it is not BC-breaking, because it enables behavior that used to throw a TypeError.

@zou3519 zou3519 removed the module: bc-breaking label Apr 8, 2020
Labels: Merged, module: checkpoint, module: tests, open source, triaged

10 participants