
add method to make tensor constant for debug purposes #30458

Open
noskill opened this issue Nov 26, 2019 · 8 comments
Labels
module: autograd (Related to torch.autograd, and the autograd engine in general) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@noskill

noskill commented Nov 26, 2019

🚀 Feature

Add a method to make a tensor constant:

tensor.make_const()

Motivation

This option would make it easy to find in-place operations that cause exceptions such as:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

Instead, a RuntimeError should be thrown as soon as an in-place operation is performed on a constant tensor.
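
For context, a minimal sketch (assuming a recent PyTorch build) that reproduces the error; with the proposed `make_const()`, the error would instead be raised at the in-place call itself:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.sigmoid(x)   # sigmoid's backward saves its output y
y.mul_(2)              # in-place op silently invalidates the saved output
y.sum().backward()     # RuntimeError: one of the variables needed for gradient
                       # computation has been modified by an inplace operation
```

With the proposed feature, calling `y.make_const()` right after the `sigmoid` call would turn the `y.mul_(2)` line itself into the point of failure, which is much easier to locate.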

Alternatives

Perhaps it would be even better to make all tensors constant by default, since in-place operations are rarely needed.

cc @ezyang @ssnl @albanD @zou3519 @gqchen

@zou3519
Contributor

zou3519 commented Nov 26, 2019

https://pytorch.org/docs/stable/autograd.html#torch.autograd.detect_anomaly might solve the problem. IIRC it should tell you where in the forward pass you used the op that caused the error in the backward pass.
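
For illustration, a minimal sketch of anomaly mode (the exact output depends on the PyTorch version):

```python
import torch

with torch.autograd.detect_anomaly():
    x = torch.randn(3, requires_grad=True)
    y = torch.sigmoid(x)
    y.mul_(2)            # offending in-place op
    y.sum().backward()   # still raises, but also prints the traceback of the
                         # forward call (here: the torch.sigmoid line) whose
                         # saved value the backward pass needed
```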

@albanD added the module: autograd and triaged labels Nov 27, 2019
@albanD
Collaborator

albanD commented Nov 27, 2019

I would agree that this would be a useful feature to have in general.
For this particular use case, you can indeed, as @zou3519 mentioned, use anomaly mode to pinpoint the error.

@zou3519
Contributor

zou3519 commented Nov 27, 2019

One thing we could take design inspiration from is numpy's flags: https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flags.html. One can set a numpy array's WRITEABLE flag to off, making the array read-only.
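
For comparison, the numpy behaviour in question looks roughly like this:

```python
import numpy as np

a = np.arange(4.0)
a.flags.writeable = False   # or equivalently a.setflags(write=False)
a[0] = 1.0                  # ValueError: assignment destination is read-only
```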

@noskill
Author

noskill commented Nov 28, 2019

Yes, I am trying to pinpoint the place with detect_anomaly and tensor.clone(), so making all the data read-only would be really helpful.

@ezyang
Contributor

ezyang commented Dec 3, 2019

We've talked about this internally before too; it's a generally good idea that we should support.

@noskill
Author

noskill commented Dec 12, 2019

Where should the check for the flag be placed? I am not familiar with the source yet.

@ezyang
Contributor

ezyang commented Dec 12, 2019

Not clear; the design for this may be fairly involved :)

@t-vi
Collaborator

t-vi commented May 29, 2020

Maybe one could make this more general: we could introduce a hook that fires when a tensor is modified in place. This is a bit more complicated, since what we would really want is "if the storage underlying the tensor is modified in place", but I think it would have more general applications, too (e.g. caching computations which only need to be re-computed when their inputs change).
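
One possible building block, as a rough sketch only: autograd already tracks in-place modifications through a per-tensor version counter, exposed via the internal `_version` attribute (not a public API, so this is purely illustrative of the caching use case):

```python
import torch

class CachedCompute:
    """Recompute f(x) only when x has been modified in place since the last call."""
    def __init__(self, f):
        self.f = f
        self._seen_version = None
        self._result = None

    def __call__(self, x):
        # x._version is bumped by in-place ops on x (and on views of x)
        if self._result is None or x._version != self._seen_version:
            self._result = self.f(x)
            self._seen_version = x._version
        return self._result

x = torch.randn(5)
cached_sum = CachedCompute(torch.sum)
cached_sum(x)   # computes
cached_sum(x)   # returns the cached result
x.add_(1.0)     # in-place modification bumps the version counter
cached_sum(x)   # recomputes
```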
