
Add documentation for torch.as_strided #22842

Merged · kianasun wants to merge 6 commits

Conversation

@kianasun (Contributor) commented Jul 13, 2019:

Documentation for torch.as_strided and Tensor.as_strided is missing, as mentioned in #9886.

@pytorchbot added the module: docs label (Related to our documentation, both in docs/ and docblocks) on Jul 13, 2019.
@facebook-github-bot (Contributor) left a comment:

@soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

r"""
as_strided(input, size, stride, storage_offset=0) -> Tensor

Create a `torch.Tensor` from an existing `torch.Tensor` :attr:`input` with
(Collaborator) commented on this line:

I think it is important to mention that this creates a view of an existing tensor. :)

(Contributor, author) replied:

You are right. Would "Create a view of an existing torch.Tensor ..." be better?

(Collaborator) replied:

Yes, that sounds great! Thank you!
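
For readers following along, a minimal sketch of the view semantics under discussion (plain PyTorch; the printed values assume a fresh contiguous tensor):

    import torch

    x = torch.arange(6.)   # underlying storage: [0., 1., 2., 3., 4., 5.]
    # Reinterpret the same storage as 2x2: stepping down a row advances the
    # storage index by 1, stepping across a column advances it by 2.
    y = torch.as_strided(x, size=(2, 2), stride=(1, 2))
    print(y)               # tensor([[0., 2.], [1., 3.]])

    y[0, 0] = 42.          # y is a view, so this writes into x's storage
    print(x[0])            # tensor(42.)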

(Collaborator) commented:

Also, could you add an entry for the method version at https://github.com/pytorch/pytorch/blob/f2f80744be347fab6921fc55181b55dc86c7e07b/torch/_tensor_docs.py? It should be easy and could just refer to torch.as_strided.

(Collaborator) commented:

Actually, I think torch.as_strided probably should not exist; only the method version should, similar to resize.

cc @gchanan, who probably has a better opinion on this.

(Contributor) commented:

I think it's fine; torch.as_tensor exists (because it needs to), so other as_ functions seem like fair game.

(Contributor, author) replied:

Cool. I will update both soon.
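
For illustration, a minimal sketch of the two spellings being discussed, which produce the same view:

    import torch

    x = torch.arange(4.)
    a = torch.as_strided(x, (2, 2), (2, 1))   # function form
    b = x.as_strided((2, 2), (2, 1))          # method form
    print(torch.equal(a, b))                  # True -- both view the same storage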

@ssnl (Collaborator) left a comment:

Thank you!

@zou3519 (Contributor) left a comment:

Wait, I thought it was intentional that torch.as_strided doesn't have documentation because it is dangerous and that users shouldn't be using it. Can we at least add a big red warning that this is dangerous?

@kianasun (Contributor, author) replied:

> Wait, I thought it was intentional that torch.as_strided doesn't have documentation because it is dangerous and that users shouldn't be using it. Can we at least add a big red warning that this is dangerous?

I think we can do that. Do we need to write some specific reasons why this is dangerous?

@zou3519 (Contributor) commented Jul 16, 2019:

> I think we can do that. Do we need to write some specific reasons why this is dangerous?

Things we should write:

  • It can lead to out-of-bounds memory access, resulting in garbage data.
  • You should only use this function if you know what you're doing; a lot of PyTorch functions are implemented internally with it.
  • It should carry the same warning as Tensor.expand (https://pytorch.org/docs/master/tensors.html#torch.Tensor.expand); see the sketch after this list.
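
A hedged sketch of the expand-style overlap the last bullet refers to: a zero stride makes several indices alias one memory location (the same trick Tensor.expand uses internally), and writing through such a view is what the warning guards against:

    import torch

    x = torch.zeros(3)
    # stride 0 along dim 0: every "row" of y aliases the same three elements of x
    y = x.as_strided((3, 3), (0, 1))
    y[0, 0] = 7.0
    print(y[2, 0])    # tensor(7.) -- the write is visible through every aliased row

    # If you need to write to such a tensor, clone it first so it owns its memory:
    z = y.clone()
    z[0, 0] = 0.0
    print(y[0, 0])    # tensor(7.) -- the clone's write does not leak back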

@ssnl (Collaborator) commented Jul 16, 2019:

I think it is not that dangerous. You can't access data outside the underlying storage. The important thing is to mention that this returns a view. We can check what NumPy says about its counterpart in the docs.
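
A quick sketch supporting this point: PyTorch rejects size/stride combinations that would reach past the underlying storage, so an out-of-bounds view raises instead of reading garbage (the exact error text varies across versions):

    import torch

    x = torch.arange(4.)
    try:
        # 10 elements at stride 1 would need a storage of 10 floats; x only has 4
        torch.as_strided(x, (10,), (1,))
    except RuntimeError as e:
        print("rejected:", e)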

@zou3519 (Contributor) commented Jul 16, 2019:

> I think it is not that dangerous. You can't access data outside the underlying storage. The important thing is to mention that this returns a view. We can check what NumPy says about its counterpart in the docs.

https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.lib.stride_tricks.as_strided.html for reference @kianasun
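
For comparison, a minimal sketch of the NumPy counterpart linked above. One caveat worth remembering: NumPy strides are measured in bytes, while torch strides count elements:

    import numpy as np
    from numpy.lib.stride_tricks import as_strided

    a = np.arange(6, dtype=np.float64)
    # same (2, 2) view as the torch examples, but strides are given in bytes
    b = as_strided(a, shape=(2, 2), strides=(1 * a.itemsize, 2 * a.itemsize))
    print(b)   # [[0. 2.]
               #  [1. 3.]]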

@kianasun (Contributor, author) commented:

@zou3519 Updated. :)


Excerpt of the revised docstring under review:

    .. warning::
        :func:`as_strided` manipulates the internal data structure of tensors.
        If done incorrectly, it can lead to the access of invalid memory and ...
(Contributor) commented on this line:
@ssnl says this isn't possible anymore. I played around with as_strided as well and noticed there are indeed checks, so maybe this warning isn't applicable anymore.

(Contributor, author) replied:

Ok. I will remove the first paragraph.

    ... vectorized) may result in incorrect behavior. If you need to write to
    the tensors, please clone them first.

    You should only use this function if you know what you're doing. A lot ...
(Contributor) commented on this line:

> You should only use this function if you know what you're doing.

I know I suggested this, but this sounds bad. Something like the following would be better:

"Many PyTorch functions that return a view on a tensor are internally implemented with this function. Those functions, like torch.expand, are easier to read and are therefore more advisable to use"

@xuhdev (Collaborator) left a comment:

There is also an in-place version, as_strided_, which you probably want to document as well (in docs/source/tensors.rst). The corresponding documentation-test skip entries should also be deleted:

pytorch/test/test_torch.py (lines 196–197 at 7d055c2):

    'as_strided',
    'as_strided_',
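
For context, a minimal sketch of what the in-place variant does: it restrides the tensor itself rather than returning a new view:

    import torch

    t = torch.arange(4.)
    t.as_strided_((2, 2), (2, 1))   # mutates t's own size/stride metadata
    print(t.shape)                  # torch.Size([2, 2])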

@ssnl (Collaborator) commented Jul 17, 2019:

I wouldn't document the in-place version; t_ gives us enough trouble.

@xuhdev (Collaborator) commented Jul 17, 2019:

@ssnl Why don't we unexpose that function if we do not want users to use it (this should probably be discussed in a different thread)? Leaving a function undocumented doesn't prevent adventurous users from using it.

@ssnl (Collaborator) commented Jul 17, 2019:

@xuhdev We should hide it, IMHO, just like many other things; it's just that no one has done it. It would probably require a deprecation cycle and fixing the internal call sites.

@facebook-github-bot (Contributor) left a comment:

@soumith has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@soumith (Member) commented Jul 19, 2019:

@pytorchbot rebase this please

@facebook-github-bot (Contributor) left a comment:

@soumith has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@soumith (Member) commented Jul 20, 2019:

@pytorchbot rebase this please

@facebook-github-bot (Contributor) left a comment:

@soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@soumith (Member) commented Jul 22, 2019:

@pytorchbot rebase this please

@facebook-github-bot (Contributor) left a comment:

@soumith has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@facebook-github-bot (Contributor) commented:

@soumith merged this pull request in 45d3f49.

Labels: Merged · module: docs (Related to our documentation, both in docs/ and docblocks) · open source

10 participants