adding more docs for torch.* functions #430
Conversation
Can you wrap the strings around ~80th column? Otherwise they're going to wrap in the middle of a word when printed in the interpreter.
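One way to satisfy this review comment is to re-wrap long docstring text with the standard-library `textwrap` module before passing it to `add_docstr`. This is a minimal sketch (the `raw` string here is an illustrative stand-in, not the exact docstring from the diff):

```python
import textwrap

# A hypothetical long one-line docstring, as it might look before wrapping.
raw = ("Returns a Tensor where each sub-tensor of input along dimension dim "
       "is normalized such that the p-norm of the sub-tensor is lower than "
       "the value maxnorm.")

# Re-wrap so no line exceeds 80 columns when printed in the interpreter,
# which avoids mid-word wrapping in a standard-width terminal.
wrapped = textwrap.fill(raw, width=80)
```

`textwrap.fill` breaks only at whitespace by default, so words are never split mid-line.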
add_docstr(torch._C.renorm,
"""
Returns a Tensor where each sub-tensor of :attr:`input` along dimension :attr:`dim`
is normalized such that the `p`-norm of the sub-tensor is lower than the value :attr:`maxnorm`.
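The renorm semantics described in this docstring can be sketched in pure Python (no torch dependency); `renorm_rows` is a hypothetical helper name, treating each row of a 2-D list as a sub-tensor along dimension 0:

```python
def renorm_rows(matrix, p, maxnorm):
    """Rescale each row so its p-norm does not exceed maxnorm.

    Rows whose p-norm is already within the bound are left unchanged,
    mirroring the behavior described in the docstring above.
    """
    out = []
    for row in matrix:
        norm = sum(abs(x) ** p for x in row) ** (1.0 / p)
        if norm > maxnorm:
            scale = maxnorm / norm
            out.append([x * scale for x in row])
        else:
            out.append(list(row))
    return out
```

For example, a row `[3.0, 4.0]` has 2-norm 5, so with `maxnorm=1.0` it is scaled by 0.2 to `[0.6, 0.8]`, while a row whose norm is already below 1 passes through untouched.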
If `input` is of shape :math:`(A \times 1 \times B)`, `squeeze(input, 0)` leaves the Tensor unchanged,
but `squeeze(input, 1)` will squeeze the tensor to the shape :math:`(A \times B)`.

.. note:: The returned Tensor shares the storage with the input Tensor,
          so changing the contents of one will change the contents of the other.
@@ -1833,6 +1946,48 @@

add_docstr(torch._C.masked_select,
"""
masked_select(input, mask, out=None) -> Tensor
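The selection behavior this signature documents can be sketched for the 1-D case in pure Python; `masked_select` here is a stand-in for the torch function, keeping only the elements whose mask entry is truthy and returning them as a flat list (torch likewise returns a 1-D result):

```python
def masked_select(values, mask):
    # Keep each element of `values` whose corresponding mask entry is
    # truthy; the result is flattened to one dimension.
    return [v for v, m in zip(values, mask) if m]
```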
@@ -2935,6 +3167,39 @@

add_docstr(torch._C.squeeze,
"""
squeeze(input, dim=None, out=None)

Returns a `Tensor` with all the dimensions of :attr:`input` of size `1` removed.
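The effect of squeeze on a tensor's shape can be sketched in pure Python; `squeeze_shape` is a hypothetical helper that operates on shape tuples only, following the rules in the docstring above:

```python
def squeeze_shape(shape, dim=None):
    # With dim=None, drop every size-1 dimension; with an explicit dim,
    # drop only that dimension, and only when it has size 1.
    if dim is None:
        return tuple(s for s in shape if s != 1)
    if shape[dim] == 1:
        return shape[:dim] + shape[dim + 1:]
    return shape
```

For instance, a shape `(2, 1, 2, 1, 2)` squeezes to `(2, 2, 2)`, while squeezing dimension 0 of `(2, 1, 2)` is a no-op because that dimension has size 2.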
Made the requested changes and pushed to master.
cuda implementation of Gated Linear Unit, fixed issues with genericization
Add bcast flags to BroadcastOp
now only the following are left: