
Conversation

@yuguo68 (Contributor) commented Jul 1, 2022

Stack from ghstack (oldest at bottom):

Trying to fix #80589; see the two corner cases in the issue.
Added the two cases to the unit tests and added device parametrization to the tests.

Edit: the landed version fixes only the first issue in #80589.
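
For context, a minimal illustration of the kind of corner case involved, based on the behavior change discussed further down this thread (not necessarily the exact example from #80589): when an integral dtype is requested with floating-point arguments, the element count is now derived from the original floating-point values.

import torch

# On master this printed tensor([1, 2, 3, 4]); with this PR it prints
# tensor([1, 2, 3]), matching numpy and the int32 path (see the ONNX
# discussion below for the REPL transcript).
print(torch.arange(1, 5.5, 1.5, dtype=torch.int64))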

@facebook-github-bot (Contributor) commented Jul 1, 2022


✅ No Failures (0 Pending)

As of commit 22ef735 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@yuguo68 requested review from ngimel and ezyang July 1, 2022 05:45
yuguo68 added a commit that referenced this pull request Jul 1, 2022
ghstack-source-id: 5af3b96
Pull Request resolved: #80758
@ezyang (Contributor) commented Jul 4, 2022

It would have been nice for the device parametrization to be stacked separately (no need to do it now; I'll review as-is).

torch.arange(1, 0, -1, out=res2)
self.assertEqual(res1, res2, atol=0, rtol=0)
torch.arange(1, 2, 1, out=res2)
self.assertEqual(res1, res2, atol=0, rtol=0)

# FloatTensor
- res1 = torch.arange(0.6, 0.89, 0.1, out=torch.FloatTensor())
+ out = torch.tensor([], dtype=torch.float, device=device)
+ res1 = torch.arange(0.6, 0.89, 0.1, out=out)
Review comment (Contributor):
good riddance, thx
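
(Aside, a minimal sketch of why the device-aware out tensor above is an improvement, assuming the usual device-parametrized test setup: torch.FloatTensor() always allocates on CPU, so the old out= argument could never exercise other devices, while an empty tensor created with an explicit device can. The helper name below is made up for illustration.)

import torch

def check_arange_out(device):
    # Hypothetical helper mirroring the new pattern in the test above.
    out = torch.tensor([], dtype=torch.float, device=device)
    res = torch.arange(0.6, 0.89, 0.1, out=out)
    assert res.device.type == torch.device(device).type

check_arange_out("cpu")
if torch.cuda.is_available():
    check_arange_out("cuda")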

AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, result.scalar_type(), "arange_cpu", [&]() {
  using accscalar_t = at::acc_type<scalar_t, false>;
  auto xstart = start.to<accscalar_t>();
  auto xend = end.to<accscalar_t>();
  auto xstep = step.to<accscalar_t>();

  TORCH_CHECK(xstep > 0 || xstep < 0, "step must be nonzero");
Review comment (Contributor):
Lol, are there some floating-point shenanigans that make xstep != 0 invalid here?

Reply (Contributor):
preexisting condition
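
One plausible floating-point reason (my reading, not stated in the thread): written as xstep > 0 || xstep < 0, the check also rejects a NaN step, which a plain xstep != 0 would let through, because every comparison with NaN other than != is false.

# Quick Python illustration of the NaN behavior described above.
xstep = float("nan")
print(xstep != 0)               # True  -- a plain nonzero check passes for NaN
print(xstep > 0 or xstep < 0)   # False -- the kernel's check rejects NaN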

r = torch.arange(0, 6, 3, device=device)
self.assertEqual(r.min(), 0)
self.assertEqual(r.max(), 3)
self.assertEqual(r.numel(), 2)
Review comment (Contributor):
this is the new test

r = torch.arange(0, -5, -2, device=device)
self.assertEqual(r.min(), -4)
self.assertEqual(r.max(), 0)
self.assertEqual(r.numel(), 3)
Review comment (Contributor):
and this too, the test here is so bad woof
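
For reference, the expected counts in these two tests follow from the usual arange length rule over the half-open interval [start, end), numel = ceil((end - start) / step). A small sketch (my restatement, not code from the PR):

import math

def arange_numel(start, end, step):
    # Standard arange length rule; clamp to zero for empty ranges.
    return max(math.ceil((end - start) / step), 0)

assert arange_numel(0, 6, 3) == 2    # elements [0, 3]
assert arange_numel(0, -5, -2) == 3  # elements [0, -2, -4]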

@ezyang (Contributor) commented Jul 4, 2022

onnx failure looks real

@yuguo68 closed this Jul 5, 2022
@yuguo68 reopened this Jul 5, 2022
@yuguo68 (Contributor, Author) commented Jul 5, 2022

> onnx failure looks real

Thanks for reviewing. Yes, it turns out that in ONNX we always cast the args (start, end, step) to the desired dtype before computing the size.
On master:

>>> import torch
>>> import numpy as np
>>> np.arange(1, 5.5, 1.5, dtype=np.int32)
array([1, 2, 3], dtype=int32)
>>> np.arange(1, 5.5, 1.5, dtype=np.int64)
array([1, 2, 3])
>>> torch.arange(1, 5.5, 1.5, dtype=torch.int32)
tensor([1, 2, 3], dtype=torch.int32)
>>> torch.arange(1, 5.5, 1.5, dtype=torch.int64)
tensor([1, 2, 3, 4])

This PR changes the last one to:

>>> torch.arange(1, 5.5, 1.5, dtype=torch.int64)
tensor([1, 2, 3])

but for onnx:

>>> onnx.arange(1, 5.5, 1.5, dtype=torch.int32)
tensor([1, 2, 3, 4], dtype=torch.int32)
>>> onnx.arange(1, 5.5, 1.5, dtype=torch.int64)
tensor([1, 2, 3, 4])

The failed unit test compares torch and ONNX with int64, so it works on master but fails with this PR. We may need to change the implementation in torch.onnx to make it consistent.
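
A sketch of the discrepancy as I read it (illustrative only; the real logic lives in the C++ arange kernel and in torch.onnx, and the function names below are made up): the ONNX path casts start/end/step to the target integer dtype before computing the size, while after this PR eager mode computes the size from the original floating-point values and only then produces integer outputs.

import math

def numel_cast_first(start, end, step):
    # ONNX-style: truncate the scalars to the integer dtype, then compute the size.
    s, e, st = int(start), int(end), int(step)
    return math.ceil((e - s) / st)

def numel_compute_first(start, end, step):
    # Behavior after this PR: compute the size from the original values.
    return math.ceil((end - start) / step)

print(numel_cast_first(1, 5.5, 1.5))     # 4 -> tensor([1, 2, 3, 4])
print(numel_compute_first(1, 5.5, 1.5))  # 3 -> tensor([1, 2, 3])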

yuguo68 added a commit that referenced this pull request Jul 6, 2022
ghstack-source-id: bf36287
Pull Request resolved: #80758
@ezyang (Contributor) commented Jul 6, 2022

@pytorchbot merge -g

@pytorchmergebot (Collaborator) commented:

@pytorchbot successfully started a merge job. Check the current status here

@github-actions bot (Contributor) commented Jul 6, 2022

Hey @yuguo68.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR.
The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc), and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc).
The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@yuguo68 changed the title from "fix two corner cases of torch.arange" to "fix a corner case of torch.arange" Jul 6, 2022
@yuguo68 added the release notes: python_frontend and topic: bug fixes labels Jul 6, 2022
facebook-github-bot pushed a commit that referenced this pull request Jul 8, 2022
Summary:
Trying to fix #80589; see the two corner cases in the issue.
Added the two cases to the unit tests and added device parametrization to the tests.

Pull Request resolved: #80758
Approved by: https://github.com/ezyang

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/5e9136c24cf52d9da2e55ecb2049233c765f913b

Reviewed By: mehtanirav

Differential Revision: D37687413

Pulled By: yuguo68

fbshipit-source-id: befd1e384ff5d13272314bccac1e830b3c8db8de
@facebook-github-bot deleted the gh/yuguo68/2/head branch July 10, 2022 14:17
@yanbing-j mentioned this pull request Mar 9, 2023
Labels: cla signed, Merged, release notes: python_frontend, topic: bug fixes