
Conversation

@kshitij12345 (Collaborator) commented Sep 7, 2020

Fixes #44273

TODO

  • Add test

@vadimkantorov (Contributor) commented Sep 7, 2020

Should this be made into a reusable helper, so it's more discoverable if a similar check is adopted for other ops?

dr-ci bot commented Sep 7, 2020

💊 CI failures summary and remediations

As of commit 5f4c242 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed



@kshitij12345 (Collaborator, Author) commented:

> Should this be made into some reusable helper? So it's more discoverable if a similar check is decided for other ops

Makes sense.

@zhangguanheng66 added the `module: operators` and `triaged` (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) labels Sep 7, 2020
```diff
@@ -358,6 +358,9 @@ Tensor& prod_out(Tensor& result, const Tensor& self, Dimname dim,

 Tensor &mean_out_cpu_gpu(Tensor &result, const Tensor &self, IntArrayRef dim,
                          bool keepdim, c10::optional<ScalarType> opt_dtype) {
+  auto dim_set = std::set<IntArrayRef::value_type>(dim.cbegin(), dim.cend());
```
Contributor (review comment):
This is probably not actually the most efficient way to do the duplicate check, since it involves doing a fairly unnecessary dynamic allocation for the set. Probably quickest when number of dims is small (which it should be usually) is just the quadratic nested loops version.

@kshitij12345 (Collaborator, Author) replied Sep 8, 2020:

Makes sense.
What I had in mind was something like

```cpp
if (dim.size() < 10) { // 10 is just some heuristic value
  // for-loop version
} else {
  // set version
}
```

Let me know if that sounds good and, if you approve, what the heuristic value should be.

Thank You!

```python
def test_mean_repeated_dim(self, device):
    x = torch.randn(3, 3, 3, 3, device=device)
    with self.assertRaisesRegex(RuntimeError, r'mean: repeated dimension in `dim` \(\[0, 0\]\)'):
        torch.mean(x, dim=(0, 0))
```
Contributor (review comment):

@mruberry So, as was mentioned in the original issue, there are a bunch of operators which take in a list of dimensions. It seems like it would be useful to easily run a version of this test for all of the operators that do this :>

Collaborator (reply):

Good idea. I've made a note.

@kshitij12345 (Collaborator, Author) replied:

I had quickly searched the file below
https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml
for `int[1]`, based on the signature of

```yaml
- func: mean.dim(Tensor self, int[1] dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor
```

@facebook-github-bot (Contributor) left a comment:

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

codecov bot commented Sep 8, 2020

Codecov Report

Merging #44281 into master will increase coverage by 0.00%.
The diff coverage is n/a.

Impacted file tree graph

```
@@           Coverage Diff           @@
##           master   #44281   +/-   ##
=======================================
  Coverage   69.24%   69.24%
=======================================
  Files         381      381
  Lines       47573    47573
=======================================
+ Hits        32942    32944    +2
+ Misses      14631    14629    -2
```

| Impacted Files | Coverage Δ |
| --- | --- |
| torch/utils/_benchmark/utils/common.py | 78.99% <0.00%> (+1.68%) ⬆️ |

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 626e410...5f4c242. Read the comment docs.

@kshitij12345 kshitij12345 changed the title [fix] torch.mean throw error if dim is repeated [fix] ReduceOps throw error if dim is repeated Sep 9, 2020
@facebook-github-bot (Contributor) left a comment:

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ezyang (Contributor) commented Sep 9, 2020

Ooh, this new version is much better, thanks!

@facebook-github-bot (Contributor) left a comment:

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ezyang added the `module: xla` (Related to XLA support) label Sep 9, 2020
@ezyang (Contributor) commented Sep 9, 2020

cc @ailzhang on the xla bit

@kshitij12345 (Collaborator, Author) commented:

Gentle Ping :)

@facebook-github-bot (Contributor) commented:

@ezyang merged this pull request in 42f9f2f.

xuzhao9 pushed a commit that referenced this pull request Sep 18, 2020
Summary:
Fixes #44273

TODO

* [x] Add test

Pull Request resolved: #44281

Reviewed By: zhangguanheng66

Differential Revision: D23569004

Pulled By: ezyang

fbshipit-source-id: 1ca6523fef168c8ce252aeb7ca418be346b297bf
Labels: Merged · `module: xla` (Related to XLA support) · open source · `triaged` (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Successfully merging this pull request may close these issues:

  • mean with repeated dim gives inconsistent results for cpu and cuda

7 participants