Conversation

@TomHeaven (Contributor) commented Jan 6, 2021

This pull request fixes #42271 by manually specifying the template data type for `Tensor::item<T>()` calls in `aten/src/THC/generic/THCTensorMasked.cu`.

No changes in submodules are expected, since I pulled the latest submodules from the PyTorch master branch.

@facebook-github-bot

Hi @TomHeaven!

Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have you on file.

In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot

facebook-github-bot commented Jan 6, 2021

💊 CI failures summary and remediations

As of commit 4d56f7c (more details on the Dr. CI page):


  • 1/2 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)
  • 1/2 broken upstream at merge base 2ac180a on Jan 06 from 7:16am to 8:01am

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


This comment was automatically generated by Dr. CI.

@ezyang

ezyang commented Jan 6, 2021

Need CLA


  // Determine our output size
- ptrdiff_t totalElements = THTensor_wrap(mask).sum().item<ptrdiff_t>();
+ ptrdiff_t totalElements = THTensor_wrap(mask).sum().item<int64_t>();

Why should the return type be ptrdiff_t rather than int64_t?

Suggested change:
- ptrdiff_t totalElements = THTensor_wrap(mask).sum().item<int64_t>();
+ int64_t totalElements = THTensor_wrap(mask).sum().item<int64_t>();


  // Determine our output size
- ptrdiff_t totalElements = THTensor_wrap(mask).sum().item<ptrdiff_t>();
+ ptrdiff_t totalElements = THTensor_wrap(mask).sum().item<int64_t>();

Same as below

Suggested change:
- ptrdiff_t totalElements = THTensor_wrap(mask).sum().item<int64_t>();
+ int64_t totalElements = THTensor_wrap(mask).sum().item<int64_t>();

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

@facebook-github-bot left a comment


@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot

@ezyang merged this pull request in 8706187.

hwangdeyu pushed a commit to hwangdeyu/pytorch that referenced this pull request Jan 14, 2021
Summary:
This pull request fixes #42271 by manually specifying the template data type of `Tensor::item<T>()` in `aten/src/THC/generic/THCTensorMasked.cu`.

No changes in submodules are expected, since I pulled the latest submodules from the PyTorch master branch.

Pull Request resolved: pytorch#50141

Reviewed By: zou3519

Differential Revision: D25826104

Pulled By: ezyang

fbshipit-source-id: 80527a14786b36e4e520fdecc932e257d2520f89


Successfully merging this pull request may close these issues.

Pytorch 1.6.0 linking error when linking libtorch_cuda.dylib on macOS 10.13.6 with CUDA support

5 participants