Conversation


@eqy eqy commented Nov 21, 2023

@eqy eqy added labels Nov 21, 2023: module: cuda (Related to torch.cuda, and CUDA support in general), module: 64-bit (Problems related to incorrectly using 32-bit integers when 64-bit is needed, e.g., 8G tensors), open source, module: pooling, topic: not user facing (topic category)

pytorch-bot bot commented Nov 21, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/114193

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 79edabb with merge base e942675:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.


eqy commented Nov 21, 2023

CC @malfet

@eqy eqy added the ciflow/trunk (Trigger trunk jobs on your pull request) label Nov 21, 2023

@malfet malfet left a comment


LGTM


eqy commented Nov 27, 2023

@pytorchmergebot rebase

@pytorchmergebot

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

@pytorchmergebot

Successfully rebased avgpoolbackward64 onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout avgpoolbackward64 && git pull --rebase)

__global__ void avg_pool2d_backward_out_cuda_frame_nhwc(const index_t nthreads,
const scalar_t* const top_diff,
const int64_t channels, const int64_t height,
const int64_t width, const int pooled_height, const int pooled_width,
Contributor


noob question that is maybe unrelated: why are pooled_height and pooled_width const int here in avg_pool2d_backward_out_cuda_frame_nhwc but const int64_t above in avg_pool2d_backward_out_cuda_frame?


@eqy eqy Nov 27, 2023


It's not obvious to me either; I can't immediately see where the use/definition diverges between the channels-first and channels-last cases. Maybe @xwang233 can comment, as he authored the channels-last version (#35855) and the last round of 64-bit indexing fixes here.

Contributor


btw I think my question is unrelated to the initial issue, and I did not mean to block this PR from merging. Could you file an issue about the typing difference between channels-last and non-channels-last for avg_pool, though, so this doesn't get lost?


eqy commented Dec 5, 2023

@pytorchmergebot merge

@pytorchmergebot

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

@pytorchmergebot

Merge failed

Reason: 1 job failed: trunk / linux-focal-rocm5.7-py3.8 / test (default, 1, 1, linux.rocm.gpu)

Details for Dev Infra team: raised by workflow job


eqy commented Dec 14, 2023

@pytorchmergebot rebase

@pytorchmergebot

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

@pytorchmergebot

Successfully rebased avgpoolbackward64 onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout avgpoolbackward64 && git pull --rebase)


eqy commented Dec 15, 2023

@pytorchmergebot merge

@pytorchmergebot

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

guilhermeleobas pushed a commit to guilhermeleobas/pytorch that referenced this pull request Dec 18, 2023
dmenig pushed a commit to dmenig/pytorch that referenced this pull request Dec 21, 2023

Labels

ciflow/trunk (Trigger trunk jobs on your pull request), Merged, module: cuda (Related to torch.cuda, and CUDA support in general), module: pooling, module: 64-bit (Problems related to incorrectly using 32-bit integers when 64-bit is needed, e.g., 8G tensors), open source, topic: not user facing (topic category)


Development

Successfully merging this pull request may close these issues.

cannot backprop a cnn when intermediate output has size larger than 2**31

4 participants