
Conversation

@xmnlab (Contributor) commented Jun 20, 2019

Resolves #18353

CPU and GPU porting for convolution transpose 3d
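
As context (not part of the PR itself), here is a minimal sketch that exercises the operator being ported through the public torch.nn.functional.conv_transpose3d API; the shapes, hyperparameters, and tolerance below are illustrative assumptions, not values taken from this PR.

```python
import torch
import torch.nn.functional as F

# Illustrative shapes (assumptions): (N, C_in, D, H, W) input and
# (C_in, C_out, kD, kH, kW) weight for a transposed 3d convolution.
x = torch.randn(1, 4, 8, 8, 8, requires_grad=True)
weight = torch.randn(4, 2, 3, 3, 3, requires_grad=True)

# Forward pass on CPU (the CPU path covered by this port).
out_cpu = F.conv_transpose3d(x, weight, stride=2, padding=1, dilation=2)

# Backward pass touches the corresponding backward kernel via autograd.
out_cpu.sum().backward()

# Run the same op on GPU when CUDA is available and compare results.
if torch.cuda.is_available():
    out_gpu = F.conv_transpose3d(x.detach().cuda(), weight.detach().cuda(),
                                 stride=2, padding=1, dilation=2)
    print(torch.allclose(out_cpu.detach(), out_gpu.cpu(), atol=1e-4))
```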

@pytorchbot added the module: build, module: cpu, module: cublas, module: cuda, module: internals, and module: operators labels on Jun 20, 2019
@xmnlab force-pushed the 18353-convtranspose3d branch from ce75551 to f938f43 on June 20, 2019 16:13
@pytorchbot added the module: nn label on Jun 20, 2019
@xmnlab force-pushed the 18353-convtranspose3d branch from 06b62a4 to ac61d7a on June 21, 2019 00:46
@xmnlab marked this pull request as ready for review on June 21, 2019 20:02
@xmnlab (Contributor, Author) commented Jun 21, 2019

I will try to fix this conflict today

@xmnlab force-pushed the 18353-convtranspose3d branch from 66f6b98 to 7eac504 on June 22, 2019 00:49
@xmnlab (Contributor, Author) commented Jun 22, 2019

Hey @ailzhang,

Do you have any thoughts on the error in the XLA job? I renamed the function from conv_transpose2d_backward to aten_conv_transpose2d_backward, but the error seems to be related to the old name.

@ailzhang (Contributor) commented

Hi @xmnlab, yeah, although it's not hard for XLA to follow these changes, why are we adding aten_ prefixes? I think the convention used to be that functions in TH get a th prefix, but nothing has been added for ATen. Is this change intended?

@xmnlab (Contributor, Author) commented Jun 23, 2019

Hi @ailzhang, thanks for the feedback. This change was intended to apply the recommendations from another PR: #20994 (comment).

What do you recommend here?

@xmnlab (Contributor, Author) commented Jun 24, 2019

@ailzhang any recommendation here? Should I remove the aten_ prefix?

@xmnlab (Contributor, Author) commented Jun 24, 2019

@ailzhang @ezyang I will rebase the code and remove the prefix. If the prefix is necessary, we can address that in a follow-up PR.

@xmnlab force-pushed the 18353-convtranspose3d branch from f5c06d9 to 9c28e06 on June 24, 2019 18:35
@xmnlab requested a review from ezyang on June 24, 2019 21:47
@xmnlab (Contributor, Author) commented Jun 24, 2019

@ezyang it looks like this is ready for review!

@soumith changed the title from "18353 convtranspose3d" to "porting convtranspose3d to ATen" on Jun 24, 2019
@soumith added the triaged label on Jun 24, 2019
@facebook-github-bot (Contributor) left a comment

@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@xmnlab deleted the 18353-convtranspose3d branch on June 25, 2019 18:36
zdevito pushed a commit to zdevito/ATen that referenced this pull request on Jun 25, 2019
Summary:
Resolves pytorch/pytorch#18353

CPU and GPU porting for convolution transpose 3d
Pull Request resolved: pytorch/pytorch#22019

Differential Revision: D15985353

Pulled By: ezyang

fbshipit-source-id: 1c579577a32db24a1ce38f5ab9b3f1cb9c8f2a6e
@facebook-github-bot (Contributor) commented

@ezyang merged this pull request in 7daa96a.

Labels

Merged, module: build, module: cpu, module: cublas, module: cuda, module: internals, module: nn, module: onnx, module: third_party, open source, triaged

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Port SpatialFullDilatedConvolution and VolumetricFullDilatedConvolution to ATen

7 participants