
[PHI] transpose2_grad op migration #46139

Merged
merged 22 commits into from Oct 10, 2022

Conversation

paulinagacek
Contributor

PR types

Others

PR changes

Others

Describe

  • migrate transpose2_grad operator to phi
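For context, migrating an operator to PHI means reimplementing it against PHI's functional kernel signature instead of the old Fluid OpKernel class. The declaration below is a sketch of the expected shape of such a kernel, based on PHI's usual (Context, inputs, attributes, outputs) parameter convention; the exact parameter names are assumptions, not quoted from this PR:

```cpp
// Hypothetical sketch of a PHI-style gradient kernel declaration.
// dev_ctx: device/backend context; out_grad: gradient of the op's output;
// axis: the permutation used by the forward transpose; x_grad: result.
template <typename T, typename Context>
void TransposeGradKernel(const Context& dev_ctx,
                         const DenseTensor& out_grad,
                         const std::vector<int>& axis,
                         DenseTensor* x_grad);
```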

@paddle-bot

paddle-bot bot commented Sep 16, 2022

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first; see the Paddle CI Manual for details.

@CLAassistant

CLAassistant commented Sep 16, 2022

CLA assistant check
All committers have signed the CLA.

@paddle-bot paddle-bot bot added contributor External developers status: proposed labels Sep 16, 2022
@paulinagacek paulinagacek marked this pull request as ready for review September 19, 2022 16:53
@paulinagacek
Contributor Author

@piotrekobi can you review please?

Contributor

@piotrekobi piotrekobi left a comment

Please make some changes for clarity and style:

  1. Remove the phi::funcs and phi:: prefixes wherever possible (everywhere apart from the kernel registration at the end); inside the phi namespace, keep only funcs::.
  2. Move the transpose_grad_kernel.h include to the top, with a blank line separating it from the other includes (as in https://github.com/PaddlePaddle/Paddle/pull/46051/files).
  3. Rename the kernel file to transpose2_grad_kernel.cc.

@piotrekobi piotrekobi self-requested a review September 22, 2022 10:43
piotrekobi
piotrekobi previously approved these changes Sep 22, 2022
Contributor

@piotrekobi piotrekobi left a comment

LGTM

@piotrekobi
Contributor

@chenwhql Please review


} // namespace phi

PD_REGISTER_KERNEL(
Contributor

The transpose2 operator in Fluid is renamed to transpose in PHI, so the registered name should be transpose_grad and this file should be renamed to transpose_grad_kernel.cc. Also, CI-Coverage is failing, possibly because the registered name is wrong.
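For reference, after the rename the registration at the end of the file would plausibly take the shape sketched below. This follows the usual PD_REGISTER_KERNEL(name, backend, layout, kernel_fn, dtypes...) pattern; the specific backend, layout, and dtype arguments here are assumptions, not quoted from this PR:

```cpp
// Hypothetical sketch: register under the PHI name "transpose_grad",
// not the old Fluid name "transpose2_grad".
PD_REGISTER_KERNEL(transpose_grad,            // kernel name looked up by PHI
                   OneDNN,                    // backend (assumed)
                   ONEDNN,                    // data layout (assumed)
                   phi::TransposeGradKernel,  // kernel function
                   float,
                   phi::dtype::bfloat16) {}
```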

@paulinagacek
Contributor Author

@chenwhql @YuanRisheng the PR-CI-Windows tests give strange results; could you please take a look?

std::vector<int> reversed_axis(axis);
int ndims = axis.size();
if (ndims == 1) {
Copy(dev_ctx, out_grad, out_grad.place(), false, x_grad);
Contributor

I found that phi::Copy and framework::TensorCopy behave inconsistently in the mkldnn scenario. You can keep using framework::TensorCopy for the time being; I will fix the phi::Copy problem.

@chenwhql
Contributor

@paulinagacek the PR-CI-Windows problem may be caused by the Copy difference

@@ -1076,9 +1076,7 @@ class TransposeOneDNNHandler {
std::shared_ptr<dnnl::memory> AcquireDstMemory(DenseTensor* output,
Place place) {
auto dst_md = Axis2MemoryDesc(dims_, axis_);
output->Resize(make_ddim(dims_));
Contributor Author

I removed the Resize() call and it now seems to work properly. @chenwhql @YuanRisheng please review

Contributor

Done, please take a look at the Coverage CI @paulinagacek

Contributor

I have checked the Coverage CI on my machine and the code is exercised. I have marked it as successful.

@YuanRisheng YuanRisheng merged commit e3407a8 into PaddlePaddle:develop Oct 10, 2022
Silv3S pushed a commit to Silv3S/Paddle that referenced this pull request Oct 10, 2022
* op migrated, Copy(OneDNNContext, ...) added

* mutable_data & op registration in fluid removed

* refactoring

* OneDNNGetDataType to uppercase

* missing cpu check added, handler moved to .h file

* name changed to transpose_grad

* Copy changed back to TensorCopy

* Resizing corrected, Copy(OneDNNContext) removed
Silv3S added a commit to Silv3S/Paddle that referenced this pull request Oct 11, 2022
phlrain pushed a commit that referenced this pull request Oct 13, 2022
* Revert pool+grad oneDNN kernel conversion (#45989)

* [PHI] transpose2_grad op migration (#46139)


Co-authored-by: Piotr Paturej <48731682+piotrekobi@users.noreply.github.com>
Co-authored-by: Paulina Gacek <paulina.gacek@intel.com>
Labels
contributor External developers Intel
6 participants