[AMP OP&Test] register fp16 and bf16 kernel for uniform_random #50993

Merged: 6 commits into PaddlePaddle:develop from dev/uniform_random_fp16_bf16 on Mar 2, 2023

Conversation

@zhiqiu (Contributor) commented Feb 28, 2023

PR types

New features

PR changes

OPs

Describe

register fp16 and bf16 kernel for uniform_random
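For reference, a minimal sketch of the kind of registration this PR adds, modeled on the PD_REGISTER_KERNEL hunks discussed below; the exact file and the final GPU-side dtype list are assumptions inferred from this PR's diffs, not copied from them:

// Sketch only (assumed final GPU registration): fp16 and bf16 are
// appended to the dtype list, so the uniform kernel is also
// instantiated for phi::dtype::float16 and phi::dtype::bfloat16.
PD_REGISTER_KERNEL(uniform_raw,
                   GPU,
                   ALL_LAYOUT,
                   phi::UniformRawKernel,
                   float,
                   double,
                   phi::dtype::float16,
                   phi::dtype::bfloat16) {}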

@paddle-bot commented Feb 28, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@ZzSean changed the title from "register fp16 and bf16 kernel for uniform_random" to "[AMP OP&Test] register fp16 and bf16 kernel for uniform_random" on Mar 1, 2023
@@ -69,4 +69,5 @@ PD_REGISTER_KERNEL(uniform_raw,
phi::UniformRawKernel,
float,
double,
phi::dtype::float16,
Contributor: This change only targets GPU; the CPU kernel does not need to be modified.

Contributor (Author): done
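To make the review point concrete, a hedged sketch of the CPU side left as it was (float and double only), per the comment above; the backend split is inferred from this thread rather than from the final diff:

// Sketch only: the CPU registration stays untouched, so low-precision
// uniform_random is available on GPU only after this PR.
PD_REGISTER_KERNEL(
    uniform_raw, CPU, ALL_LAYOUT, phi::UniformRawKernel, float, double) {}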

@@ -66,6 +66,7 @@ PD_REGISTER_KERNEL(uniform_raw_sr,
phi::sr::UniformRawKernel,
float,
double,
phi::dtype::float16,
Contributor: Only the GPU file needs to be modified; this one does not need to change.

Contributor (Author): done

def init_attrs(self):
self.attrs = {
"shape": [1000, 784],
"min": -5.0,
"max": 10.0,
"seed": 10,
Contributor: This attr does not need to be deleted, does it?

Contributor (Author): The 2.0 API no longer passes this parameter.

@@ -151,15 +154,19 @@ def setUp(self):
self.op_type = "uniform_random"
self.python_api = paddle.uniform
self.inputs = {}
self.init_dtype()
self.init_attrs()
self.outputs = {"Out": np.zeros((1000, 784)).astype("float32")}
Contributor: The astype("float32") here should also be changed to astype(self.dtype).

@zhiqiu (Author) commented Mar 1, 2023: The output in a random op's unit test does not matter; it is just fake data here.

@zhiqiu force-pushed the dev/uniform_random_fp16_bf16 branch from 82e2146 to d3da773 on March 1, 2023 05:20
PD_REGISTER_KERNEL(
uniform_raw_sr, GPU, ALL_LAYOUT, phi::sr::UniformRawKernel, float, double) {
}
PD_REGISTER_KERNEL(uniform_raw_sr,
Contributor: This does not need to be changed either.

@ZzSean (Contributor) left a comment: LGTM

@luotao1 (Contributor) left a comment: LGTM for skipif

@ZzSean merged commit 72f3445 into PaddlePaddle:develop on Mar 2, 2023
Xreki added a commit to Xreki/Paddle that referenced this pull request Apr 10, 2023
aoyulong pushed a commit that referenced this pull request Apr 11, 2023
* Fix scale kernel for low precision, cherry pick #50998.

* Fix the FP16 precision problem of add_n. (#50129)

* Change squared_l2_norm to reuse ReduceKernel, and register fp16 and bf16 kernel, which is cherry pick #48315.

* Cherry-pick the fix of MPTypeTrait in KP, which is implemented in #50993.

* Cherry-pick the multi-precision support of AdamW for bf16, #48041.

* Fix compiling error.

* Cherry-pick the fix of CubTensorReduceImpl for bfloat16 in #50993.

* Fix unittest.

---------

Co-authored-by: liuruyan <44316842+liuruyan@users.noreply.github.com>