[Hackathon No.60] Improve FP16/BF16 unit tests for the prelu, clip_by_norm, and multi_dot operators #52666
Conversation
Your PR was submitted successfully. Thank you for contributing to the open-source project!
python/paddle/nn/clip.py (outdated)
@@ -63,7 +63,9 @@ def clip_by_norm(x, max_norm, name=None):
        return _legacy_C_ops.clip_by_norm(x, 'max_norm', max_norm)

    helper = LayerHelper("clip_by_norm", **locals())
-   check_variable_and_dtype(x, 'X', ['float32', 'float16'], 'clip_by_norm')
+   check_variable_and_dtype(
+       x, 'X', ['float16', 'float32', 'float64', 'uint16'], 'clip_by_norm'
+   )
This op doesn't support float64, does it?
CI reported that the double type is not supported, so I removed float64.
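For context on why `'uint16'` appears in the dtype list: Paddle carries bfloat16 tensors as uint16, since a bfloat16 value is just the high 16 bits of the IEEE-754 float32 bit pattern. A minimal NumPy sketch of that correspondence (the helper names here are illustrative, not Paddle APIs):

```python
import numpy as np

def float32_to_bf16_bits(x):
    """Pack float32 values into the uint16 layout used for bfloat16:
    bfloat16 is the high 16 bits of the float32 bit pattern."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bf16_bits_to_float32(b):
    """Expand the uint16 pattern back to float32 (low mantissa bits become zero)."""
    return (np.asarray(b, dtype=np.uint16).astype(np.uint32) << 16).view(np.float32)

x = np.array([1.0, 3.14159, -0.5], dtype=np.float32)
bits = float32_to_bf16_bits(x)          # dtype uint16, as in the check above
roundtrip = bf16_bits_to_float32(bits)  # 1.0 and -0.5 survive exactly; 3.14159 loses low bits
```

This is why a BF16 tensor passes a `check_variable_and_dtype` list that names `'uint16'` rather than a bfloat16 dtype string.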
def get_dtype(self):
    self.np_dtype = np.uint16
    return "float32"
This sets self.dtype to float32; it should be uint16.
Fixed.
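The point of the fix: the test harness derives `self.dtype` from whatever `get_dtype()` returns, so a BF16 test must report `"uint16"`. A minimal sketch with a dummy stand-in (`DummyOpTest` and its `setUp` contract are assumptions for illustration; the real `OpTest` lives in Paddle's test framework):

```python
import numpy as np

class DummyOpTest:
    """Hypothetical stand-in for paddle's OpTest harness: assume that
    setUp() records whatever get_dtype() returns as self.dtype."""
    def setUp(self):
        self.dtype = self.get_dtype()

class TestClipByNormBF16(DummyOpTest):
    def get_dtype(self):
        # BF16 tensors are carried as uint16, so report "uint16" here.
        # Returning "float32" (as in the snippet above) would make the
        # harness treat this as an ordinary FP32 case.
        self.np_dtype = np.uint16
        return "uint16"

t = TestClipByNormBF16()
t.setUp()
```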
place,
['X', 'Alpha'],
'Out',
max_relative_error=max_relative_error,
For the backward check, try the default value first and only adjust it if the test can't pass; the same applies to float16.
Fixed.
place,
['X', 'Alpha'],
'Out',
max_relative_error=max_relative_error,
Writing it this way doesn't feel right; I suggest simply dropping the max_relative_error setting and using the default value.
Updated to use the default value.
self.check_grad_with_place(self.place, ['x0'], 'Out')
self.check_grad_with_place(self.place, ['x1'], 'Out')
except:
self.check_grad_with_place(self.place, ['x0'], 'Out', atol=0.2)
Judging from the screenshot in the PR, the difference is fairly large. I suggest either increasing numeric_grad_delta or writing a user_defined_grad yourself.
Updated: added numeric_grad_delta and set it to 0.01 in the test; with 0.005 or 0.008 the test failed.
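Why a larger numeric_grad_delta can help: the numeric gradient is a central difference of outputs that are rounded to bfloat16 precision, so if the delta is too small, the perturbation drowns in rounding noise. A self-contained NumPy illustration (`to_bf16` here simulates bfloat16 storage by truncating float32 bits; it is not a Paddle API, and x² stands in for the real op):

```python
import numpy as np

def to_bf16(x):
    """Round to bfloat16 precision by zeroing the low 16 bits of float32."""
    bits = np.atleast_1d(np.asarray(x, dtype=np.float32)).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

def numeric_grad(f, x, delta):
    """Central difference computed from bfloat16-rounded outputs."""
    hi = to_bf16(f(x + delta))
    lo = to_bf16(f(x - delta))
    return float((hi - lo)[0]) / (2 * delta)

f = lambda v: v * v        # toy op; the exact gradient at x is 2*x
x, exact = 1.5, 3.0

def rel_err(delta):
    return abs(numeric_grad(f, x, delta) - exact) / exact

# A tiny delta perturbs f(x) by less than one bf16 ulp near 2.25,
# so the difference quotient is dominated by quantization error.
err_small = rel_err(1e-4)
# A larger delta moves the outputs across many representable values.
err_large = rel_err(0.05)
```

The same trade-off applies in OpTest: a larger delta adds truncation error of the finite difference itself, so the value (0.01 here) has to be tuned per op.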
LGTM
LGTM for dtype registration
…lePaddle#52666) * Add prelu, clip_by_norm, multi_dot tests * Fix code * Fix code
PR types
Others
PR changes
Others
Describe
Improve FP16/BF16 unit tests for the prelu, clip_by_norm, and multi_dot operators.
Docs PR: PaddlePaddle/docs#5789
The multi_dot grad test error is around 0.2.