
Add float16 support to cudnn softmax kernel #9269

Merged (3 commits) Mar 21, 2018

Conversation

@kexinzhao (Contributor) commented Mar 21, 2018:

fix #9270
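The PR description is just the issue reference, but the behavior it enables can be illustrated outside the framework. A common pattern for float16 softmax kernels (including cuDNN's half-precision path) is to accept fp16 inputs while accumulating in higher precision for numerical stability. The NumPy sketch below is an illustration of that pattern, not PaddlePaddle or cuDNN code; the function name and shapes are hypothetical.

```python
import numpy as np

def softmax_fp16(x):
    """Softmax over the last axis for a float16 array.

    Illustrative only: accumulates in float32 (as fp16 kernels
    typically do internally) and casts the result back to float16.
    """
    x32 = x.astype(np.float32)
    # Subtract the row max before exponentiating to avoid overflow,
    # which matters even more at fp16's narrow dynamic range.
    x32 -= x32.max(axis=-1, keepdims=True)
    e = np.exp(x32)
    y = e / e.sum(axis=-1, keepdims=True)
    return y.astype(np.float16)

logits = np.array([1.0, 2.0, 3.0], dtype=np.float16)
probs = softmax_fp16(logits)
print(probs.dtype)                      # float16
print(abs(float(probs.sum()) - 1.0) < 1e-2)
```

Keeping the reduction in float32 while storing activations in float16 is what makes such kernels usable for inference: the memory and bandwidth savings of half precision without softmax's normalization losing accuracy.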

@kexinzhao added the 预测 label (Inference; originally named "Inference", covers C-API inference issues, etc.) Mar 21, 2018
@helinwang (Contributor) left a comment:

LGTM!

@kexinzhao kexinzhao merged commit 64c5c8f into PaddlePaddle:develop Mar 21, 2018
@kexinzhao kexinzhao deleted the softmax_cudnn_fp16 branch March 21, 2018 21:22
@Xreki Xreki added this to Performance Tuning (DONE) in Inference Framework Apr 3, 2018
@Xreki Xreki moved this from Performance Tuning (DONE) to Support FP16 in Inference Framework Apr 3, 2018
Labels
预测 (Inference; originally named "Inference", covers C-API inference issues, etc.)
Development

Successfully merging this pull request may close these issues.

Need float16 support for softmax cudnn kernel
2 participants