Integrate float16 into data_type_transform #8619
Conversation
Thanks for the PR; this looks pretty good to me. Maybe @jacquesqiao and @dzhwinter can also have a look if they have some time. This PR is a prerequisite for the other float16 PRs, so maybe we can merge it.
Merged it for faster development. Comments are welcome and will be addressed in a future PR.
LGTM. I think we should also add FP16 to pybind so that our Python models can use this type easily.
@@ -0,0 +1 @@
data_type_transform.cc
What's the reason to make `data_type_transform.cu` an alias of `data_type_transform.cc`?
I thought that a `.cu` file was needed for `nv_library()` compilation, but it turns out not to be the case, so `data_type_transform.cc` alone would be fine. Will change this in a future PR. Thanks!
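A minimal sketch of what the simplified build rule could look like, assuming `nv_library()` accepts the `.cc` source directly as described above; the `DEPS` list here is a placeholder, not the repository's actual dependency list:

```cmake
# Hypothetical CMakeLists.txt entry once the .cu alias is removed:
# nv_library() hands the listed sources to nvcc, so the .cc file
# alone should suffice. The DEPS list is an assumption.
nv_library(data_type_transform SRCS data_type_transform.cc DEPS tensor)
```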
@@ -64,6 +60,18 @@ limitations under the License. */
namespace paddle {
namespace platform {

// Forward declare float16 for eigen.h
struct float16;
Avoid using forward declarations when possible. I think the previous order was fine; why do we make this change?
https://google.github.io/styleguide/cppguide.html#Forward_Declarations
This forward declaration is used to solve #8607. In the previous code, `#include "unsupported/Eigen/CXX11/Tensor"` caused a weird segmentation fault on GPU for TestNAN and TestINF in tensor_util_test. I couldn't figure out the exact reason, but changing this include to `#include "eigen.h"` solves the problem; it does, however, require the declaration of float16. I think it will be a future to-do item to see if we can fix the bug without the forward declaration.
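A minimal sketch of the include-ordering pattern described above, assuming a simplified header layout; the `eigen.h` path is quoted from the discussion, and the body of `float16` is a placeholder, not the real struct:

```cpp
#include <cstdint>

namespace paddle {
namespace platform {
// Forward declaration so eigen.h (included next) can name float16 in
// its Eigen trait specializations before the full definition appears.
struct float16;
}  // namespace platform
}  // namespace paddle

// Replaces the direct #include "unsupported/Eigen/CXX11/Tensor" that
// triggered the GPU segfault in tensor_util_test (see #8607).
#include "eigen.h"

namespace paddle {
namespace platform {
// Full definition; the real struct's fields and methods are elided.
struct float16 {
  uint16_t x;  // raw 16-bit storage (placeholder)
};
}  // namespace platform
}  // namespace paddle
```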
Thanks @dzhwinter for the review and the suggestion! I agree that we want to add fp16 to pybind, and that is on my to-do list. I initially planned to bind Paddle float16 to numpy float16, but it seems that pybind does not support numpy.float16. I am currently thinking about alternative ways to expose fp16 through pybind for use on the Python side.
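As a hedged illustration of one possible workaround (not Paddle's actual API): since pybind11 has no built-in caster for numpy.float16, the raw 16-bit payload could be exposed as a uint16 array and reinterpreted on the Python side. All names below (`fp16_demo`, `Float16BitsToArray`) are hypothetical:

```cpp
#include <cstdint>
#include <vector>

#include <pybind11/numpy.h>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>  // lets a Python list convert to std::vector

namespace py = pybind11;

// Expose raw fp16 bit patterns as a uint16 numpy array; the data is
// copied since no owning `base` handle is passed to array_t.
py::array_t<uint16_t> Float16BitsToArray(const std::vector<uint16_t>& bits) {
  return py::array_t<uint16_t>(static_cast<py::ssize_t>(bits.size()),
                               bits.data());
}

PYBIND11_MODULE(fp16_demo, m) {
  m.def("float16_bits_to_array", &Float16BitsToArray,
        "Raw fp16 bits as uint16; call .view(numpy.float16) in Python.");
}
```

From Python, `fp16_demo.float16_bits_to_array([0x3C00]).view(numpy.float16)` would then yield `[1.0]`, since 0x3C00 is the fp16 bit pattern of 1.0.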
fix #8607
fix #8608
This PR does the following: