add bfloat16 to all numeric types #2770
Conversation
onnx/defs/schema.h
Outdated
@@ -566,7 +567,8 @@ class OpSchema final {
     "tensor(int64)",
     "tensor(float16)",
     "tensor(float)",
-    "tensor(double)"};
+    "tensor(double)",
+    "tensor(bfloat16)"};
Should we also add it to "all_tensor_types" too?
Yes, I added that too.
We need some basic shape-inference tests for this new type. I am not sure how hard it would be to add numerical tests (e.g., creating inputs and outputs from TensorFlow), but if it's not very hard, we should add some.
@@ -887,7 +910,8 @@ ONNX_OPERATOR_SET_SCHEMA(
     "tensor(uint32)",
     "tensor(uint64)",
     "tensor(int32)",
-    "tensor(int64)"},
+    "tensor(int64)",
+    "tensor(bfloat16)"},
Not sure if bfloat16 should be added at the beginning of this list. @linkerzhang, any comment?
@wschin: did you mean node tests? I think shape inference should not depend on the primitive type (type inference does, but if it already works correctly for multiple types, I don't think it is important to add new ones). But I agree that test cases for the runtime would be helpful.
Yes, I mean node tests plus a shape-inference test. For shape inference, I think we just need one test for this type.
onnx/defs/logical/old.cc
Outdated
@@ -6,6 +6,99 @@
 using namespace ONNX_NAMESPACE;

+namespace ONNX_NAMESPACE {
+
+inline void unaryLogicalOpInference_opset12(InferenceContext& ctx) {
Is this function used, or is it a copy-paste error?
Removed.
@souptc: thanks for the PR! As discussed above, is there any way to add some test data for bfloat16 for at least one or two ops, to verify the representation? Maybe casting from float32 to bfloat16 and vice versa?
Added cast test cases from float32 to bfloat16 and from bfloat16 to float32.
@wschin: do you have any further comments or concerns? Thanks.
                    u'NaN', u'INF', u'+INF', u'-INF'], dtype=np.float32)
little_endian = sys.byteorder == 'little'
np_uint16_view = np_fp32.view(dtype=np.uint16)
np_bfp16 = np_uint16_view[1::2] if little_endian else np_uint16_view[0::2]
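The snippet above relies on bfloat16 being exactly the upper 16 bits of an IEEE float32 (same sign bit, same 8-bit exponent, truncated mantissa). A self-contained sketch of that truncation trick, with illustrative names not taken from the PR:

```python
import sys
import numpy as np

def float32_to_bfloat16_bits(values):
    """Truncate float32 values to bfloat16 by keeping the high 16 bits.

    bfloat16 shares float32's sign bit and 8-bit exponent, so dropping the
    low 16 mantissa bits is a valid (round-toward-zero) conversion.
    Returns the raw bfloat16 bit patterns as uint16.
    """
    np_fp32 = np.asarray(values, dtype=np.float32)
    # Reinterpret each 4-byte float as two consecutive 2-byte integers.
    np_uint16_view = np_fp32.view(dtype=np.uint16)
    # On little-endian machines the high half of each float32 is the second
    # uint16; on big-endian machines it is the first.
    little_endian = sys.byteorder == 'little'
    return np_uint16_view[1::2] if little_endian else np_uint16_view[0::2]

bits = float32_to_bfloat16_bits([1.0, -2.5])
print(bits)
```

Stepping by two through the uint16 view is what selects the high (exponent-carrying) half of every float32, which is why the starting offset must depend on the machine's byte order.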
So, I guess this trick works correctly for NaN, INF, -INF (since you have those values in the test-case)?
Yes, I have the test data there, and it uses the standard conversion between float and bfloat16. I didn't find a definition of NaN for bfloat16, since bfloat16 is not an IEEE format, but because TensorFlow applies the same conversion to all float32 values, I think it should be correct.
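To illustrate why the special values survive: bfloat16 keeps float32's full 8-bit exponent, so truncating the low 16 bits preserves the NaN and infinity encodings, and the reverse conversion is just zero-padding those bits back. A hedged round-trip sketch (helper names are mine, not from the PR):

```python
import sys
import numpy as np

def bfloat16_bits_to_float32(bits):
    """Expand raw bfloat16 bit patterns (uint16) back to float32 by
    zero-filling the low 16 mantissa bits."""
    bits = np.asarray(bits, dtype=np.uint16)
    return (bits.astype(np.uint32) << 16).view(np.float32)

def round_trip(values):
    """float32 -> bfloat16 bits (high-half truncation) -> float32."""
    fp32 = np.asarray(values, dtype=np.float32)
    u16 = fp32.view(np.uint16)
    high = u16[1::2] if sys.byteorder == 'little' else u16[0::2]
    return bfloat16_bits_to_float32(high)

out = round_trip([np.nan, np.inf, -np.inf, 1.0])
# NaN stays NaN, infinities and exactly-representable values round-trip
# unchanged, because sign + exponent (and the NaN-signalling mantissa
# bit) all live in the high 16 bits.
```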
@linkerzhang @postrational @wschin: is this okay to merge in? Thanks!
* add bfloat16 to all numeric types
* add bfloat to more ops
* Bump versions
* add opset13 into supported range
* fix build err
* re-generate docs
* fix ci break
* add cast bfloat/float test cases
* remove useless code
* fix the export types
* fix the export types in script
* add doc
* fix the py27 error
* fix python style
* update docs
* add type in comments
* fix mypy error
* fix more mypy error
* bypass the unicode check

Co-authored-by: Cheng Tang <chenta@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
Add bfloat16 to the all_numeric_types list, so we can use it for most operators.