
add bfloat16 to all numeric types #2770

Merged: 23 commits, May 31, 2020
Conversation

@souptc (Contributor) commented May 12, 2020

Add bfloat16 to the all_numeric_types list, so we can use it for most operators.

@souptc souptc requested a review from a team as a code owner May 12, 2020 21:28
@@ -566,7 +567,8 @@ class OpSchema final {
 "tensor(int64)",
 "tensor(float16)",
 "tensor(float)",
-"tensor(double)"};
+"tensor(double)",
+"tensor(bfloat16)"};
Member commented:
Should we also add it to `all_tensor_types`?

Contributor Author (@souptc):

Yes, I added that too.

@wschin (Contributor) commented May 13, 2020
We need some basic shape inference tests for this new type. I am not sure how hard it would be to add numerical tests (e.g., creating inputs and outputs from TensorFlow), but if it's not very hard, we should add some.

@@ -887,7 +910,8 @@ ONNX_OPERATOR_SET_SCHEMA(
 "tensor(uint32)",
 "tensor(uint64)",
 "tensor(int32)",
-"tensor(int64)"},
+"tensor(int64)",
+"tensor(bfloat16)"},
Contributor:

Not sure if bfloat16 should be added at the beginning of this list. @linkerzhang, any comment?

onnx/defs/tensor/defs.cc (outdated review thread, resolved)
@gramalingam (Contributor):

@wschin : did you mean node tests? I think shape-inference should not depend on the primitive type (type-inference does, but if it correctly works for multiple types already, I don't think it is important to add new ones). But I agree that test-cases for the runtime would be helpful.

@wschin (Contributor) commented May 14, 2020

> @wschin : did you mean node tests? I think shape-inference should not depend on the primitive type (type-inference does, but if it correctly works for multiple types already, I don't think it is important to add new ones). But I agree that test-cases for the runtime would be helpful.

Yes, I mean node tests plus a shape inference test. For the shape inference test, I think we just need one for this type.

@@ -6,6 +6,99 @@
using namespace ONNX_NAMESPACE;

namespace ONNX_NAMESPACE {

inline void unaryLogicalOpInference_opset12(InferenceContext& ctx) {
Contributor:

Is this function used, or is it a copy-paste error?

Contributor Author (@souptc):

Removed.

@gramalingam (Contributor):

@souptc: thanks for the PR! As discussed above, is there any way to add some test data for bfloat16 for at least one or two ops, to verify the representation? Maybe casting from float32 to bfloat16 and vice versa?

Contributor Author (@souptc) left a comment:

Added cast test cases from float32 to bfloat16 and from bfloat16 to float32.
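The round trip those cast test cases exercise can be sketched in NumPy. A bfloat16 value is just the high 16 bits of a float32, so float32→bfloat16 is a truncation and bfloat16→float32 is a zero-extension. The helper names below are illustrative, not the PR's actual test code:

```python
import numpy as np

def float32_to_bfloat16_bits(x):
    """Truncate float32 values to their high 16 bits (bfloat16, round toward zero)."""
    as_u32 = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (as_u32 >> 16).astype(np.uint16)

def bfloat16_bits_to_float32(bits):
    """Zero-extend bfloat16 bit patterns back into float32."""
    return (np.asarray(bits, dtype=np.uint16).astype(np.uint32) << 16).view(np.float32)

x = np.array([1.0, -2.5, 3.141592653589793], dtype=np.float32)
roundtrip = bfloat16_bits_to_float32(float32_to_bfloat16_bits(x))
# 1.0 and -2.5 survive exactly; pi loses its low mantissa bits (-> 3.140625).
```

Truncation drops the low 16 mantissa bits, so the round trip is lossy for most values (roughly three decimal digits of precision survive) but exact for values such as 1.0 or -2.5 whose mantissa already fits in 7 bits.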

@gramalingam (Contributor):

@wschin : do you have any further comments or concerns? Thanks.

u'NaN', u'INF', u'+INF', u'-INF'], dtype=np.float32)
little_endian = sys.byteorder == 'little'
np_uint16_view = np_fp32.view(dtype=np.uint16)
# Keep the high 16 bits of each float32 element (the bfloat16 bit pattern).
np_bfp16 = np_uint16_view[1::2] if little_endian else np_uint16_view[0::2]
Contributor:

So, I guess this trick works correctly for NaN, INF, and -INF (since you have those values in the test case)?

Contributor Author (@souptc):

Yes, I have the test data there, and it is done by the standard conversion between float and bfloat16. I couldn't find a definition of NaN for bfloat16, since bfloat16 is not an IEEE format, but since TF applies the same conversion to all float32 numbers, I think it should be correct.
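Because bfloat16 shares float32's sign bit and 8-bit exponent, truncating to the high half preserves the bit patterns of +INF and -INF exactly, and a quiet NaN stays a NaN because its leading mantissa bit sits in the surviving top bits. A minimal sketch of that check, using the same endianness-aware view trick as the PR's test generator (variable names here are illustrative):

```python
import sys
import numpy as np

np_fp32 = np.array([u'NaN', u'INF', u'-INF'], dtype=np.float32)
# View the float32 buffer as pairs of uint16 and keep the high half of
# each element, i.e. the bfloat16 bit pattern.
np_uint16_view = np_fp32.view(dtype=np.uint16)
bf16_bits = np_uint16_view[1::2] if sys.byteorder == 'little' else np_uint16_view[0::2]
# Zero-extend back to float32 to confirm the special values survive truncation.
back = (bf16_bits.astype(np.uint32) << 16).view(np.float32)
```

Subnormal float32 values are the edge case this does not preserve faithfully: their low mantissa bits are simply dropped, so very small magnitudes flush toward zero.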

@gramalingam (Contributor):

@linkerzhang @postrational @wschin : is this okay to merge in? Thanks!

@gramalingam merged commit a82c6a7 into onnx:master on May 31, 2020
jcwchen pushed a commit to jcwchen/onnx that referenced this pull request Sep 23, 2020
* add bfloat16 to all numeric types

* add bfloat to more ops

* Bump versions

* add opset13 into supported range

* fix build err

* re-generate docs

* fix ci break

* add cast bfloat/float test cases

* remove useless code

* fix the export types

* fix the export types in script

* add doc

* fix the py27 error

* fix python style

* update docs

* add type in comments

* fix mypy error

* fix more mypy error

* bypass the unicode check

Co-authored-by: Cheng Tang <chenta@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
5 participants