Create IsNaN-20 and IsInf-20 #5583
Conversation
About IsInf: e4m3fnuz has no infinite value. Should we exclude it or return false? The same goes for e5m2fnuz.
I think we should return False. Do we know how other frameworks handle them?
Only e5m2 has infinite values, and CUDA uses this type to compute the gradient. The only operator available is Gemm, so there is no division involved, and the default behaviour is to saturate the value to the maximum. I assume NaN cannot appear with this operation alone.
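The saturation behaviour described above can be sketched in a few lines. This is an illustrative sketch, not the actual cast implementation: 57344 is the largest finite float8e5m2 value, and a real cast would also round/quantize, which is not modelled here.

```python
import numpy as np

# Sketch of saturating overflow behaviour: float8e5m2's largest finite
# value is 57344, so a saturating cast clamps overflowing magnitudes to
# the representable range instead of producing +/-inf.
F8E5M2_MAX = 57344.0  # largest finite float8e5m2 value

def saturating_clamp(x: np.ndarray) -> np.ndarray:
    # A real cast would also round to the nearest representable value;
    # this only models the range clamp.
    return np.clip(x, -F8E5M2_MAX, F8E5M2_MAX)
```

Under this semantics, overflow never yields an infinity, which is why IsInf is not expected to fire after such a cast.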
I agree that for the float8 variants that do not have an infinity value, IsInf should always return false. (While this is a redundant op for such types, it may still be helpful for defining functions that work generically for all floating-point types.)
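The semantics agreed on above can be sketched as a small reference function. This is a hedged sketch, not ONNX's reference implementation: `dtype_has_inf` is a hypothetical flag standing in for a lookup on the actual element type (it would be False for float8e4m3fn, e4m3fnuz, and e5m2fnuz, True for e5m2 and the wider float types).

```python
import numpy as np

def isinf_reference(x: np.ndarray, dtype_has_inf: bool,
                    detect_positive: bool = True,
                    detect_negative: bool = True) -> np.ndarray:
    """Sketch of IsInf-20: all-False when the element type has no inf encoding."""
    if not dtype_has_inf:
        # Types without an infinity representation can never contain inf.
        return np.zeros(x.shape, dtype=bool)
    result = np.zeros(x.shape, dtype=bool)
    if detect_positive:
        result |= np.isposinf(x)
    if detect_negative:
        result |= np.isneginf(x)
    return result
```

Defining the op this way keeps it callable on every floating-point type, which is the generic-function convenience mentioned above.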
I may need to exclude test_isinf_float16_cpu (but it was added already?)
Why have the other tests for new and updated ops been passing? And why was this one passing before?
It should have been skipped (and should pass) already. I am not sure.
Which test failure are you referring to? The failure in the latest commit seems to be that the node backend data is inconsistent. I would suggest you recreate the node backend data from scratch (by manually running the command) again.
@jcwchen Looks like the new test is not skipped automatically? https://dev.azure.com/onnx-pipelines/onnx/_build/results?buildId=52121&view=logs&j=825fcbdb-febe-56c2-0b31-e8b200b321eb&t=46007089-1c35-5679-e243-e8ae35eaeec6&l=656
I thought https://github.com/onnx/onnx/blob/main/onnx/test/test_backend_onnxruntime.py should be covered by ORT_MAX_ONNX_OPSET_SUPPORTED_VERSION, but it seems it isn't, and we need to manually filter new versions as in https://github.com/onnx/onnx/blob/main/onnx/test/test_backend_onnxruntime.py#L234-L254. It might make sense to enable ORT_MAX_ONNX_OPSET_SUPPORTED_VERSION for test_backend_onnxruntime.py as well to save some effort. cc @xadupre for input. Thanks!
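The kind of opset-based filtering discussed above can be sketched with a standard `unittest` skip decorator. This is a hypothetical pattern, not ONNX's actual test harness: `MAX_SUPPORTED_OPSET` and `skip_if_opset_newer` are illustrative names standing in for whatever limit ORT_MAX_ONNX_OPSET_SUPPORTED_VERSION would supply.

```python
import unittest

MAX_SUPPORTED_OPSET = 19  # assumed runtime limit, illustrative only

def skip_if_opset_newer(opset: int):
    # Skip the decorated test/class when its opset exceeds the runtime's limit.
    return unittest.skipIf(
        opset > MAX_SUPPORTED_OPSET,
        f"opset {opset} exceeds max supported opset {MAX_SUPPORTED_OPSET}",
    )

@skip_if_opset_newer(20)
class TestIsInf20(unittest.TestCase):
    def test_isinf(self):
        self.assertTrue(True)  # body irrelevant; the class is skipped

@skip_if_opset_newer(19)
class TestIsInf19(unittest.TestCase):
    def test_isinf(self):
        self.assertTrue(True)  # runs: opset 19 is within the limit
```

With a gate like this applied automatically, new-opset tests such as the IsInf-20 one would be skipped without manual filtering.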
Description
Create IsNaN-20 and IsInf-20 to include all floating point types. Note that even though float8 types like e4m3fnuz do not have a representation for infinity, we still define the operators on these types so that they can be conveniently applied to all floating point types.
Checklist
- defs.cc
- old.cc
- onnx/defs/operator_sets.h
- test_backend_onnxruntime.py
Motivation and Context
Fixes #5260