Describe the bug
Lotus's OneHot implementation registers six type permutations, yet it is missing the most intuitive combination: float output values with integer indices.
onnxruntime\core\providers\cpu\tensor\onehot.cc
#define REG_ONE_HOT_OP(in_type, out_type, depth_type) \
  REG_TYPED_ONE_HOT_OP(in_type##_##out_type##_##depth_type, in_type, out_type, depth_type)
REG_ONE_HOT_OP(int64_t, int64_t, int64_t);
REG_ONE_HOT_OP(float, int64_t, int64_t);
REG_ONE_HOT_OP(int64_t, string, int64_t);
REG_ONE_HOT_OP(float, string, int64_t);
REG_ONE_HOT_OP(float, float, float); // added this to satisfy onnx model tests
REG_ONE_HOT_OP(int64_t, int32_t, float); // added this to satisfy onnx model tests
There is an entry with float indices and int64_t output values (which looks like a swapped typo: indices are typically integers rather than floats, and an int64_t output tensor of indices-count times depth elements is wastefully large), and there is an entry for int32_t outputs with float depth, which is odd (added only to satisfy ONNX model tests, and not a realistic case). I propose adding either of these:
REG_ONE_HOT_OP(int64_t, float, int64_t); // integer indices and depth, float value
REG_ONE_HOT_OP(uint32_t, float, uint32_t); // integer indices and depth, float value
Supporting int32 (or uint32) indices helps keep memory usage down, since a single axis will not exceed 4 billion elements (even if the tensor as a whole is larger than 4 billion elements, a single axis should not be). DML supports int32/int64 indices with float values.
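To make the memory argument concrete, a rough back-of-the-envelope comparison (assumed element counts, not measured from any model):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Bytes needed for an index tensor of n elements at a given index width.
// int32_t can address any single axis up to ~4 billion elements, yet the
// tensor occupies half the memory of its int64_t equivalent.
constexpr size_t IndexTensorBytes(size_t n, size_t index_width) {
  return n * index_width;
}

static_assert(IndexTensorBytes(1000000, sizeof(int32_t)) * 2 ==
                  IndexTensorBytes(1000000, sizeof(int64_t)),
              "int32 indices use half the memory of int64 indices");
```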
System information
OS: Windows 10, WinML 18931.1000
ONNX Runtime installed from (source or binary): source
ONNX Runtime version: 1.5
Python version: NA
Visual Studio version (if applicable): Visual Studio 2017
GCC/Compiler version (if compiling from source): NA
To Reproduce
Expected behavior
Support integer indices with floating-point output values.