🚀 The feature, motivation and pitch
The XPU backend natively supports more data types than CUDA. For example, XPU always supports BF16, while some CUDA operators do not. To account for this, the test infrastructure has a handle that adds BF16 to the data types we claim for XPU:

`backward_dtypesIfXPU = backward_dtypesIfCUDA + bfloat16`

However, for the FFT operators we should not claim BF16 support. The existing blanket assumption in the test infrastructure therefore causes several FFT unit test failures.
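A minimal sketch of the kind of per-operator opt-out this could use is below. The `XPU_BF16_EXCLUDED_OPS` set, the `xpu_backward_dtypes` helper, and the op names are hypothetical illustrations, not PyTorch's actual OpInfo API:

```python
# Hypothetical sketch, not the real OpInfo mechanism: derive the XPU
# backward dtypes from the CUDA ones, but let specific operators
# (e.g. FFT ops) opt out of the automatic bfloat16 addition.

import torch

# Operators that should NOT claim bfloat16 on XPU (hypothetical list).
XPU_BF16_EXCLUDED_OPS = {"fft.fft", "fft.ifft", "fft.rfft"}

def xpu_backward_dtypes(op_name, backward_dtypes_if_cuda):
    """Return the claimed XPU backward dtypes for an operator.

    By default bfloat16 is appended, since XPU supports it natively in
    more operators than CUDA; excluded ops keep the CUDA list as-is.
    """
    dtypes = set(backward_dtypes_if_cuda)
    if op_name not in XPU_BF16_EXCLUDED_OPS:
        dtypes.add(torch.bfloat16)
    return tuple(sorted(dtypes, key=str))

# Example: an FFT op keeps the CUDA dtypes, a pointwise op gains bfloat16.
cuda_dtypes = (torch.float32, torch.float64, torch.complex64)
print(xpu_backward_dtypes("fft.fft", cuda_dtypes))  # no bfloat16 added
print(xpu_backward_dtypes("add", cuda_dtypes))      # bfloat16 added
```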
Alternatives
No response
Additional context
No response