Fix mixed precision output type to original type #11142
Conversation
Hi, does this mean that after AMP the output op can have different dtypes for its inputs, weights, and dst? If so, I'm afraid my bfloat16 support for the DNNL BYOC (#11111) might conflict with this: I implemented bfloat16 support under the assumption that the dtypes of the inputs, weights, and dst are either all float32 or all bfloat16.

I think this PR changes only the model output dtype, so it shouldn't affect other cases. Beyond that, I'd suggest 1) exposing this behavior to users by adding a config option to PassContext, and 2) adding a unit test.

OK, thanks for the clarification.

Sorry, I've been busy; I'll take a look tomorrow.
@AndrewZhaoLuo @comaniac, I've exposed this behavior to users by adding a config option to PassContext, and added a unit test.
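For readers following the thread, the resulting usage would look roughly like the sketch below. This is a hedged illustration only: the exact config key name (`relay.ToMixedPrecision.keep_orig_output_dtype`) and the helper function are inferred from this discussion, not verified against the merged PR.

```python
# Sketch (assumes TVM is installed; the config key below is an assumption
# based on this thread, not a verified name from the merged PR).
import tvm
from tvm import relay


def to_mixed_precision(mod, keep_orig_output_dtype=True):
    """Run the AMP pass, optionally keeping the model output's original dtype."""
    with tvm.transform.PassContext(
        opt_level=3,
        config={
            # New PassContext option discussed above: when True, the final
            # model output is cast back to its original dtype (e.g. float32)
            # even though intermediate ops run in float16.
            "relay.ToMixedPrecision.keep_orig_output_dtype": keep_orig_output_dtype
        },
    ):
        return relay.transform.ToMixedPrecision("float16")(mod)
```

Since the flag only affects the model output, intermediate dtype combinations (such as the all-float32 or all-bfloat16 assumption in the DNNL BYOC work) should be unchanged.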

Looks like CI is flaky; please push an empty commit to restart CI. @gayatripk1

Done

Waiting for more reviews?
Thanks @gayatripk1 @AndrewZhaoLuo |
Thanks for contributing to TVM! Please refer to the guidelines at https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from reviewers by @-mentioning them in the pull request thread.