Add qint8 and qint16 support for FillOp #41421
Conversation
Can you make the test also run in eager mode? Also, we are seeing the following error in both graph mode and eager mode (after removing the session-related parts of the test):
a6a2048 to 53945e1 (compare)
Thanks @mihaimaruseac, the PR has been updated with the session part removed. Also, I cast the qint types to int32 before comparing them with numpy; I think this will resolve the error raised. Please take a look and let me know if the issue still persists.
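A rough sketch of the comparison approach described above, using plain numpy (the `assert_fill_matches` helper is hypothetical; `np.int8` stands in for the storage type backing `qint8`, since quantized tensors cannot be compared to numpy arrays directly):

```python
import numpy as np

def assert_fill_matches(filled, expected):
    # Cast both sides to int32 before comparing, as described in the
    # comment above, so the quantized storage type does not get in the way.
    np.testing.assert_array_equal(
        filled.astype(np.int32), expected.astype(np.int32))

# np.int8 arrays stand in for the result of filling a qint8 tensor.
a = np.full((2, 3), 5, dtype=np.int8)
b = np.full((2, 3), 5, dtype=np.int8)
assert_fill_matches(a, b)
```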
This still fails internally
Thanks @mihaimaruseac for the help. I think this is likely caused by the dtype (or the graph) being reused in the internal test. I have split the tests into two to avoid this. The PR has been updated; I assume it will fix the error. Please give it another try, and sorry for taking so long.
Thank you for the prompt response and sorry that the mismatch between internal and open source tests makes this harder to merge.
Thanks @mihaimaruseac. I think the error is caused by an incorrect fast tensor <=> np mapping, which appears to be an unrelated bug. I have created PR #41677 to address it.
This PR tries to address the issue raised in #26069, where qint8 and qint16 were not supported for FillOp. This PR adds qint8 and qint16 support for FillOp. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
```
+ elif dtype.is_quantized:
+   zero = np.zeros([]).astype(dtype.as_numpy_dtype)
```
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
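As a rough illustration of what the diff above does, here is a numpy-only sketch (the `QUANTIZED_STORAGE` mapping and `make_zero` helper are hypothetical; they mimic `dtype.as_numpy_dtype`, assuming qint8 and qint16 are backed by int8 and int16 storage):

```python
import numpy as np

# Hypothetical stand-in for dtype.as_numpy_dtype on quantized dtypes:
# qint8 is stored as int8, qint16 as int16.
QUANTIZED_STORAGE = {"qint8": np.int8, "qint16": np.int16}

def make_zero(dtype_name):
    # Build a scalar zero of the underlying storage type, mirroring
    # np.zeros([]).astype(dtype.as_numpy_dtype) in the diff above.
    return np.zeros([]).astype(QUANTIZED_STORAGE[dtype_name])

print(make_zero("qint8").dtype)   # int8
print(make_zero("qint16").dtype)  # int16
```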
…ternal tests and cause testing to fail. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
0c925df to 52e516c (compare)
@mihaimaruseac With PR #41677 merged in, I have rebased and updated this PR. I think the internal test issue has likely been solved. Can you give it a try? Thanks a lot for the help throughout the process.
This PR tries to address the issue raised in #26069 (comment), where
qint8 and qint16 were not supported for FillOp.
This PR adds qint8 and qint16 support for FillOp.
Signed-off-by: Yong Tang yong.tang.github@outlook.com