[ci][onnx] Relax more test tolerances#11152
driazati wants to merge 1 commit into apache:main from driazati:onn
Conversation
It has been a while since this PR was updated, @areusch please leave a review or address the outstanding comments. @driazati if this PR is still a work in progress, please convert it to a draft until it is ready for review.
Following on #11042, this changes tolerances to fix some other ONNX test failures that have come up in the past several days
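To make the change concrete: a sketch of the kind of tolerance relaxation this PR is about, using `np.testing.assert_allclose` as the comparison (the helper name and the specific tolerance values below are illustrative, not this PR's actual diff):

```python
import numpy as np

# Hypothetical helper mirroring the kind of check TVM's ONNX tests perform;
# the default and relaxed tolerances here are illustrative only.
def check_output(actual, expected, rtol=1e-5, atol=1e-5):
    # Passes when |actual - expected| <= atol + rtol * |expected| elementwise.
    np.testing.assert_allclose(actual, expected, rtol=rtol, atol=atol)

expected = np.array([1.0, 2.0, 3.0])
actual = expected + 5e-5  # small numeric drift, like a flaky CI run

failed_at_default = False
try:
    check_output(actual, expected)  # default rtol/atol: too strict here
except AssertionError:
    failed_at_default = True

check_output(actual, expected, atol=1e-4)  # relaxed atol: now passes
```

The trade-off the thread discusses is exactly this: raising `atol`/`rtol` silences flaky failures, but also weakens the test's ability to catch real regressions.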
cc @altanh can you take a look?
It has been a while since this PR was updated, @altanh @areusch please leave a review or address the outstanding comments. @driazati if this PR is still a work in progress, please convert it to a draft until it is ready for review.
I'm suspicious of a single value being ~1e-2 off while the rest are below 1e-5... tricky. We could relax the tolerances to that level, but that's quite a reduction in what the test can catch.
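A quick way to tell an isolated outlier apart from broad numeric drift is to look at the error distribution rather than a single pass/fail check. The numbers below are made up to mirror the ~1e-2-vs-below-1e-5 pattern described above:

```python
import numpy as np

expected = np.linspace(1.0, 2.0, 8)
actual = expected + 1e-6   # most elements: tiny error, well under 1e-5
actual[3] += 1e-2          # one element ~1e-2 off, like the suspicious value

abs_err = np.abs(actual - expected)

print("max abs err:   ", abs_err.max())       # dominated by the outlier
print("median abs err:", np.median(abs_err))  # the typical error is ~1e-6
print("outlier indices:", np.flatnonzero(abs_err > 1e-4))  # the bad index
```

If the median error is tiny but the max is large, relaxing the global tolerance to cover the max (as the comment notes) hides a lot; the outlier more likely points at a specific op or codepath worth investigating.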
Closing in favor of #11376, agreed that the tolerances are too high here (and not even enough for all the failures I've seen, some are off by
Yeah, for quantized ops, getting accuracy aligned with frameworks is challenging, due to slight differences in how the low-level numerics are done (fixed point vs fp32, etc.). In the PyTorch frontend, we skip the accuracy check entirely for quantized ops / models.
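The fixed-point-vs-fp32 point can be seen with a minimal int8 quantize/dequantize round trip: the quantization step itself introduces error up to half a step (`scale / 2`), which dwarfs any fp32-level tolerance. The scale and inputs below are arbitrary illustrative values:

```python
import numpy as np

scale, zero_point = 0.05, 0  # hypothetical quantization parameters
x = np.array([0.33, -0.71, 1.26], dtype=np.float32)

# Quantize to int8, then dequantize back to float.
q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
x_hat = (q.astype(np.float32) - zero_point) * scale

err = np.abs(x_hat - x)
print(err)  # bounded by scale / 2 = 0.025, far above a 1e-5 tolerance
```

So for quantized models, a small `rtol`/`atol` comparison against an fp32 reference can never be made to pass reliably, which is why skipping (or fundamentally rethinking) the accuracy check is reasonable there.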
cc @areusch