Error int8 support for jetson tx2 #26
Comments
It seems so: the Jetson TX2 doesn't support INT8 quantization (DP4A). Which CUDA and cuDNN versions do you use?
Doesn't the Jetson TX2 support nvidia-smi?
Any desktop GPU supports nvidia-smi.
The Jetson TX2 supports FP16, but it doesn't have Tensor Cores, so FP16 will not be faster than FP32 on the TX2.
The Jetson TX2 really doesn't support nvidia-smi.
Oh, then I think it doesn't support DP4A (INT8). You can only try XNOR (1-bit) quantization by training these models:
@Yinling-123 The TX2 doesn't support INT8 optimizations.
@AlexeyAB, will you share models trained with XNOR quantization with us?
Error: CUDNN_STATUS_ARCH_MISMATCH - This GPU doesn't support DP4A (INT8 weights and input)
cudnnstat = 6
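The reported `cudnnstat = 6` can be decoded against the `cudnnStatus_t` enum from `cudnn.h`, where 6 corresponds to `CUDNN_STATUS_ARCH_MISMATCH`. A minimal sketch (the numeric mapping below is an assumption based on cuDNN v5-v7 era headers; check the header shipped with your cuDNN version):

```python
# Sketch: map a raw cudnnStatus_t integer to its enum name.
# Values assumed from cudnn.h in the cuDNN v5-v7 era.
CUDNN_STATUS_NAMES = {
    0: "CUDNN_STATUS_SUCCESS",
    1: "CUDNN_STATUS_NOT_INITIALIZED",
    2: "CUDNN_STATUS_ALLOC_FAILED",
    3: "CUDNN_STATUS_BAD_PARAM",
    4: "CUDNN_STATUS_INTERNAL_ERROR",
    5: "CUDNN_STATUS_INVALID_VALUE",
    6: "CUDNN_STATUS_ARCH_MISMATCH",
    7: "CUDNN_STATUS_MAPPING_ERROR",
    8: "CUDNN_STATUS_EXECUTION_FAILED",
    9: "CUDNN_STATUS_NOT_SUPPORTED",
}

def cudnn_status_name(code: int) -> str:
    """Return the enum name for a cuDNN status code, if known."""
    return CUDNN_STATUS_NAMES.get(code, f"unknown status {code}")

print(cudnn_status_name(6))  # CUDNN_STATUS_ARCH_MISMATCH
```

`CUDNN_STATUS_ARCH_MISMATCH` means the requested operation needs a GPU architecture feature the device lacks, which is consistent with the error text above.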
So the Jetson TX2 doesn't support quantization?