Hi there,
Thanks a lot for releasing the code.
I have a question regarding the nonlinear layers such as GELU, softmax, and even LayerNorm (which involves an RSQRT). If I understand your code correctly, the QAT model uses the floating-point implementations of these layers. Does this mean we are not accurately simulating the quantized behaviour of these layers in the QAT model? Perhaps on hardware these layers are realized as look-up tables or have full-integer implementations, so not simulating them in QAT has minimal impact on quantized model performance? Could you clarify this a bit more?
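To make my reading of the code concrete, here is a minimal sketch of the pattern I have in mind (the `fake_quantize` helper and the scale/zero-point values are illustrative, not taken from this repo):

```python
import torch
import torch.nn.functional as F

def fake_quantize(x, scale, zero_point, qmin=-128, qmax=127):
    # Hypothetical helper: snap to an int8 grid, then dequantize back to float.
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale

# The pattern I believe the QAT model follows for a nonlinearity like GELU:
# the activations around it are fake-quantized, but GELU itself runs in float.
x = torch.randn(4, 16)
scale_in, zp_in = 0.05, 0
scale_out, zp_out = 0.02, 0

x_q = fake_quantize(x, scale_in, zp_in)    # simulated quantized input
y = F.gelu(x_q)                            # nonlinearity evaluated in float32
y_q = fake_quantize(y, scale_out, zp_out)  # simulated quantized output

# By contrast, a hypothetical int8 hardware LUT precomputes GELU for all
# 256 input codes, so inference never evaluates the float function:
codes = torch.arange(-128, 128, dtype=torch.float32)
lut = torch.clamp(
    torch.round(F.gelu((codes - zp_in) * scale_in) / scale_out) + zp_out,
    -128, 127,
)
```

If the deployed kernel uses something like the LUT above, its outputs would differ slightly from the float GELU the QAT model trains against, which is the gap I am asking about.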
Thanks a lot.