Hello Tim,

First of all, kudos for your work on bitsandbytes and, more generally, on making fine-tuning of LLMs accessible to regular folks.
I have two basic questions about 8-bit optimizers, dynamic tree quantization specifically.
If I understand correctly, for each single number you quantize using dynamic tree quantization, you have to pick where the indicator bit will be placed - that seems like a separate optimization problem in itself. I wonder how it is done in practice: is it predefined / selected upfront? Or is it established per 2048-element block?
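To make my mental model concrete, here is a small decoding sketch of how I currently imagine an 8-bit dynamic tree value is read (this is purely my own reading of the paper; the function name, the base-10 exponent, and the normalization by `2^k - 1` are all my assumptions):

```python
def decode_dynamic_tree(byte: int) -> float:
    """My guess at decoding one 8-bit dynamic-tree-quantized value."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    # Count zeros after the sign bit; my understanding is that this
    # zero-run length is the (base-10) exponent, so the indicator bit's
    # position follows from the value's magnitude rather than being
    # optimized separately.
    e, pos = 0, 6
    while pos >= 0 and not (byte >> pos) & 1:
        e += 1
        pos -= 1
    if pos < 0:          # everything after the sign bit was zero
        return 0.0
    k = pos              # bits remaining after the indicator bit
    frac_bits = byte & ((1 << k) - 1)
    frac = frac_bits / ((1 << k) - 1) if k > 0 else 1.0
    return sign * 10.0 ** -e * frac
```

If that reading is right, the indicator-bit position is determined by each value's magnitude and there is no per-value optimization problem at all - but I would like to confirm that.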
The exponent part of dynamic tree quantization is clear to me, and so are the sign bit and the indicator bit. The confusion I have is about the "linear quantization" part. In https://arxiv.org/abs/2110.02861 (8-bit Optimizers via Block-wise Quantization) you describe it as "linear quantization" - it feels like these bits represent `int(k) / max(int_k)`, where `int(k)` is the integer given by the binary sequence of length `k` (the bits remaining after the indicator bit) and `max(int_k)` is the maximal such integer. I lived a happy life until I looked into another paper, which is where I actually got confused: https://arxiv.org/abs/1511.04561 (8-Bit Approximations for Parallelism in Deep Learning), where you describe "linear quantization" as a binary decision tree:
"In order to decrease this error, we can use the bits of the mantissa to represent a binary tree with interval (0.1, 1) which is bisected according to the route taken through the tree; the children thus represent the start and end points for intervals in a bisection method"
I have a hard time understanding both descriptions and seeing how they are the same.
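To show exactly where I get stuck, here are the two readings side by side as I understand them (both function names and the final-midpoint convention in the second one are my own assumptions, not something taken from either paper):

```python
def linear_fraction(bits: int, k: int) -> float:
    """Reading 1 (block-wise quantization paper, as I read it):
    k bits taken as int(k) / max(int_k), uniformly spanning [0, 1]."""
    return bits / (2**k - 1)

def bisection_tree(bits: int, k: int, lo: float = 0.1, hi: float = 1.0) -> float:
    """Reading 2 (8-bit approximations paper, as I read it):
    each bit bisects the current interval, a 1 taking the upper half;
    I return the midpoint of the final interval."""
    for i in reversed(range(k)):
        mid = (lo + hi) / 2.0
        if (bits >> i) & 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For example, with k = 2 and bits = 0b11, the first reading gives 1.0 while the second gives 0.8875, so to me they look like genuinely different schemes - which is exactly my confusion.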
Thanks for your help!
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.