Adding quantization #20
I assume the correct way to do it would go something like:
Thanks for your kind response! We also assume that if quantization needs to be applied, the correct path is the one you listed. One reason is that if pruning needs to be performed on a CPU, certain operations, such as SiLU, are not supported on the CPU with FP16 and below. If you apply quantization first and then proceed with pruning, it could result in the quantized weights being readjusted back to FP32.
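To illustrate the CPU/FP16 point, here is a minimal sketch; the exact behavior depends on the PyTorch version, since older releases lack FP16 CPU kernels for activations such as SiLU:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, dtype=torch.float16)  # FP16 tensor on the CPU

try:
    # On older PyTorch releases this raises: "silu" not implemented for 'Half'
    F.silu(x)
except RuntimeError as err:
    print("CPU FP16 SiLU failed:", err)

# FP32 works on the CPU, hence the order: prune in FP32 first, quantize afterwards.
print(F.silu(x.float()))
```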
@horseee Hi, thanks for the good suggestion. May I ask why the paper doesn't compare the results between pure quantization and pure pruning?
Hi. Quantization is orthogonal to pruning and hence can be readily deployed on top of pruning to further reduce the network size. These are two different lines of model compression strategies, focusing on different types of redundancy in models. For exactly this reason, the majority of papers on pruning CNNs/BERT do not compare the performance of these two methods.
Thanks a lot! My question came from the fact that quantization methods such as GPTQ/AWQ can achieve better performance at large compression ratios than pruning methods... Your answer helped me a lot~
@horseee Hi, I have two questions and hope you could reply, thanks:
Hi. We conducted a quick experiment on the inference performance. The latency was measured on the test set of WikiText2. LLM.int8() slows down the inference of the LLaMA-7B model in our case, as is also mentioned in the LLM.int8() paper for the 6.7B model size.
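For reference, a rough way to reproduce this kind of latency comparison might look like the sketch below. This is a simplified illustration, not the script behind the numbers above; the checkpoint name and the use of `load_in_8bit` for LLM.int8() are assumptions.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "decapoda-research/llama-7b-hf"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    device_map="auto",
    load_in_8bit=True,  # LLM.int8() via bitsandbytes; drop this flag for the FP16 baseline
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)

@torch.no_grad()
def avg_forward_latency(model, inputs, n_runs=20):
    """Average wall-clock time of a forward pass over n_runs (assumes a CUDA device)."""
    model.eval()
    model(**inputs)              # warm-up
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_runs):
        model(**inputs)
    torch.cuda.synchronize()
    return (time.time() - start) / n_runs

print(f"avg latency per forward pass: {avg_forward_latency(model, inputs):.4f}s")
```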
@horseee Hi, thanks for your kind reply.
In my experiment above, the pruned model was quantized following the instructions of bitsandbytes. I didn't try GPTQ, since it seems more complicated when the model is a non-standard model that cannot be loaded from
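Since a pruned model's layer shapes no longer match the original config, the 8-bit quantization presumably has to be applied to the in-memory model object rather than through `from_pretrained`. A minimal sketch of that idea with bitsandbytes is below; the helper name is mine, and the outlier threshold of 6.0 follows the LLM.int8() paper:

```python
import torch.nn as nn
import bitsandbytes as bnb

def to_int8(model: nn.Module) -> nn.Module:
    """Replace every nn.Linear with a bitsandbytes 8-bit linear layer.

    This mirrors what `load_in_8bit=True` does for standard HF checkpoints,
    but works directly on an already-pruned model object whose layer shapes
    no longer match the original config.
    """
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            int8_layer = bnb.nn.Linear8bitLt(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                has_fp16_weights=False,  # keep int8 weights instead of fp16
                threshold=6.0,           # outlier threshold from the LLM.int8() paper
            )
            int8_layer.weight = bnb.nn.Int8Params(
                child.weight.data, requires_grad=False
            )
            if child.bias is not None:
                int8_layer.bias = child.bias
            setattr(model, name, int8_layer)
        else:
            to_int8(child)
    return model

# Quantization is materialized when the module is moved to the GPU:
# pruned_model = to_int8(pruned_model).cuda()
```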
If I combine multiple strategies such as GPTQ + LLM-Pruner + LoRA, maybe the compression ratio of the LLM can be greatly improved with acceptable performance?