Adding quantization #20

Open
Duncan1115 opened this issue Aug 12, 2023 · 9 comments

@Duncan1115

If I use multiple strategies such as GPTQ + LLM-Pruner + LoRA, could the compression ratio of the LLM be greatly improved while keeping acceptable performance?

@MarlNox

MarlNox commented Aug 13, 2023

I assume the correct way to do it would go something like:

  0. (optional) Increase the size and topic breadth of the LLM-Pruner corpus
  1. LLM-Pruner
  2. LoRA/QLoRA
  3. GPTQ

This is completely hypothetical at the moment though, and you'd need to try it out yourself to see if that'd work as intended.
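A rough, purely illustrative sketch of that ordering. The helper functions below are hypothetical stand-ins for the real LLM-Pruner, LoRA/QLoRA, and GPTQ entry points; only the prune → recover → quantize data flow reflects the list above.

```python
# Illustrative ordering only. prune_model, lora_finetune and gptq_quantize are
# hypothetical placeholders, not real APIs from LLM-Pruner, PEFT, or a GPTQ library.
from typing import Any

def prune_model(model: Any, ratio: float) -> Any:                  # placeholder: structural pruning (LLM-Pruner)
    return model

def lora_finetune(model: Any, finetune_data: Any) -> Any:          # placeholder: LoRA/QLoRA recovery fine-tuning
    return model

def gptq_quantize(model: Any, calib_data: Any, bits: int) -> Any:  # placeholder: GPTQ post-training quantization
    return model

def compress(base_model: Any, finetune_data: Any, calib_data: Any) -> Any:
    pruned = prune_model(base_model, ratio=0.2)          # 1. remove structure while weights are still FP16/FP32
    recovered = lora_finetune(pruned, finetune_data)     # 2. recover accuracy of the smaller model
    return gptq_quantize(recovered, calib_data, bits=4)  # 3. quantize last, so calibration sees the final weights
```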

@horseee
Owner

horseee commented Aug 14, 2023

> I assume the correct way to do it would go something like:
>
>   0. (optional) Increase the size and topic breadth of the LLM-Pruner corpus
>   1. LLM-Pruner
>   2. LoRA/QLoRA
>   3. GPTQ
>
> This is completely hypothetical at the moment though, and you'd need to try it out yourself to see if that'd work as intended.

Thanks for your kind response! We also assume that if quantization needs to be applied, the correct path is the one you listed. One reason is that if pruning needs to be performed on a CPU, certain operations, such as SiLU, are not supported on the CPU in FP16 and below. If you apply quantization first and then proceed with pruning, the quantized weights could end up being readjusted back to FP32.
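For reference, a quick way to check whether your own PyTorch build hits this limitation; whether the call fails depends on the PyTorch version, and nothing here is specific to LLM-Pruner.

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, dtype=torch.float16)  # FP16 tensor on the CPU
try:
    F.silu(x)
    print("SiLU supports FP16 on CPU in this PyTorch build")
except RuntimeError as err:
    # On builds where FP16 SiLU is unsupported on CPU, this raises an error
    # along the lines of: "silu_cpu" not implemented for 'Half'
    print("SiLU on CPU FP16 is unsupported here:", err)
```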

@Duncan1115
Author

Duncan1115 commented Aug 17, 2023

@horseee Hi, thanks for the good suggestion. And may I ask why the paper doesn't compare the results between pure quantization and pure pruning?

@horseee
Owner

horseee commented Aug 17, 2023

> @horseee Hi, may I ask why you don't compare the results between pure quantization and pure pruning in the paper?

Hi. Quantization is orthogonal to pruning and hence can be readily deployed on top of pruning to further reduce the network size. These are two different lines of model compression, focusing on different types of redundancy in the model. Exactly for this reason, the majority of papers on pruning CNNs/BERT do not compare the performance of these two methods.

@Duncan1115
Author

Thanks a lot! My question came from the fact that quantization methods such as GPTQ/AWQ can achieve better performance at large compression ratios than pruning methods... Your answer helped me a lot~


@77h2l

77h2l commented Aug 22, 2023

@horseee hi, I have two questions I hope you could reply to, thanks:

  1. Can a model pruned by LLM-Pruner or other pruning tricks achieve better inference performance under FP16?
  2. How can we run a model pruned by LLM-Pruner and then use GPTQ or other methods to quantize the model to INT8?

@horseee
Owner

horseee commented Aug 23, 2023

Hi. We conducted a quick experiment and here are the inference results:

| Model | #Param | Memory | Latency | Speedup | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LLaMA-7B | 6.74B | 12884.5 MiB | 69.32 s | 1x | 73.18 | 78.35 | 72.99 | 67.01 | 67.45 | 41.38 | 42.40 | 63.25 |
| LLM.int8() | 6.74B | 6777.7 MiB | 76.20 s | 0.91x | 73.36 | 78.18 | 73.01 | 66.93 | 67.47 | 40.87 | 41.80 | 63.09 |
| LLaMA-5.4B | 5.47B | 10488.4 MiB | 58.55 s | 1.18x | 76.57 | 77.37 | 66.60 | 65.82 | 70.62 | 40.70 | 38.80 | 62.36 |
| LLaMA-5.4B + LLM.int8() | 5.47B | 5444.37 MiB | 63.10 s | 1.09x | 76.39 | 76.71 | 66.62 | 66.46 | 70.54 | 40.19 | 39.20 | 62.30 |

The latency is tested on the test set of WikiText-2. LLM.int8() slows down inference of the LLaMA-7B model in our case, as is also mentioned in the LLM.int8() paper for the 6.7B model size.
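For context, a minimal sketch of how the two LLaMA-7B baseline rows can be loaded with transformers + bitsandbytes; the checkpoint path is an assumed placeholder, and the evaluation harness is omitted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b-hf"  # assumed local or hub path to the LLaMA-7B weights

tokenizer = AutoTokenizer.from_pretrained(model_path)

# FP16 baseline (the "LLaMA-7B" row)
fp16_model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

# LLM.int8() baseline (the "LLM.int8()" row): weights stored in int8 with
# FP16 outlier handling, provided by bitsandbytes under the hood
int8_model = AutoModelForCausalLM.from_pretrained(
    model_path, load_in_8bit=True, device_map="auto"
)
```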

@77h2l

77h2l commented Aug 24, 2023

@horseee hi, thanks for your kind reply.
Actually, I don't intend to compare the performance of pruning and quantization, as they are two different ways to compress the model. I mean: how can we smoothly combine a pruned model with quantization? Could it be done simply and directly?

@horseee
Owner

horseee commented Aug 24, 2023

> I mean: how can we smoothly combine a pruned model with quantization? Could it be done simply and directly?

In my experiment above, the pruned model is quantized following the instructions of bitsandbytes. I didn't try GPTQ since it seems more complicated when the model is not a standard architecture and cannot be loaded via .from_pretrained().
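A minimal sketch of that route, assuming the pruned model was saved with torch.save (so it cannot go through .from_pretrained()); the checkpoint path and the 'model' key are assumptions for illustration, not the exact layout used in the repo.

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

def to_int8(module: nn.Module, threshold: float = 6.0) -> nn.Module:
    """Recursively swap nn.Linear layers for bitsandbytes 8-bit layers."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            int8_linear = bnb.nn.Linear8bitLt(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                has_fp16_weights=False,  # keep int8 weights after conversion
                threshold=threshold,     # outlier threshold used by LLM.int8()
            )
            int8_linear.weight = bnb.nn.Int8Params(
                child.weight.data, requires_grad=False, has_fp16_weights=False
            )
            if child.bias is not None:
                int8_linear.bias = child.bias
            setattr(module, name, int8_linear)
        else:
            to_int8(child, threshold)
    return module

ckpt = torch.load("prune_log/pruned_model.bin", map_location="cpu")  # hypothetical path
model = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt
model = to_int8(model).cuda()  # moving to GPU triggers the actual int8 quantization
```

In practice you would likely skip the output projection (e.g. lm_head) when converting, as is commonly done for 8-bit loading.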
