JSQ: Compressing Large Language Models by Joint Sparsification and Quantization

[paper]

(Figure: intuition)

JSQ is a joint compression method for large language models that combines sparsification and quantization to achieve minimal performance loss at high compression rates.
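To make the idea concrete, the minimal sketch below (our own illustration, not the algorithm from the paper) shows the two operations JSQ couples: magnitude-based sparsification followed by uniform quantization of a single weight matrix. The function name and hyperparameters are hypothetical, and JSQ itself optimizes the two steps jointly rather than simply applying them one after the other.

import torch

def sparsify_then_quantize(weight: torch.Tensor, sparsity: float = 0.5, n_bits: int = 4) -> torch.Tensor:
    # Illustrative only: magnitude pruning followed by symmetric uniform quantization.
    # 1) Sparsification: zero out the smallest-magnitude entries.
    k = int(weight.numel() * sparsity)
    if k > 0:
        threshold = weight.abs().flatten().kthvalue(k).values
        weight = weight * (weight.abs() > threshold)
    # 2) Quantization: round the surviving weights onto a uniform n_bits grid.
    qmax = 2 ** (n_bits - 1) - 1
    scale = weight.abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(weight / scale), -qmax - 1, qmax)
    return q * scale  # dequantized ("fake-quant") weights

w = torch.randn(256, 256)
w_c = sparsify_then_quantize(w, sparsity=0.5, n_bits=4)
print(f"zeros: {(w_c == 0).float().mean():.2f}, distinct levels: {w_c.unique().numel()}")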

Installation

conda create -n jsq python=3.10 -y
conda activate jsq
git clone https://github.com/uanu2002/JSQ.git
cd JSQ
pip install --upgrade pip 
pip install transformers accelerate datasets
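As an optional sanity check (our suggestion, not part of the repository's instructions), confirm the core dependencies import cleanly:

python -c "import transformers, accelerate, datasets; print(transformers.__version__)"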

Usage

Compression

python main.py

Evaluation

We use EleutherAI/lm-evaluation-harness, a framework for few-shot evaluation of language models, as the evaluation tool.
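As a rough example (exact flags depend on the harness version, and the model path and task list below are placeholders rather than outputs of this repository), a compressed Hugging Face checkpoint can be evaluated along these lines:

pip install lm-eval
lm_eval --model hf \
    --model_args pretrained=/path/to/compressed_model \
    --tasks wikitext,piqa,hellaswag \
    --batch_size 8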

Citation

If you find JSQ useful or relevant to your research, please cite our paper:

@InProceedings{pmlr-v235-guo24g,
      title = {Compressing Large Language Models by Joint Sparsification and Quantization},
      author = {Guo, Jinyang and Wu, Jianyu and Wang, Zining and Liu, Jiaheng and Yang, Ge and Ding, Yifu and Gong, Ruihao and Qin, Haotong and Liu, Xianglong},
      booktitle = {Proceedings of the 41st International Conference on Machine Learning},
      pages = {16945--16957},
      year = {2024}
}
