# (NeurIPS 2024) QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation

Official implementation of [QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation](https://arxiv.org/abs/2406.00132).


## To cite our paper

```bibtex
@inproceedings{
    chen2024quanta,
    author = {Chen, Zhuo and Dangovski, Rumen and Loh, Charlotte and Dugan, Owen and Luo, Di and Solja\v{c}i\'{c}, Marin},
    booktitle = {Advances in Neural Information Processing Systems},
    doi = {10.52202/079017-2928},
    editor = {A. Globerson and L. Mackey and D. Belgrave and A. Fan and U. Paquet and J. Tomczak and C. Zhang},
    pages = {92210--92245},
    publisher = {Curran Associates, Inc.},
    title = {{QuanTA}: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation},
    url = {https://proceedings.neurips.cc/paper_files/paper/2024/file/a7c17115db36193f6b83b71b0fe1d416-Paper-Conference.pdf},
    volume = {37},
    year = {2024}
}
```

## Quickstart

```shell
git clone https://github.com/quanta-fine-tuning/quanta.git
cd quanta/quanta/
pip install -e .
pip install wandb datasets accelerate sentencepiece opt_einsum
cd ../run/
sh run.sh
```

Note: if you run into numpy compatibility errors, numpy may need to be downgraded to 1.26.4 (`pip install numpy==1.26.4`).
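For intuition about what QuanTA parametrizes, here is a minimal NumPy sketch of the core idea from the paper: the weight update acts on a hidden dimension factored into several small axes, and is built from small tensors that each contract with a *pair* of axes, analogous to two-qubit gates in a quantum circuit. This is an illustration with our own names (`quanta_apply`, `T01`, etc.), not the repository's actual API:

```python
import numpy as np

# Illustrative sketch only (not the repo's API). The hidden size D is
# factored into small axes; each trainable tensor acts on a pair of axes,
# like a two-qubit gate in a quantum circuit.
d = (4, 4, 4)                      # hidden size D = 64 factored into three axes
D = int(np.prod(d))

rng = np.random.default_rng(0)
# One small tensor per axis pair, shaped (out_i, out_j, in_i, in_j).
T01 = rng.standard_normal((d[0], d[1], d[0], d[1])) / (d[0] * d[1])
T12 = rng.standard_normal((d[1], d[2], d[1], d[2])) / (d[1] * d[2])
T02 = rng.standard_normal((d[0], d[2], d[0], d[2])) / (d[0] * d[2])

def quanta_apply(x):
    """Apply the composed update to vectors x of shape (..., D)."""
    h = x.reshape(x.shape[:-1] + d)
    h = np.einsum('...abc,xyab->...xyc', h, T01)   # acts on axes (0, 1)
    h = np.einsum('...abc,yzbc->...ayz', h, T12)   # acts on axes (1, 2)
    h = np.einsum('...abc,xzac->...xbz', h, T02)   # acts on axes (0, 2)
    return h.reshape(x.shape[:-1] + (D,))

x = rng.standard_normal((2, D))
print(quanta_apply(x).shape)   # (2, 64)
```

The point of this structure is the parameter count: the three pairwise tensors hold 3 × 4⁴ = 768 parameters versus D² = 4096 for a dense update, yet the composed linear map is not constrained to be low rank the way a LoRA-style product of two thin matrices is. (The repo installs `opt_einsum`, which can perform these same contractions with optimized contraction order.)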
