English | 中文

LOMO: LOw-Memory Optimization

This is the implementation for Full Parameter Fine-Tuning for Large Language Models with Limited Resources.

In this work, we propose a new optimizer, LOw-Memory Optimization (LOMO), which fuses the gradient computation and the parameter update in one step to reduce memory usage. Our approach enables the full parameter fine-tuning of a 7B model on a single RTX 3090, or a 65B model on a single machine with 8×RTX 3090, each with 24GB memory.

LOMO is integrated with the CoLLiE library, which supports Collaborative Tuning of Large Language Models in an Efficient Way.

[Figure: LOMO]

Dependencies

torch
deepspeed
transformers
peft
wandb

The only hard dependency is PyTorch; the others are used to reproduce our paper results.

Run the code

We provide code for fine-tuning Large Language Models (LLMs) using three different approaches: LOMO, LoRA, and LoRA + LOMO.

  1. For full parameter fine-tuning using LOMO, the implementation is in src/lomo_trainer.py, and you can run:
deepspeed --master_port "$port" --include localhost:"$CUDA_VISIBLE_DEVICES" src/train_lomo.py config/args_lomo.yaml
  2. For LoRA and LoRA + LOMO, the implementation is in src/lomo_lora_trainer.py, and you can run:
deepspeed --master_port "$port" --include localhost:"$CUDA_VISIBLE_DEVICES" src/train_lomo_lora.py config/args_lomo_lora.yaml

In the code, we include the lora_only argument in src/arguments.py, which controls whether to use LoRA alone or LoRA + LOMO. Note that when lora_only is set to True, the LOMO-related arguments have no effect.
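For illustration only, a flag such as lora_only is typically declared in a HuggingFace-style arguments dataclass along the lines of the sketch below. The class name, surrounding fields, and defaults are assumptions made for this example and do not mirror the actual contents of src/arguments.py.

from dataclasses import dataclass, field

@dataclass
class TuningArguments:
    # Hypothetical dataclass, shown only to illustrate how such a flag is declared;
    # see src/arguments.py for the real definitions.
    lora_only: bool = field(
        default=False,
        metadata={"help": "If True, train with LoRA alone and ignore LOMO-related arguments."},
    )
    lora_rank: int = field(
        default=8,
        metadata={"help": "Rank of the LoRA decomposition (illustrative default)."},
    )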

In addition, we provide a simple run.sh script for convenience. You can execute the code using the following command:

bash run.sh

For data processing, we currently provide only the six SuperGLUE datasets mentioned in the paper. If you wish to use new datasets, please modify the Dataset and DataCollator accordingly.
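As a starting point, here is a minimal sketch of the shape a custom Dataset and DataCollator could take for a causal-LM-style task. The record fields (prompt, answer), class names, and tokenizer interface are illustrative assumptions and do not mirror the repository's actual data classes.

from dataclasses import dataclass
from typing import Dict, List

import torch
from torch.utils.data import Dataset


class MyInstructionDataset(Dataset):
    """Hypothetical dataset wrapping a list of {"prompt": ..., "answer": ...} records."""

    def __init__(self, records: List[Dict[str, str]], tokenizer, max_length: int = 512):
        self.records = records
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]
        enc = self.tokenizer(rec["prompt"] + rec["answer"], truncation=True, max_length=self.max_length)
        # For causal LM fine-tuning, labels are usually a copy of the input ids.
        return {"input_ids": enc["input_ids"], "labels": list(enc["input_ids"])}


@dataclass
class MyDataCollator:
    """Hypothetical collator that right-pads a batch to its longest sequence."""

    pad_token_id: int

    def __call__(self, features):
        max_len = max(len(f["input_ids"]) for f in features)
        input_ids, labels = [], []
        for f in features:
            pad_len = max_len - len(f["input_ids"])
            input_ids.append(f["input_ids"] + [self.pad_token_id] * pad_len)
            labels.append(f["labels"] + [-100] * pad_len)  # -100 is ignored by the loss
        return {
            "input_ids": torch.tensor(input_ids, dtype=torch.long),
            "labels": torch.tensor(labels, dtype=torch.long),
        }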

For evaluation, we currently provide eval_step code only for multiple-choice QA and generation tasks. If you have other requirements, please modify the eval_step code in LOMOTrainer or LOMOLoRATrainer accordingly and provide the necessary compute_metrics to the trainer.
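For reference, a compute_metrics callable for a classification-style task could look like the following sketch. The (predictions, labels) tuple interface is an assumption modeled on the usual HuggingFace-style trainer convention and may need adapting to the trainer's exact signature.

import numpy as np


def compute_metrics(eval_pred):
    # eval_pred is assumed to be a (predictions, labels) pair of numpy arrays.
    predictions, labels = eval_pred
    preds = np.argmax(predictions, axis=-1)
    return {"accuracy": float((preds == labels).mean())}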

Reproduce our results

We provide the sampled datasets used in our experiments here. Due to limited computational resources, we report the highest results obtained from experiments conducted with the same random seed (42). We acknowledge this limitation in our work and plan to conduct repeated experiments in the next version to address it.

Feel free to raise issues if you have any questions.

Implementation

Hook function. Our implementation relies on injecting hook functions into PyTorch's backward pass. As depicted in the figure, we register a customized hook function for each parameter. When the gradient of a parameter is computed (prior to writing it to the .grad attribute), its corresponding hook function is invoked. For more information about hook functions and the backward pass of the autograd graph, please refer to PyTorch's documentation. In summary, during the backward pass, we go through a tensor and its grad_fn, write the gradient into the .grad attribute, and then pass to the next tensor.

Our customized hook function scans all the parameters, updating a parameter if its .grad attribute is not empty, and then clears and frees the .grad attribute. Since the hook function for a parameter is called before its .grad attribute is set, the .grad attribute of the last parameter in the autograd graph is not ready when the last hook function is invoked. Therefore, we perform an additional scan to update the last parameter.
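The following is a minimal, self-contained sketch of this fused-update idea using plain SGD and the standard Tensor.register_hook API. The function names (attach_fused_update_hooks, update_ready_params) are illustrative, and this is a simplified sketch rather than the repository's actual LOMOTrainer code.

import torch


def attach_fused_update_hooks(model: torch.nn.Module, lr: float = 1e-3):
    params = [p for p in model.parameters() if p.requires_grad]

    def update_ready_params():
        # Update and free every parameter whose gradient has already been
        # written to .grad during the current backward pass.
        for p in params:
            if p.grad is not None:
                p.data.add_(p.grad, alpha=-lr)  # fused SGD-style update
                p.grad = None                   # free the gradient memory immediately

    def make_hook():
        def hook(grad):
            # Called when this parameter's gradient is computed, before it is
            # written to .grad, so only the other parameters can be updated here.
            update_ready_params()
            return grad
        return hook

    for p in params:
        p.register_hook(make_hook())

    return update_ready_params  # call once more after loss.backward()


# Usage sketch (model, inputs, and loss computation are assumed):
# finish = attach_fused_update_hooks(model, lr=1e-3)
# loss = compute_loss(model, inputs)
# loss.backward()
# finish()  # extra scan: the last parameter's .grad only becomes ready here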

Citation

@inproceedings{Lv2023FullPF,
  title={Full Parameter Fine-tuning for Large Language Models with Limited Resources},
  author={Kai Lv and Yuqing Yang and Tengxiao Liu and Qi-jie Gao and Qipeng Guo and Xipeng Qiu},
  year={2023}
}
