
# Finetuning

We provide simple finetuning commands (`litgpt finetune *`) that instruction-finetune a pretrained model on datasets such as Alpaca, Dolly, and others. For more information on the supported instruction datasets and how to prepare your own custom datasets, please see the tutorials/prepare_dataset tutorial.

LitGPT currently supports the following finetuning methods:

```bash
litgpt finetune full
litgpt finetune lora
litgpt finetune adapter
litgpt finetune adapter_v2
```

The following sections provide more details about these methods, including links to additional resources.

## LitGPT finetuning commands

The sections below describe each of the available finetuning methods, with links to further resources.

## Full finetuning

```bash
litgpt finetune full
```

This method trains all model weight parameters and is the most memory-intensive finetuning technique in LitGPT.
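As a minimal sketch, a full finetuning run typically points the command at a downloaded base-model checkpoint and a dataset. The checkpoint path below is an illustrative placeholder, and the exact flags may vary across LitGPT versions (check `litgpt finetune full --help`):

```bash
# Full finetuning: updates all weight parameters of the base model,
# so it needs enough GPU memory to hold weights, gradients, and optimizer state.
# The checkpoint directory is a placeholder for a model downloaded beforehand.
litgpt finetune full \
  --checkpoint_dir checkpoints/some-org/some-base-model \
  --data Alpaca \
  --out_dir out/full-finetuned
```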

More information and resources:

## LoRA and QLoRA finetuning

```bash
litgpt finetune lora
```

LoRA and QLoRA are parameter-efficient finetuning techniques that only require updating a small number of parameters, which makes them more memory-efficient alternatives to full finetuning.
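The two variants can be sketched as follows; the checkpoint path is a placeholder, and the quantization and precision flags are assumptions based on common LitGPT usage (verify with `litgpt finetune lora --help`):

```bash
# LoRA: freezes the base weights and trains small low-rank adapter matrices.
litgpt finetune lora \
  --checkpoint_dir checkpoints/some-org/some-base-model \
  --data Alpaca

# QLoRA: same idea, but the frozen base weights are additionally quantized
# (e.g., to 4-bit NF4) to further reduce memory usage during training.
litgpt finetune lora \
  --checkpoint_dir checkpoints/some-org/some-base-model \
  --data Alpaca \
  --quantize bnb.nf4 \
  --precision bf16-true
```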

More information and resources:

## Adapter finetuning

```bash
litgpt finetune adapter
```

or

```bash
litgpt finetune adapter_v2
```

Similar to LoRA, adapter finetuning is a parameter-efficient finetuning technique that only requires training a small subset of the weight parameters, making it more memory-efficient than full-parameter finetuning.
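A sketch of both adapter commands is shown below; as above, the checkpoint path and dataset name are placeholders, and available options may differ between LitGPT versions:

```bash
# Adapter finetuning: freezes the base model and trains small adapter modules
# inserted into the transformer layers.
litgpt finetune adapter \
  --checkpoint_dir checkpoints/some-org/some-base-model \
  --data Alpaca

# Adapter v2: a variant that adds further learnable parameters on top of v1,
# trading slightly more memory for potentially better finetuning quality.
litgpt finetune adapter_v2 \
  --checkpoint_dir checkpoints/some-org/some-base-model \
  --data Alpaca
```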

More information and resources: