wu-ming233/FIRST-impl

Assuming you are on DRAGON, install torchtune and launch the distributed LoRA finetune:

```
pip install torchtune
tune run --nproc_per_node <num_gpus> first_lora_finetune_distributed.py --config ./configs/8B_lora.yaml
```

where `<num_gpus>` is the number of GPUs available.

If you are not on DRAGON, update the model checkpoint paths in ./configs/8B_lora.yaml to point at your own checkpoint location.
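The exact fields depend on this repo's config, but as a rough guide, torchtune's stock Llama-3 8B LoRA configs expose the checkpoint paths along these lines (a hypothetical sketch; the key names follow torchtune's standard config layout, and every path below is a placeholder):

```yaml
# Hypothetical excerpt of ./configs/8B_lora.yaml. Field names follow
# torchtune's stock llama3 8B_lora config; all paths are placeholders
# that must be replaced with your own checkpoint locations.
tokenizer:
  _component_: torchtune.models.llama3.llama3_tokenizer
  path: /path/to/Meta-Llama-3-8B/original/tokenizer.model

checkpointer:
  _component_: torchtune.training.FullModelMetaCheckpointer  # torchtune.utils.* in older releases
  checkpoint_dir: /path/to/Meta-Llama-3-8B/original/
  checkpoint_files: [consolidated.00.pth]
  output_dir: /path/to/output/
  model_type: LLAMA3
```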
