KevKibe/Finetuning-Llama2-with-QLoRA
About
This is an implementation of fine-tuning the Llama-2 model with the QLoRA (Quantized LoRA) framework, using a specific Llama version and a particular dataset, both from the Hugging Face Hub.
- Version of Llama used: https://huggingface.co/NousResearch/Llama-2-7b-chat-hf
- Dataset used for fine-tuning: https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k
- Fine-tuning was performed on Google Colab with the T4 GPU runtime enabled.