Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only.
[SIGIR'24] The official implementation code of MOELoRA.
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Efficiently fine-tuned large language model (LLM) for sentiment analysis on the IMDB dataset.
Stability AI SD-Turbo model fine-tuned using LoRA on Magic: The Gathering artwork.
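All of the projects above build on the same low-rank adaptation (LoRA) idea: a frozen pretrained weight matrix W is augmented with a trainable low-rank product B·A, scaled by alpha/r. As a generic illustration (not taken from any repository listed here; all names and dimensions are hypothetical), a minimal NumPy sketch:

```python
import numpy as np

# Illustrative dimensions (hypothetical): output size, input size,
# LoRA rank r, and scaling factor alpha.
d_out, d_in, r, alpha = 8, 16, 2, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Adapted forward pass: W x + (alpha / r) * B (A x).
    # Only A and B would receive gradients during fine-tuning.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapter is an exact no-op at initialization.
assert np.allclose(lora_forward(x), W @ x)
```

The zero initialization of B is the standard LoRA trick that lets training start from exactly the pretrained model; the trainable parameter count is r·(d_in + d_out) instead of d_in·d_out.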