Lightning-AI/lit-llama
About
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.
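To illustrate one of the listed capabilities, below is a minimal, generic sketch of the LoRA idea (a frozen linear weight plus a trainable low-rank update). It is not lit-llama's actual code; the class, parameter names, and defaults here are illustrative assumptions.

```python
# Generic LoRA sketch: frozen base projection W plus trainable low-rank B @ A.
# Illustrative only -- not taken from the lit-llama implementation.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained projection (in practice loaded from the base model).
        self.linear = nn.Linear(in_features, out_features, bias=False)
        self.linear.weight.requires_grad = False
        # Trainable low-rank factors: only r * (in_features + out_features) extra parameters.
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.linear(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling


if __name__ == "__main__":
    layer = LoRALinear(128, 128, r=8)
    out = layer(torch.randn(2, 128))
    print(out.shape)  # torch.Size([2, 128])
```

Because only the two small factor matrices receive gradients, fine-tuning touches a tiny fraction of the model's parameters, which is what makes LoRA-style adaptation practical on a single GPU.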