⚡ Lit-StableLM

(Banner image: Lit-StableLM and pineapple pizza)

Hackable implementation of the StableLM and Pythia family of models, released under the Apache 2.0 license. Based on nanoGPT, it supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training.

This implementation builds on Lit-LLaMA and nanoGPT, and it's powered by Lightning Fabric ⚡.

Weights are available under the Apache 2.0 license and can be downloaded following these instructions.

Design principles

This repository follows the main principle of openness through clarity.

Lit-StableLM is:

  • Simple: Single-file implementation without boilerplate.
  • Correct: Numerically equivalent to the original model.
  • Optimized: Runs on consumer hardware or at scale.
  • Open-source: No strings attached.

Avoiding code duplication is not a goal. Readability and hackability are.

Get involved!

Join our Discord to build high-performance, truly open-source models for the common benefit of the community.

 

Setup

Clone the repo

git clone https://github.com/Lightning-AI/lit-stablelm
cd lit-stablelm

Install dependencies

pip install -r requirements.txt

You are all set! 🎉
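
As a quick sanity check before running inference, you can confirm that PyTorch sees a GPU and that it supports bfloat16. This is a minimal illustrative snippet, not a script shipped with the repository:

# sanity_check.py (illustrative only, not part of the repo)
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    print(f"GPU: {name}, bfloat16 supported: {torch.cuda.is_bf16_supported()}")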

 

Use the model

To generate text predictions, you need to download the model weights. If you don't have them, check out our guide.

Run inference:

python generate.py --prompt "Hello, my name is"

This runs the 3B pre-trained model and requires ~7 GB of GPU memory using the bfloat16 datatype (3B parameters × 2 bytes per bfloat16 weight ≈ 6 GB for the weights alone, plus overhead for activations and the KV cache).

See the full guide for generating samples from the model.

You can also chat with the model interactively:

python chat.py
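
Under the hood, generation scripts like generate.py and chat.py perform standard autoregressive decoding. The sketch below shows the kind of top-k temperature sampling loop such scripts typically use. It is illustrative only: model here is a stand-in for any module mapping (batch, time) token ids to (batch, time, vocab_size) logits, and the default temperature and top_k values are assumptions, not the repository's actual API or settings.

import torch

def sample_next_token(logits, temperature=0.8, top_k=200):
    # Scale logits by temperature, then keep only the top-k candidates.
    logits = logits / temperature
    v, _ = torch.topk(logits, min(top_k, logits.size(-1)))
    logits[logits < v[..., [-1]]] = float("-inf")
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1)

@torch.no_grad()
def generate(model, idx, max_new_tokens, temperature=0.8, top_k=200):
    # idx is a 1-D tensor of prompt token ids.
    for _ in range(max_new_tokens):
        logits = model(idx.unsqueeze(0))[0, -1]  # logits at the last position
        next_token = sample_next_token(logits, temperature, top_k)
        idx = torch.cat((idx, next_token))
    return idx

Lower temperature makes the output more deterministic, while a smaller top_k restricts sampling to fewer high-probability tokens.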

Run Lit-StableLM on smaller consumer devices

Porting from Lit-LLaMA in progress 👷

Finetune the model

Porting from Lit-LLaMA in progress 👷

Pre-training

Porting from Lit-LLaMA in progress 👷

Get involved!

We are on a quest towards fully open source AI.


Join us and start contributing.

We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.

Unsure about contributing? Check out our Contributing to Lit-LLaMA: A Hitchhiker’s Guide to the Quest for Fully Open-Source AI guide. The same guidelines apply to Lit-StableLM.

Don't forget to join our Discord!

Acknowledgements

License

Lit-StableLM is released under the Apache 2.0 license.
