This repository contains Colab notebooks that let you run large language models (LLMs) with just one click.
- Run GGUF LLM models in TextGen-webui:
- Run GPTQ and EXL2 LLM models in TextGen-webui:
Check these 🤗 Hugging Face repos:
- mradermacher (GGUF)
- bartowski (GGUF)
- LoneStriker (EXL2, GGUF)
- QuantFactory (GGUF)
- Using the GGUF search here you can find all GGUF files on Hugging Face.
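If you prefer to script the search instead of browsing the website, here is a minimal sketch using the `huggingface_hub` library (this is not part of the notebooks, just an illustration):

```python
# Minimal sketch: list popular GGUF repos on Hugging Face with huggingface_hub.
from huggingface_hub import HfApi

api = HfApi()
# Search model repos mentioning "GGUF", sorted by downloads, first 10 results.
for model in api.list_models(search="GGUF", sort="downloads", direction=-1, limit=10):
    print(model.id)
```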
You can try these:
- QuantFactory/Mistral-Nemo-Instruct-2407-GGUF: 12B model, Q5_K_M / Q4_K_M (⭐🔥)
- bartowski/Mistral-Small-Instruct-2409-GGUF: 22B model, usable at Q3_K_M in 15 GB VRAM (⭐🔥)
- Meta-Llama-3.1-8B-Instruct-GGUF: Q8_0 (⭐🔥)
- Meta-Llama-3-8B-Instruct-GGUF: Q8_0 (⭐🔥)
- gemma-2-9b-it-GGUF: Q8_0 / Q6 (⭐🔥)
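If you want to fetch one of these quantized files yourself, a small sketch with `huggingface_hub` looks like this; the exact `.gguf` filename is an assumption, so check the repo's file list for the real name:

```python
# Sketch: download a single quantized file from one of the suggested repos.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Mistral-Small-Instruct-2409-GGUF",
    filename="Mistral-Small-Instruct-2409-Q3_K_M.gguf",  # assumed filename; verify in the repo
)
print(path)
```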
On the free Colab T4 GPU (15 GB VRAM) you can use:
- 22B models quantized up to Q3_K_M (context up to 8K)
- 12B models quantized up to Q5_K_M (context up to 16K)
- 8B/7B models quantized up to Q8_0 (context up to 16K if the model supports it)
- 7B/8B EXL2 models quantized at 6 bpw (context up to 16K if the model supports it)
- 12B EXL2 models quantized at 4 bpw
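As a rough sanity check behind the list above, the weight footprint of a quantized model is roughly parameters × bits-per-weight ÷ 8, plus a KV cache that grows with context length. The sketch below uses ballpark, assumed figures (bpw and model config are approximations, not exact values):

```python
# Rough VRAM estimate: quantized weights + fp16 KV cache (ballpark only).
def estimate_vram_gb(params_b, bits_per_weight, ctx_len, n_layers, n_kv_heads, head_dim):
    weights_gb = params_b * bits_per_weight / 8                    # e.g. 22B at ~3.9 bpw (Q3_K_M) ~ 10.7 GB
    kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * 2  # K and V, 2 bytes each (fp16)
    kv_gb = kv_bytes_per_token * ctx_len / 1e9
    return weights_gb + kv_gb

# Assumed config for a 22B Mistral-Small-class model: 56 layers, 8 KV heads, head_dim 128.
print(estimate_vram_gb(22, 3.9, 8192, 56, 8, 128))  # ~12.6 GB, which still fits a 15 GB T4
```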
Most older models ship with an 8K context length; if you want to use a longer context, make sure the model actually supports it.
If you want to run models larger than 20B (such as 20B or 4x7B) on Colab, you may need to reduce the number of layers offloaded to the GPU so memory usage is split between GPU VRAM and system RAM (slower, but it works 😉).
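With the llama.cpp loader this is controlled by the `--n-gpu-layers` flag. The sketch below shows the general shape of such a launch; the model filename and layer count are assumptions, and it expects the file to already be in text-generation-webui's `models/` folder:

```python
# Sketch: launch text-generation-webui with only part of a GGUF model offloaded to the GPU.
# Lowering --n-gpu-layers keeps the remaining layers in system RAM (slower, but it fits).
import subprocess

subprocess.run([
    "python", "server.py",
    "--model", "Mistral-Small-Instruct-2409-Q3_K_M.gguf",  # assumed filename under models/
    "--n-gpu-layers", "40",  # reduce this until the model fits in 15 GB VRAM
    "--share",               # expose the *.gradio.live link
])
```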
If you don't have a quantized version, you can use full-precision 7B models with the GPTQ notebook, but make sure to use the `--load-in-4bit` or `--load-in-8bit` flags. This is slower than the quantized versions but works well, so if you do have a quantized version it is still the better choice.
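For example, a full-precision model can be loaded with on-the-fly 4-bit quantization roughly like this; the model name is just an example and is assumed to be already downloaded into text-generation-webui's `models/` folder:

```python
# Sketch: load an unquantized HF model with bitsandbytes 4-bit quantization on the fly.
import subprocess

subprocess.run([
    "python", "server.py",
    "--model", "Meta-Llama-3-8B-Instruct",  # example folder name under models/
    "--load-in-4bit",  # or --load-in-8bit; slower than a pre-quantized GGUF/EXL2 file
    "--share",
])
```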
For EXL2 models you can use `--cache_4bit` to save some VRAM.

If you want creative answers, increase the temperature (0.9~1.25) and decrease min_p (0.05~0.1); if you want strict, accurate answers, decrease the temperature (0.3~0.5) and increase min_p (0.15~0.25).
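Normally you just change these sliders in the web UI's Parameters tab. If the server is started with its OpenAI-compatible API enabled (`--api`, which is an assumption here, not something the notebooks necessarily do), the same settings can be sent per request:

```python
# Sketch: per-request sampling settings against text-generation-webui's OpenAI-compatible API.
# Assumes the server was started with --api; min_p is an extra field the web UI's API accepts.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Write a short poem about GPUs."}],
        "temperature": 1.1,  # creative: 0.9~1.25, strict: 0.3~0.5
        "min_p": 0.05,       # creative: 0.05~0.1, strict: 0.15~0.25
        "max_tokens": 200,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```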
To get started with the LLM Model Runner, follow these steps:
1. Open the Colab notebook in Google Colab by clicking the "Open in Colab" button at the top of the notebook.
2. Choose the model that you want from the list.
3. Choose the quantization type.
4. Run the cell, visit the generated link (https://***.gradio.live), and start your conversation with your favorite model!
- No requirements: just open Colab with a GPU runtime.
All the necessary dependencies will be automatically installed when you run the Colab notebook.
- text-generation-webui for their great UI.