# Fine-Tuning Gemma 2B


This repository contains a full PEFT (parameter-efficient fine-tuning) pipeline with model quantization and LoRA adapters. The base model is hosted on Hugging Face and downloaded in a quantized form, LoRA adapters are attached, and the model is trained. The trained adapters are then pushed back to a personal Hugging Face repository.
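The pipeline described above can be sketched roughly as follows, using the `transformers`, `peft`, and `bitsandbytes` libraries. The model IDs, repo names, and hyperparameters below are illustrative placeholders, not values taken from this repository:

```python
def run_peft_pipeline(base_model_id: str = "google/gemma-2b",
                      output_repo: str = "your-username/gemma-2b-lora"):
    """Sketch: load a quantized base model, attach LoRA adapters, train,
    and push the adapters back to the Hub. All names are placeholders."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    # 4-bit NF4 quantization so the 2B model fits in modest GPU memory
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    model = AutoModelForCausalLM.from_pretrained(
        base_model_id, quantization_config=bnb_config, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(base_model_id)

    # Prepare the quantized model for training, then add LoRA adapters
    model = prepare_model_for_kbit_training(model)
    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # illustrative choice of layers
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()

    # ... train here, e.g. with transformers.Trainer or trl.SFTTrainer ...

    # Pushes only the small adapter weights, not the full base model
    model.push_to_hub(output_repo)
    tokenizer.push_to_hub(output_repo)
```

Pushing only the adapters keeps the uploaded artifact small (typically a few MB), since the quantized base model can be re-downloaded from its original Hugging Face repo at inference time.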

Feel free to ⭐ and clone this repo 😉

## 👨‍💻 Tech Stack

- Visual Studio Code
- Jupyter Notebook
- Python