Let’s be real: standard AI has zero aura. It’s too polite, too robotic, and honestly? Just mid. So I decided to cook something different. I took 12k+ real-world WhatsApp messages from my friend “senpai” and baked her entire personality into a Llama-3 8B model. I trained it on Kaggle, used Hugging Face Spaces as the backend, and Next.js as the frontend.
You can explore the project here: https://github.com/lilithCode/Iris_ChatBot
The model was trained through several stages of dataset preparation and fine-tuning.
- Raw WhatsApp exports containing ~81,000 lines
- Cleaned and structured into 12,613 dialogue samples
- Applied Consecutive Message Grouping to preserve the natural, paragraph-style flow of conversation
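The grouping step can be sketched roughly like this — the line format, regex, and function names are my assumptions based on standard WhatsApp exports, not the project’s actual preprocessing code:

```python
import re

# Matches lines like "12/31/23, 9:15 PM - Alice: hi" (assumed export format).
LINE_RE = re.compile(
    r"^\d{1,2}/\d{1,2}/\d{2,4}, \d{1,2}:\d{2}\s?(?:AM|PM)? - ([^:]+): (.*)$"
)

def group_messages(lines):
    """Merge consecutive lines from the same sender into single turns."""
    turns = []  # list of (sender, text)
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            # Continuation of a multi-line message: append to the last turn.
            if turns:
                turns[-1] = (turns[-1][0], turns[-1][1] + "\n" + line)
            continue
        sender, text = m.group(1), m.group(2)
        if turns and turns[-1][0] == sender:
            # Same sender as the previous message: group into one
            # paragraph-style turn instead of a new dialogue sample.
            turns[-1] = (sender, turns[-1][1] + "\n" + text)
        else:
            turns.append((sender, text))
    return turns
```

This is why the 81k raw lines collapse into far fewer dialogue samples: runs of back-to-back messages become one turn each.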
Training was performed using Unsloth, which enables efficient LLM fine-tuning.
Configuration highlights:
- Base Model: Llama-3 8B
- Training Framework: Unsloth
- Learning Rate:
5e-5 - Training Steps:
1000 - Sequence Length:
2048 - Hardware: Kaggle GPU T2x2
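A rough sketch of what that Unsloth setup might look like — only the hyperparameters listed above come from this post; the model id, LoRA settings, and batch size are my assumptions:

```python
# Hyperparameters from the configuration above; everything else in this
# sketch (checkpoint id, LoRA rank, batch size) is assumed for illustration.
HPARAMS = {
    "base_model": "unsloth/llama-3-8b-bnb-4bit",  # assumed 4-bit checkpoint
    "learning_rate": 5e-5,
    "max_steps": 1000,
    "max_seq_length": 2048,
}

def build_trainer(dataset):
    # Imports kept inside the function: Unsloth expects a GPU environment.
    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=HPARAMS["base_model"],
        max_seq_length=HPARAMS["max_seq_length"],
        load_in_4bit=True,
    )
    # Attach LoRA adapters so only a small set of weights is trained.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )
    return SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        args=TrainingArguments(
            learning_rate=HPARAMS["learning_rate"],
            max_steps=HPARAMS["max_steps"],
            per_device_train_batch_size=2,
            output_dir="outputs",
        ),
    )
```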
The training loss decreased from approximately 2.5 to 0.3.
After training, the model was exported in GGUF format (Q4_K_M quantization).
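The export step might look something like this sketch, using Unsloth’s GGUF helper and llama-cpp-python to load the result locally — the output filename and directory are assumptions (Unsloth prints the actual path when it runs):

```python
def export_and_load(model, tokenizer, out_dir="gguf_model"):
    # Export the fine-tuned model to GGUF with Q4_K_M quantization
    # (Unsloth helper), then load it back for local CPU/GPU inference.
    model.save_pretrained_gguf(out_dir, tokenizer, quantization_method="q4_k_m")

    # llama-cpp-python can serve the quantized file; the filename below
    # is an assumption about what Unsloth writes.
    from llama_cpp import Llama
    return Llama(model_path=f"{out_dir}/unsloth.Q4_K_M.gguf", n_ctx=2048)
```

Q4_K_M roughly quarters the memory footprint versus fp16, which is what makes an 8B model practical to host on a free Space.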
Frontend stack:
- Next.js 15 – React framework for UI
- Tailwind CSS – cyberpunk-themed styling
- Framer Motion – animations and transitions
- Lucide React – icon system
Backend stack:
- FastAPI – Python API server
- Llama-3 8B – fine-tuned base model
- Hugging Face Spaces – hosting environment
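A minimal sketch of how the FastAPI side could wire the GGUF model into a chat endpoint — the prompt template, route name, and request shape are my assumptions, not the project’s actual API:

```python
def format_prompt(history, user_msg, persona="Iris"):
    """Assemble a Llama-3-style chat prompt (simplified, hypothetical)."""
    parts = [
        f"<|start_header_id|>system<|end_header_id|>\n\n"
        f"You are {persona}.<|eot_id|>"
    ]
    for role, text in history:  # e.g. ("user", "...") / ("assistant", "...")
        parts.append(
            f"<|start_header_id|>{role}<|end_header_id|>\n\n{text}<|eot_id|>"
        )
    parts.append(
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_msg}<|eot_id|>"
    )
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

def create_app(llm):
    # FastAPI imported lazily so the prompt helper above stays standalone.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class ChatRequest(BaseModel):
        message: str
        history: list = []

    @app.post("/chat")
    def chat(req: ChatRequest):
        prompt = format_prompt([tuple(h) for h in req.history], req.message)
        out = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
        return {"reply": out["choices"][0]["text"]}

    return app
```

The Next.js frontend would then just POST the message plus the running history to `/chat` and render the reply.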
Clone the repo:

```bash
git clone https://github.com/lilithCode/Iris_ChatBot.git
cd Iris_ChatBot
```

Install frontend dependencies:

```bash
npm install # or yarn
```
Required environment variables:
- HF_TOKEN — Hugging Face API token
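A small sketch of how the backend might read that token — the fail-fast helper is my own convention here, not necessarily how the project does it:

```python
import os

def get_hf_token():
    # Read the Hugging Face token from the environment and fail fast
    # if it is missing, rather than erroring later on an API call.
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError("HF_TOKEN is not set; add it to your environment or .env")
    return token
```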
Production build:

```bash
npm run build
npm run start
```

Contributions welcome. Please open issues or PRs with focused changes. For model/training changes, include reproducible steps and resource usage :)