The No-Code AI Model Fine-Tuning, Comparison & Deployment Platform.
FineTuneStudio is an all-in-one open-source studio designed to democratize AI fine-tuning. It allows developers and researchers to upload datasets, clean them with AI-powered suggestions, fine-tune state-of-the-art models (Llama, Mistral, Gemma, Phi) using LoRA/PEFT, and deploy them directly to Hugging Face, all through a beautiful, premium glassmorphism interface.
- Intelligent Data Studio: Upload CSV, JSON, Audio, or Video. Get AI-powered cleaning suggestions (duplicates, missing values, etc.).
- Hardware Orchestrator: Run training on your local machine, Google Colab, or Kaggle. (RunPod & AWS support coming soon.)
- Multi-Model Support: Fine-tune Llama 3.2, Mistral 7B, Phi-2, Gemma 2B, and Liquid Foundation Models (LFM).
- Side-by-Side Comparison: Evaluate "Before vs. After" performance with real-time metrics and Knowledge Transfer Radar charts.
- 1-Click Deployment: Push your optimized models directly to the Hugging Face Hub.
- Secure Credential Management: Store Hugging Face and Kaggle keys securely, with integration for Colab Secrets.
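To illustrate the kind of checks the Data Studio's cleaning suggestions cover, here is a minimal pandas sketch. The function names and heuristics are illustrative only, not the app's actual code:

```python
import pandas as pd

def suggest_cleaning(df: pd.DataFrame) -> dict:
    """Report simple cleaning suggestions: duplicate rows and missing values per column."""
    return {
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values": {col: int(n) for col, n in df.isna().sum().items() if n > 0},
    }

def apply_cleaning(df: pd.DataFrame) -> pd.DataFrame:
    """Drop exact duplicate rows and rows that are entirely empty."""
    return df.drop_duplicates().dropna(how="all").reset_index(drop=True)

# Example: one duplicate row and one fully empty row.
df = pd.DataFrame({
    "prompt": ["hi", "hi", None, "bye"],
    "response": ["hello", "hello", None, "goodbye"],
})
print(suggest_cleaning(df))  # flags 1 duplicate row, missing values in both columns
cleaned = apply_cleaning(df)  # 2 rows remain
```

The real studio layers model-assisted suggestions on top, but simple structural checks like these cover the "duplicates, missing values" cases mentioned above.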
- Frontend: React.js, Vite, Recharts (Visualizations), Lucide React (Icons).
- Backend: FastAPI (Python), MongoDB (Metadata storage), Pandas (Data processing).
- AI/ML: Hugging Face Transformers, PEFT/LoRA, BitsAndBytes (4-bit quantization).
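The AI/ML stack above combines in roughly this way: BitsAndBytes quantizes the base model to 4 bits, and PEFT attaches a small trainable LoRA adapter on top. A minimal configuration sketch follows; the model ID, target modules, and hyperparameters are placeholders, not the studio's actual defaults:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit base weights (QLoRA-style) to fit large models on modest GPUs.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Small trainable adapter; q_proj/v_proj are common targets for Llama-style models.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",  # placeholder: any supported base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Because only the adapter is trained, the fine-tuned artifact is a few megabytes rather than a full model checkpoint, which is what makes the 1-click Hub deployment practical.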
- Python 3.10+
- Node.js 18+
- MongoDB (Running locally or on Atlas)
```bash
git clone https://github.com/maleshkumar/FineTuneStudio.git
cd FineTuneStudio
```

Start the backend:

```bash
cd backend
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
uvicorn app.main:app --reload --port 8000
```

Start the frontend (in a second terminal):

```bash
cd ../frontend
npm install
npm run dev
```

Visit http://localhost:5173 to start fine-tuning!
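For context on what the 1-click deployment does under the hood, here is a hedged sketch of a Hub push using the `huggingface_hub` client. The repo ID, token, and folder path are placeholders; the studio handles this for you using your stored credentials:

```python
from huggingface_hub import HfApi

# In practice the token comes from secure credential storage, never hard-coded.
api = HfApi(token="hf_xxx")  # placeholder token

repo_id = "your-username/my-finetuned-model"  # placeholder repo
api.create_repo(repo_id, exist_ok=True)

# Upload the trained LoRA adapter folder produced by a fine-tuning run.
api.upload_folder(
    folder_path="outputs/lora-adapter",  # placeholder path
    repo_id=repo_id,
)
```

Uploading only the adapter folder keeps deployments small and fast; consumers combine it with the original base model at load time.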
- WebSocket migration for real-time training logs.
- Multi-Cloud provider integration (RunPod/AWS).
- Shared Community Library for datasets.
- Export to ONNX/TensorRT for edge deployment.
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
FineTuneStudio is released under the MIT License.
Built with ❤️ by Malesh Kumar
