# Tinker UI
A full-stack platform for fine-tuning and training AI models, featuring a modern web UI and powerful backend API.
This project was 99% vibe coded as a fun Saturday hack to explore the Tinker Cookbook and see how quickly a full-featured training platform could be built. The result? A functional web UI that makes fine-tuning LLMs as easy as clicking a few buttons. No overthinking, just pure flow state coding.
## Table of Contents

- Demo
- Features
- Prerequisites
- Installation
- Configuration
- Running the Application
- Testing
- Troubleshooting
- Contributing
- License
## Demo

Watch the complete demo: https://www.youtube.com/watch?v=qdnSWMPZri8
## Features

- Multi-Model Support: Llama, Qwen, DeepSeek architectures
- Training Recipes: SFT, DPO, RL, Distillation, Chat SL, Math RL, On-Policy Distillation
- LoRA Fine-tuning: Efficient parameter-efficient training
- Real-time Monitoring: Live progress tracking with metrics
- Auto Hyperparameters: Intelligent parameter suggestions based on model size
- JSONL Upload: Direct dataset file upload with validation
- HuggingFace Integration: Seamless dataset importing
- Data Preview: Interactive dataset exploration
- Format Conversion: Support for Alpaca and multi-turn conversation formats
- Format Detection: Automatic dataset format identification
- Interactive Chat: Test models with real-time conversations
- Model Comparison: Side-by-side evaluation tools
- Inference API: Direct model querying capabilities
- Checkpoint Downloads: Export trained model weights
- Evaluation Suite: Comprehensive model testing with custom prompts
- One-Click Deploy: Deploy trained models to HuggingFace Hub with a single click
- Secure Token Management: Encrypted storage of HuggingFace API tokens
- Auto Model Cards: Automatically generated model cards with training details
- Public/Private Repos: Choose repository visibility
- LoRA Weight Merging: Option to merge LoRA weights with base model
- Deployment Dashboard: Track all deployments with status monitoring
- Direct Links: Quick access to your models on HuggingFace Hub
- Workspace Management: Project-based organization
- Run History: Complete training run tracking
- Model Registry: Versioned model catalog
- Metrics & Logs: Detailed training metrics and logs
- Cost Estimation: Training cost calculations
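As an illustration of the Format Detection feature above, telling Alpaca records apart from multi-turn chat records can come down to simple key checks. A hypothetical sketch (not the platform's actual implementation; field names follow the common Alpaca and OpenAI-style chat conventions):

```python
import json

def detect_format(line: str) -> str:
    """Guess the dataset format of a single JSONL record (illustrative heuristic only)."""
    record = json.loads(line)
    if "messages" in record:
        return "multi-turn"  # OpenAI-style chat: [{"role": ..., "content": ...}, ...]
    if {"instruction", "output"} <= record.keys():
        return "alpaca"      # Alpaca: instruction / (optional) input / output
    return "unknown"

alpaca = '{"instruction": "Translate to French", "input": "Hello", "output": "Bonjour"}'
chat = '{"messages": [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello!"}]}'
print(detect_format(alpaca))  # alpaca
print(detect_format(chat))    # multi-turn
```

A real detector would also sample several lines and validate field types before committing to a format, but the key-based heuristic is the core idea.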
## Prerequisites

Before getting started, ensure you have the following installed:
- Node.js (version 18 or higher) - Download
- Python (version 3.11 or higher) - Download
- pnpm (package manager) - Install with `npm install -g pnpm`
- Git - Download
## Installation

Clone the repository:

```bash
git clone https://github.com/klei30/tinker-ui.git
cd tinker-ui
```

### Backend Setup

Navigate to the backend directory and set up the Python environment:
```bash
cd backend
python -m venv .venv

# On Windows
.venv\Scripts\activate
# On macOS/Linux
source .venv/bin/activate

pip install -r requirements.txt
```

### Frontend Setup

Navigate to the frontend directory and install dependencies:
```bash
cd ../frontend
pnpm install
```

## Configuration

Create a .env file in the backend directory:
```
# backend/.env
TINKER_API_KEY=your_tinker_api_key_here
DATABASE_URL=sqlite:///./tinker_platform.db
ALLOW_ANON=true

# Generate encryption key with: python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
ENCRYPTION_KEY=your_encryption_key_here
```
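A Fernet key is simply 32 random bytes, base64url-encoded. If you want to see what the one-liner above produces without the `cryptography` package, an equivalent standard-library sketch (paste the printed value into `ENCRYPTION_KEY` yourself):

```python
import base64
import os

# A Fernet key is 32 random bytes, base64url-encoded (44 characters).
key = base64.urlsafe_b64encode(os.urandom(32)).decode()
print(key)  # paste this value into ENCRYPTION_KEY in backend/.env
```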
Create a .env.local file in the frontend directory:

```
# frontend/.env.local
NEXT_PUBLIC_API_BASE_URL=http://127.0.0.1:8000
NEXT_PUBLIC_TINKER_API_KEY=your_tinker_api_key_here
```
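For reference, the backend presumably consumes these variables along the following lines. This is a hypothetical sketch using `os.getenv` with illustrative defaults; the real code may use Pydantic settings instead:

```python
import os

# Hypothetical settings loader mirroring backend/.env (defaults are illustrative).
TINKER_API_KEY = os.getenv("TINKER_API_KEY", "")
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./tinker_platform.db")
ALLOW_ANON = os.getenv("ALLOW_ANON", "false").lower() == "true"

print(DATABASE_URL)
```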
## Running the Application

### Backend

In the backend directory:

```bash
cd backend
uvicorn main:app --reload
```

The backend will be available at http://127.0.0.1:8000.
### Frontend

In a new terminal, navigate to the frontend directory:

```bash
cd frontend
pnpm dev
```

The frontend will be available at http://localhost:3000.
## HuggingFace Deployment

1. Generate an encryption key:

   ```bash
   python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
   ```

2. Add it to backend/.env:

   ```
   ENCRYPTION_KEY=your-generated-key-here
   ```

3. Get a HuggingFace token:
   - Visit https://huggingface.co/settings/tokens
   - Create a new token with write permissions
   - Copy the token (it starts with `hf_`)

4. Connect in the UI:
   - Navigate to the Settings page (http://localhost:3000/settings)
   - Paste your HuggingFace token
   - Click "Connect HuggingFace"

5. Deploy models:
   - Complete a training run
   - Click "Deploy to HuggingFace" on any checkpoint
   - Configure repository settings
   - Click "Deploy" - your model will be live on HuggingFace Hub!
For detailed instructions, see docs/HUGGINGFACE_DEPLOYMENT.md
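One of the deploy-time conveniences, Auto Model Cards, amounts to templating training metadata into a README for the Hub. A hypothetical sketch (the real template and field names live in the backend and will differ):

```python
def build_model_card(base_model: str, recipe: str, steps: int, lora_rank: int) -> str:
    """Render a minimal HuggingFace model card (illustrative template only)."""
    return "\n".join([
        "---",
        f"base_model: {base_model}",
        "library_name: peft",
        "---",
        f"# Fine-tuned {base_model}",
        "",
        f"- Recipe: {recipe}",
        f"- Training steps: {steps}",
        f"- LoRA rank: {lora_rank}",
    ])

card = build_model_card("meta-llama/Llama-3.2-1B", "SFT", 1000, 16)
print(card)
```

The YAML front matter at the top is what the Hub parses for the model's metadata; everything below it renders as the model page.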
## Testing

This project includes comprehensive test suites for both backend and frontend.
### Backend Tests

The backend includes extensive test coverage across:
- API Endpoints (31 tests): All FastAPI endpoints
- Training Workflows (19 tests): SFT, DPO, RL, and all recipe types
- Dataset Processing (29 tests): Format detection and validation
- Checkpoint Management (27 tests): Lifecycle and storage
- Model Evaluation (27 tests): Evaluation and metrics
- Utility Functions (38 tests): Text processing and helpers
- Job Runner (20+ tests): Background job execution
```bash
cd backend

# Run all tests
pytest

# Run with coverage report
pytest --cov=. --cov-report=html

# Run specific test categories
pytest -m unit          # Unit tests only
pytest -m integration   # Integration tests only
pytest -m e2e           # End-to-end tests only

# Run specific test file
pytest tests/test_api_endpoints.py

# Run with verbose output
pytest -v
```

For more details, see backend/tests/README.md.
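The `-m unit` / `-m integration` / `-m e2e` selectors assume those markers are registered with pytest; a typical registration looks like this (illustrative — the backend's actual configuration may differ):

```ini
# backend/pytest.ini (illustrative)
[pytest]
markers =
    unit: fast, isolated unit tests
    integration: tests that touch the database or external services
    e2e: full end-to-end workflow tests
```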
### Frontend Tests

Frontend tests use Vitest and React Testing Library:

- Hyperparameter Calculator (5 tests): Component rendering and interactions
```bash
cd frontend/tests

# Install dependencies (first time only)
pnpm install

# Run all tests
pnpm test:full

# Run tests in watch mode
pnpm test

# Run with UI
pnpm test:ui

# Run specific test file
pnpm test:run simple_tests.test.ts
```

### Test Statistics

- Total Tests: 229
- Backend Tests: 191+ (Unit, Integration, E2E)
- Frontend Tests: 5
- Test Coverage: Comprehensive coverage of core functionality
- Success Rate: ~82% (with known fixture issues being addressed)
## Troubleshooting

### Port Conflicts

If port 8000 or 3000 is already in use:

- For backend: change the port in the uvicorn command with `--port 8001`
- For frontend: it will automatically use the next available port (e.g., 3001)
- Update `NEXT_PUBLIC_API_BASE_URL` in `frontend/.env.local` to match the backend port
### API Connection Errors

If the frontend shows "Failed to fetch" or no models load:

- Ensure the backend is running on the correct host and port
- Check that `NEXT_PUBLIC_API_BASE_URL` matches the backend URL
- Verify the API key in both .env files
- Try restarting both services
### Backend Import Errors

If you see import errors in the backend:

- Ensure all dependencies are installed: `pip install -r requirements.txt`
- Check that you're in the virtual environment
- Some optional dependencies may not be available (such as `tinker` or `llm`)
### Frontend Build Errors

If the frontend fails to compile:

- Ensure all dependencies are installed: `pnpm install`
- Clear the Next.js cache: `rm -rf .next` (or `rd /s /q .next` on Windows)
- Restart the dev server
### Missing Training Logs

If training starts but logs don't appear in the UI:

- Ensure the backend can write to the `artifacts/` directory
- Check the browser console for any fetch errors
- See docs/PROGRESS_BAR_FIX.md for details on progress tracking
### Connection Issues

If you experience connection issues:

- Use `127.0.0.1` instead of `localhost` for `API_BASE_URL`
- Ensure the backend is bound to `127.0.0.1`, not `0.0.0.0`
### Test Failures

If tests fail:

- Ensure all test dependencies are installed
- Check that environment variables are set correctly
- See the test documentation for specific requirements
- Some tests may require the `TINKER_API_KEY` environment variable
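To keep API-dependent tests from failing when that key is absent, one common pattern is a reusable skipif marker (a sketch of the idiom, not necessarily how this suite handles it):

```python
import os

import pytest

# Skip tests that need a live Tinker API when the key is absent (illustrative pattern).
requires_tinker = pytest.mark.skipif(
    not os.environ.get("TINKER_API_KEY"),
    reason="TINKER_API_KEY not set",
)

@requires_tinker
def test_live_inference():
    ...  # would call the real Tinker API here
```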
## Project Structure

```
tinker-ui/
├── backend/              # FastAPI backend
│   ├── main.py           # Main API application
│   ├── job_runner.py     # Background job execution
│   ├── models.py         # SQLAlchemy models
│   ├── tinker_cookbook/  # Training recipes
│   ├── tests/            # Backend test suite
│   └── requirements.txt  # Python dependencies
├── frontend/             # Next.js frontend
│   ├── app/              # Next.js 16 app directory
│   ├── components/       # React components
│   ├── lib/              # Utility functions
│   ├── tests/            # Frontend test suite
│   └── package.json      # Node dependencies
├── docs/                 # Documentation
├── TESTING_SUMMARY.md    # Testing guide
├── TEST_RESULTS.md       # Test results
└── README.md             # This file
```
## Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
- Fork the repository
- Create a feature branch: `git checkout -b feature/your-feature`
- Make your changes
- Run tests: `pytest` (backend) and `pnpm test:full` (frontend)
- Commit with descriptive messages
- Push to your fork
- Create a Pull Request
## Documentation

- docs/HUGGINGFACE_DEPLOYMENT.md - HuggingFace deployment guide
- TESTING_SUMMARY.md - Complete testing documentation
- TEST_RESULTS.md - Detailed test results and analysis
- docs/PROGRESS_BAR_FIX.md - Progress tracking implementation details
- backend/tests/README.md - Backend testing guide
## Tech Stack

### Backend

- FastAPI: Modern, fast web framework for building APIs
- SQLAlchemy: SQL toolkit and ORM
- Pydantic: Data validation using Python type hints
- Pytest: Testing framework with extensive fixtures
- Ruff: Fast Python linter and formatter
### Frontend

- Next.js 16: React framework with App Router
- React 19: Latest React with server components
- TypeScript: Type-safe JavaScript
- Tailwind CSS: Utility-first CSS framework
- Radix UI: Accessible component primitives
- Vitest: Fast unit testing framework
### ML & Training

- Tinker Cookbook: Training recipes and utilities
- HuggingFace: Dataset and model integration
- LoRA: Parameter-efficient fine-tuning
## License

In progress.
## Acknowledgments

- Built with the Tinker Cookbook
- Inspired by modern ML training platforms
- Community contributions and feedback
Made with ❤️ by the community
