A sophisticated AI-powered prompt optimization platform with FastAPI backend and Next.js frontend. Features real-time optimization, comprehensive analytics, and local AI support via Ollama - no external API keys required!
This project was inspired by the research paper "Automated Prompt Engineering for Large Language Models", which explores systematic approaches to prompt optimization and engineering. Building on these academic foundations, I created a practical utility that automates prompt optimization workflows, making advanced prompt engineering techniques accessible through an intuitive user interface.
This implementation combines the theoretical insights from the research with real-world usability, providing both novice and expert users with powerful tools to enhance their AI interactions through optimized prompting strategies.
PromptCraft/
├── Web/ # Next.js Frontend
├── API/ # FastAPI Backend
├── package.json # Root package.json for convenience scripts
└── README.md # This file
- Local AI with Ollama: Complete privacy with local model execution - no external API keys needed
- Real-time Prompt Optimization: Advanced algorithms including DSPy and meta-prompting
- Live Performance Tracking: Real-time analytics and session monitoring
- Interactive Results: Before/after comparisons with improvement scores
- Multiple Local Models: Support for Llama, Mistral, CodeLlama, and other Ollama models
- Performance Metrics: Success rates, improvement scores, and cost tracking
- Provider Analytics: Comparative analysis across different AI providers
- Session History: Searchable optimization history with filtering
- Real-time Charts: Interactive visualizations of optimization trends
- Session Persistence: SQLite database for optimization history
- Training Data: Dataset management and analytics
- API Configuration: Health monitoring and connection status
- Export/Import: Session and data portability
- Node.js 18.0.0 or higher
- Python 3.11 or higher
- npm/yarn package manager
- Ollama (required for AI functionality) - Install Ollama
- Clone the repository

```bash
git clone <repository-url>
cd PromptCraft
```
- Install and Start Ollama

```bash
# Option A: Use our setup script (Recommended)
cd API
./setup_ollama.sh

# Option B: Manual setup
curl -fsSL https://ollama.ai/install.sh | sh
ollama serve                  # Start Ollama service
ollama pull llama3.2:latest   # Pull models
ollama pull mistral:7b
```
- Quick Setup (Recommended)

```bash
# Install all dependencies
npm run install:web
npm run install:api

# Set up environment variables
cp API/.env.example API/.env
# No API keys needed - just verify Ollama URL
```
- Manual Setup

Backend Setup:

```bash
# Install backend dependencies
cd API
pip install -r requirements.txt

# Create environment file
cp .env.example .env
```

Edit `API/.env` for Ollama configuration:

```bash
# Ollama Configuration (Required)
OLLAMA_BASE_URL=http://localhost:11434
DEFAULT_MODEL_NAME=llama3.2:latest

# Database
DATABASE_URL=sqlite:///./app.db
```
Frontend Setup:
```bash
# Install frontend dependencies
cd Web
npm install --legacy-peer-deps
```
- Start the Application

Option 1: Start both services together (Recommended)

```bash
# From project root
npm run dev
```

Option 2: Start services separately

```bash
# Terminal 1 - Backend
npm run dev:api

# Terminal 2 - Frontend
npm run dev:web
```
- Backend: http://127.0.0.1:8000
- Frontend: http://localhost:3000
After installation, verify everything works correctly:
```bash
# Run automated setup tests
./test_setup.sh

# Start the API server (in one terminal)
cd API && make dev

# Run optimization tests (in another terminal)
python test_optimization.py
```

See TESTING_GUIDE.md for comprehensive manual testing instructions.
API/
├── app/
│ ├── api/
│ │ └── v1/
│ │ ├── endpoints/ # API endpoints
│ │ │ ├── sessions.py # Session management
│ │ │ ├── providers.py # AI provider integration
│ │ │ ├── training.py # Training data
│ │ │ └── datasets.py # Dataset management
│ │ └── router.py # API routing
│ ├── core/
│ │ ├── config.py # Configuration
│ │ └── database.py # Database setup
│ ├── models/ # SQLAlchemy models
│ ├── schemas/ # Pydantic schemas
│ ├── services/ # Business logic
│ │ ├── optimization_service.py # Core optimization
│ │ ├── ollama_service.py # Ollama integration
│ │ └── lm_manager.py # Language model management
│ └── main.py # FastAPI application
├── app.db # SQLite database
├── requirements.txt # Python dependencies
├── .env.example # Environment template
└── pyproject.toml # Project configuration
Web/
├── app/ # Next.js app directory
│ ├── globals.css # Global styles
│ ├── layout.tsx # Root layout
│ └── page.tsx # Main page
├── components/
│ ├── ui/ # shadcn/ui components
│ ├── optimization-dashboard.tsx # Main dashboard
│ ├── session-sidebar.tsx # Session management
│ └── theme-provider.tsx # Theme management
├── lib/
│ ├── api/ # API integration
│ │ ├── client.ts # API client
│ │ └── hooks.ts # React hooks
│ └── utils.ts # Utilities
├── types/
│ └── index.ts # TypeScript types
└── package.json # Frontend dependencies
- `GET /api/v1/sessions/` - List all optimization sessions
- `POST /api/v1/sessions/` - Create a new session
- `GET /api/v1/sessions/{id}` - Get a specific session
- `POST /api/v1/sessions/{id}/optimize` - Optimize a prompt
- `GET /api/v1/sessions/analytics/performance` - Performance metrics
- `GET /api/v1/providers/` - List AI providers and models
- `GET /api/v1/providers/ollama/health` - Check Ollama status
- `GET /api/v1/providers/ollama/models` - List Ollama models
- `GET /health` - API health check
- `GET /` - API information
```bash
# Test backend health
curl http://127.0.0.1:8000/health

# Test sessions endpoint
curl http://127.0.0.1:8000/api/v1/sessions/

# Test providers
curl http://127.0.0.1:8000/api/v1/providers/
```

- Open http://localhost:3000 in your browser
- Check that sessions load in the sidebar (should show existing sessions)
- Verify providers populate in the dashboard dropdown
- Test optimization flow:
- Enter prompt: "Help me write better emails"
- Select Ollama provider
- Choose an available model
- Click "Start Optimization"
- Wait for optimized result
- Create optimization session via frontend
- Verify it appears in backend API
- Check analytics update with new data
- Test session persistence after page refresh
The application supports multiple optimization techniques:
- Meta-Prompt: Uses meta-prompting for prompt improvement
- DSPy: Systematic prompt optimization using DSPy framework
- Simple: Basic prompt enhancement using direct LM feedback
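The meta-prompt idea above can be sketched as a small loop: ask a model to critique and rewrite the prompt, then keep the rewrite if it improves on the original. The following is a minimal illustration, not PromptCraft's actual implementation — the template, acceptance rule, and the injected `lm` callable are all simplified stand-ins (a real setup would call Ollama here):

```python
from typing import Callable

META_TEMPLATE = (
    "You are a prompt engineer. Rewrite the following prompt to be "
    "clearer, more specific, and better structured. Return only the "
    "rewritten prompt.\n\nPrompt:\n{prompt}"
)

def meta_optimize(prompt: str, lm: Callable[[str], str], rounds: int = 2) -> str:
    """Iteratively ask the model to improve the prompt via a meta-prompt."""
    best = prompt
    for _ in range(rounds):
        candidate = lm(META_TEMPLATE.format(prompt=best)).strip()
        # Naive acceptance rule for illustration: keep rewrites that add detail.
        if len(candidate) > len(best):
            best = candidate
    return best

# Usage with a stub model standing in for a real LM call:
fake_lm = lambda text: (
    "Draft a concise, polite email that states its purpose in the first sentence."
)
print(meta_optimize("Help me write better emails", fake_lm))
```

Injecting the model as a plain callable keeps the optimization logic testable without a running Ollama instance.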
PromptCraft leverages DSPy (Declarative Self-improving Language Programs) as a core framework for systematic prompt optimization. DSPy provides a programmatic approach to prompt engineering through composable modules and type-safe signatures. In this project, DSPy serves as the abstraction layer between the optimization service and local AI models via Ollama, enabling privacy-focused prompt optimization without external API dependencies.

The integration uses DSPy's `LM` class for unified model management, `context()` for thread-safe operations in async environments, and predictors such as `Predict()` and `ChainOfThought()` for multi-step reasoning tasks. The architecture implements custom DSPy signatures with `InputField` and `OutputField` definitions, allowing structured prompt transformations with clear input/output contracts.

This foundation supports the meta-prompt optimization method and enables task-specific prompt engineering for code generation, creative writing, analysis, and translation workflows, providing measurable improvement scores (0-100) with detailed performance metadata for each optimization session.
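As a concrete sketch of the DSPy pieces named above — this is illustrative, not PromptCraft's actual code, and it assumes a recent DSPy release plus a local Ollama server with `llama3.2:latest` already pulled:

```python
import dspy

# Assumes Ollama is serving locally; the model name must match a pulled model.
lm = dspy.LM(
    "ollama_chat/llama3.2:latest",
    api_base="http://localhost:11434",
    api_key="",
)

class ImprovePrompt(dspy.Signature):
    """Rewrite a prompt to be clearer and more specific."""
    original_prompt: str = dspy.InputField(desc="the user's raw prompt")
    improved_prompt: str = dspy.OutputField(desc="a sharpened rewrite of the prompt")

improver = dspy.ChainOfThought(ImprovePrompt)

# dspy.context scopes the LM to this call, which matters in async servers.
with dspy.context(lm=lm):
    result = improver(original_prompt="Help me write better emails")
print(result.improved_prompt)
```

The signature's input/output fields define the contract, and swapping `ChainOfThought` for `Predict` trades the intermediate reasoning step for a single-shot completion.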
```bash
# Development (both services)
npm run dev

# Install dependencies
npm run install:web
npm run install:api

# Production
npm run start

# Linting
npm run lint:web

# Clean build artifacts
npm run clean
```

Backend:

```bash
# Start development server
cd API && python -m uvicorn app.main:app --reload

# Run tests
cd API && python -m pytest tests/ -v

# Install dependencies
cd API && pip install -r requirements.txt
```

Frontend:

```bash
# Development
cd Web && npm run dev

# Production build
cd Web && npm run build
cd Web && npm start

# Linting
cd Web && npm run lint

# Install dependencies
cd Web && npm install --legacy-peer-deps
```

- FastAPI - Modern Python web framework
- SQLAlchemy - Database ORM
- Pydantic - Data validation
- DSPy - Prompt optimization framework
- httpx - Async HTTP client
- SQLite - Database storage
- Next.js 15 - React framework with App Router
- TypeScript - Type safety
- Tailwind CSS - Utility-first styling
- shadcn/ui - Component library
- Recharts - Data visualization
- Lucide - Icons
Backend (`API/.env`):

```bash
# External providers (optional - no API keys needed for local Ollama)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AIza...
COHERE_API_KEY=...

# Ollama (required for local models)
OLLAMA_BASE_URL=http://localhost:11434

# Application settings
DATABASE_URL=sqlite:///./app.db
LOG_LEVEL=INFO
API_HOST=127.0.0.1
API_PORT=8000
```

Frontend: no environment variables are required for basic functionality.
- CORS errors: Ensure the backend is running on 127.0.0.1:8000
- Module not found: Run `npm install --legacy-peer-deps`
- Ollama not connecting: Check that Ollama is running with `ollama serve`
- Database errors: Delete `app.db` and restart the backend to recreate it
```bash
# Check if services are running
curl http://127.0.0.1:8000/health
curl -I http://localhost:3000

# Check Ollama
curl http://localhost:11434/api/tags

# View logs (if logging is enabled)
tail -f API/logs/*

# Check API directly
curl http://127.0.0.1:8000/api/v1/sessions/
curl http://127.0.0.1:8000/api/v1/providers/
```

- Frontend: Optimized React components with proper loading states
- Backend: Async FastAPI with SQLite for fast local development
- Caching: API response caching for improved performance
- Real-time: WebSocket support for live optimization updates
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Issues: Open a GitHub issue for bugs and feature requests
- Documentation: Check the `/docs` directory for detailed guides
- API Docs: Visit http://127.0.0.1:8000/docs when the backend is running
Built using FastAPI, Next.js, and modern AI optimization techniques
