An AI-enhanced search engine with a minimalist design, dark/light mode support, and production-grade architecture.
Recent Updates: See IMPROVEMENTS.md for all improvements made (type safety, error handling, testing, documentation).
Install dependencies:

```bash
npm install
```

Copy `.env.example` to `.env.local` and fill in your API keys:

```bash
cp .env.example .env.local
```

Minimum Required:

- `OPENROUTER_API_KEY` - get a free key from OpenRouter

Optional (for web search):

- `SERPAPI_KEY` - for real-time Google search
- `GOOGLE_SEARCH_API_KEY` + `GOOGLE_SEARCH_CX` - alternative search provider

See `.env.example` for detailed setup instructions.

Start the development server:

```bash
npm run dev
```

Visit http://localhost:3000 in your browser.
- 🎨 Ultra-minimalist design with dark/light mode
- 🔍 Real-time AI suggestions powered by OpenRouter
- 📊 AI-generated summaries with traditional web results
- ⚡ Sub-100ms response times (Edge Runtime)
- 📱 Mobile-first responsive layout
- ♿ Accessible (ARIA labels, keyboard navigation)
- 🔐 Production-grade architecture with:
- Model racing for faster answers
- Semantic caching for duplicate queries
- Rate limiting per IP
- Input sanitization
- Structured error handling
- Comprehensive logging
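As an illustration of the per-IP rate limiting mentioned above, here is a minimal fixed-window sketch. The window size, request cap, and function names are illustrative assumptions, not the project's actual `lib/rate-limit.ts` implementation.

```typescript
// Minimal fixed-window rate limiter keyed by client IP.
// WINDOW_MS and MAX_REQUESTS are illustrative values (assumptions).
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 30;

type Bucket = { count: number; windowStart: number };
const buckets = new Map<string, Bucket>();

function isAllowed(ip: string, now: number = Date.now()): boolean {
  const bucket = buckets.get(ip);
  if (!bucket || now - bucket.windowStart >= WINDOW_MS) {
    // No bucket yet, or the window expired: start a fresh window.
    buckets.set(ip, { count: 1, windowStart: now });
    return true;
  }
  if (bucket.count < MAX_REQUESTS) {
    bucket.count += 1;
    return true;
  }
  return false; // Over the limit: the caller should respond with 429.
}
```

A real deployment on the Edge Runtime would need shared storage rather than an in-process `Map`, since each edge instance has its own memory.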
- ARCHITECTURE.md - Complete system design, API docs, model racing strategy
- IMPROVEMENTS.md - All improvements made (type safety, error handling, testing)
- .env.example - Environment setup guide
```bash
npm run build          # Production build
npm start              # Run production server
npm run lint           # Check code quality
npm test               # Run test suite
npm run test:watch     # Watch mode
```

```
src/
├── app/
│   ├── page.tsx                # Home page
│   ├── results/                # Search results page
│   ├── api/
│   │   └── ai/
│   │       ├── stream/         # Streaming answers (main feature)
│   │       ├── suggest/        # AI suggestions
│   │       └── cache/          # Cache monitoring
│   └── api/search/             # Web search aggregation
├── components/
│   ├── search/                 # Search UI components
│   ├── layout/                 # Layout components
│   └── ui/                     # Base UI components
├── hooks/
│   ├── useSearch.ts            # Search state management
│   ├── useStreamingAnswer.ts   # SSE consumer
│   └── useSearchHistory.ts     # History persistence
└── lib/
    ├── constants.ts            # All magic numbers
    ├── errors.ts               # Error types & formatting
    ├── logger.ts               # Structured logging
    ├── sanitize.ts             # Input validation
    ├── openrouter/             # AI integration
    └── rate-limit.ts           # Rate limiting
```
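At the core of an SSE consumer like `useStreamingAnswer.ts` is parsing raw event-stream chunks into tokens. The sketch below shows that step in isolation; the `[DONE]` sentinel and plain-text payloads are assumptions about the wire format, not the project's documented protocol.

```typescript
// Split a raw server-sent-events chunk into `data:` payloads.
// The "[DONE]" sentinel is an assumed end-of-stream marker.
function parseSseChunk(chunk: string): { tokens: string[]; done: boolean } {
  const tokens: string[] = [];
  let done = false;
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data:")) continue; // skip blanks and comments
    const payload = line.slice(5).trim();
    if (payload === "[DONE]") {
      done = true;
      break;
    }
    tokens.push(payload);
  }
  return { tokens, done };
}
```

A React hook would feed chunks from a `ReadableStream` reader into this parser and append the tokens to state as they arrive.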
Answers are requested from multiple AI models simultaneously, and the first successful response is streamed back, so no request waits on the slowest model. Losing connections are immediately aborted to prevent token waste.
Queries are normalized so that "What is AI?" and "what IS artificial intelligence?" are recognized as the same question, returning cached answers instantly.
All APIs run on the Vercel Edge Runtime for <100ms cold starts, are globally replicated, and include built-in rate limiting and automatic failover.
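Opting a Next.js App Router handler into the Edge Runtime is a one-line export. The route body below is illustrative, not the project's actual handler:

```typescript
// Next.js App Router convention: exporting `runtime = "edge"` deploys
// this route handler to the Edge Runtime instead of Node.js.
export const runtime = "edge";

export async function GET(request: Request): Promise<Response> {
  // Echo the `q` query parameter back as JSON (placeholder logic).
  const query = new URL(request.url).searchParams.get("q") ?? "";
  return Response.json({ query });
}
```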
```bash
# Run all tests
npm test

# Watch mode
npm run test:watch

# Run a specific file
npm test -- models
```

Test Coverage:
- Rate limiting logic
- Model classification & naming
- Error handling
- Input sanitization
- Query validation
- Check ARCHITECTURE.md for guidelines
- Follow TypeScript strict mode
- Add tests for new features
- Use standardized error handling (`Errors.*`)
- Use structured logging (`logger.*`)
- Add JSDoc comments to exports
- Search Results: <500ms (SerpAPI direct)
- AI Answer: 2-3s (model racing + streaming)
- Suggestions: <1s (cached)
- Cache Hits: <50ms (instant)
- Input sanitization on all queries
- UTF-8 validation
- SQL injection protection
- Rate limiting per IP
- No API keys in client code
- Structured error messages (no details leaked)
MIT - See LICENSE for details.
"OPENROUTER_API_KEY missing"
- Create
.env.local(see .env.example) - Get free key from OpenRouter
- Restart
npm run dev
Search returning no results
- Set `SERPAPI_KEY` or the Google Custom Search keys
- Or use the fallback (less accurate)
- Check that the rate limit has not been exceeded
Slow first query
- The first query performs a model-racing lookup (~2s)
- Subsequent queries are served from the cache (~50ms)
- The model list is cached (10-minute TTL)
See ARCHITECTURE.md for full debugging guide.
Built with Next.js, React, TypeScript, Tailwind CSS, and OpenRouter AI.