| Lines of Code | Written by Hand | AI Tokens Used | Time to Build |
|---|---|---|---|
| 20,000+ | 2 lines | $400 | 12 hours |
This entire platform was built using Claude Code - we literally wrote only 2 lines of code by hand. Everything else was AI-generated: the complex multi-agent system, React components, API endpoints, CSS styling. This represents the future of software development.
HackRadar is a revolutionary platform that transforms how hackathons evaluate projects. Teams get real-time AI feedback to improve their submissions, while organizers get consistent, fair evaluation across all projects.
- Multi-Format Support: Upload PDFs, screenshots, websites, pitch decks
- AI-Powered Analysis: Multi-agent system with specialized evaluation agents
- Real-Time Scoring: Instant feedback across 6 key criteria
- Actionable Feedback: Specific suggestions on HOW to improve
- Progress Tracking: See your score evolution over time
- Live Leaderboard: Track the competition in real-time
A unique collaboration of three minds with one vision:
- Ciprian (Chip) Rarau - DevOps, deployment, AI coding implementation
- Yehor Sanko - AI architect, prompt engineering, UI/UX refinement
- Luca Rarau - Customer validation, sales, product feedback
The magic happened when Ciprian and Yehor discovered they had the same idea independently - creating a tool to help hackathon teams succeed.
```
Frontend Layer
├── Next.js 15
├── React 19
├── TypeScript
└── Tailwind CSS

API Layer
├── /api/projects
├── /api/timeline
├── /api/assess
└── /api/leaderboard

AI Evaluation Engine
├── BaseAgent (Orchestrator)
├── TextEvaluator (Claude 3.5)
└── SRTracker (Checklist)

Database Layer
├── MongoDB Atlas
├── Projects Collection
├── Timeline Collection
└── Evaluations Collection
```
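The evaluation engine above pairs an orchestrator (BaseAgent) with specialized agents. Here is a minimal TypeScript sketch of that fan-out-and-merge pattern; the interfaces, the placeholder heuristics, and the score-averaging merge are illustrative assumptions, not the real implementation:

```typescript
// Hypothetical sketch of the multi-agent layout: an orchestrator fans a
// submission out to specialized agents and merges their results.
// Agent names, interfaces, and scoring heuristics are illustrative only.

interface AgentResult {
  agent: string;
  score: number; // 0-100 contribution from this agent
  feedback: string[];
}

interface EvaluationAgent {
  name: string;
  evaluate(content: string): Promise<AgentResult>;
}

// Stand-in for the Claude-backed text evaluator.
const textEvaluator: EvaluationAgent = {
  name: "TextEvaluator",
  async evaluate(content) {
    return {
      agent: "TextEvaluator",
      score: Math.min(100, content.length / 10), // placeholder heuristic
      feedback: ["Tighten the problem statement."],
    };
  },
};

// Stand-in for the checklist-based submission-readiness tracker.
const srTracker: EvaluationAgent = {
  name: "SRTracker",
  async evaluate(content) {
    const hasDemo = content.includes("demo");
    return {
      agent: "SRTracker",
      score: hasDemo ? 100 : 50,
      feedback: hasDemo ? [] : ["Add a link to a working demo."],
    };
  },
};

// BaseAgent-style orchestration: run all agents, average their scores,
// and pool their feedback.
async function orchestrate(
  content: string,
  agents: EvaluationAgent[],
): Promise<{ score: number; feedback: string[] }> {
  const results = await Promise.all(agents.map((a) => a.evaluate(content)));
  const score = results.reduce((sum, r) => sum + r.score, 0) / results.length;
  return { score, feedback: results.flatMap((r) => r.feedback) };
}
```

A real orchestrator would likely weight agents differently and pass richer context, but the dependency shape stays the same: agents share one interface so new evaluators can be added without touching the orchestrator.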
Revolutionary cumulative scoring maintains context across multiple submissions: a team's score can only improve or hold steady - it never decreases unfairly.
```typescript
// Conversation continuity across evaluations
const messageHistory = await buildMessageHistory(projectId);
const evaluation = await evaluateWithContext(
  content,
  messageHistory,
  conversationId
);
```

Timeline-based architecture enables perfect audit trails and historical tracking.
Innovative score anchoring system ensures teams are protected from scoring volatility.
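One simple way such anchoring could work is to floor each criterion at the team's previous best. The README does not show the actual logic, so this TypeScript sketch is an assumption about the mechanism, not the real code:

```typescript
// Hypothetical score-anchoring sketch: each per-criterion score is
// anchored at the team's previous best, so a noisy re-evaluation can
// never lower it. The real anchoring logic may differ.

type Scores = Record<string, number>;

function anchorScores(previous: Scores, latest: Scores): Scores {
  const anchored: Scores = {};
  for (const criterion of Object.keys(latest)) {
    const prev = previous[criterion] ?? 0; // new criteria start from zero
    anchored[criterion] = Math.max(prev, latest[criterion]);
  }
  return anchored;
}
```

For example, `anchorScores({ clarity: 12 }, { clarity: 10, impact: 15 })` keeps clarity at 12 while accepting the new impact score of 15.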
| Criterion | Points | Description |
|---|---|---|
| Clarity | 15 | Message clarity and structure |
| Problem Value | 20 | Pain point identification |
| Feasibility | 15 | Technical evidence |
| Originality | 15 | Innovation factor |
| Impact | 20 | Conversion potential |
| Readiness | 15 | Completeness check |
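The six criteria above sum to exactly 100 points, so a total score is just the sum of capped per-criterion scores. A minimal sketch, assuming each criterion is scored directly in points (the key names here are illustrative):

```typescript
// Rubric weights from the table above (they sum to 100 points).
const MAX_POINTS: Record<string, number> = {
  clarity: 15,
  problemValue: 20,
  feasibility: 15,
  originality: 15,
  impact: 20,
  readiness: 15,
};

// Total score: clamp each criterion to [0, max], then sum.
function totalScore(scores: Record<string, number>): number {
  return Object.entries(MAX_POINTS).reduce(
    (sum, [criterion, max]) =>
      sum + Math.min(max, Math.max(0, scores[criterion] ?? 0)),
    0,
  );
}
```

Clamping keeps a single over-scored criterion from inflating the total past the rubric's intent.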
- Node.js 18+
- MongoDB connection string
- Anthropic API key
```shell
# Clone the repository
git clone https://github.com/crarau/HackRadar.git
cd HackRadar/hackradar-next

# Install dependencies
npm install

# Set up environment variables
cp .env.example .env.local
# Edit .env.local with your credentials

# Run development server
npm run dev

# Open http://localhost:7843
```

Required variables in `.env.local`:

```
MONGODB_URI=your_mongodb_connection_string
ANTHROPIC_API_KEY=your_anthropic_api_key
NEXT_PUBLIC_GOOGLE_CLIENT_ID=your_google_oauth_client_id
```

| Hours | Achievement | Details |
|---|---|---|
| 0-2 | Domain & Infrastructure | Deployed on Azure/Vercel, configured domain |
| 2-4 | Core Development | Next.js app, authentication, database schema |
| 4-6 | AI Multi-Agent System | Claude API integration, evaluation agents |
| 6-8 | Frontend & UX | React components, real-time updates |
| 8-10 | Customer Validation | Talked with 10+ teams, gathered feedback |
| 10-12 | Testing & Polish | End-to-end tests, API simulation, UI testing |
During the hackathon, we:
- Interviewed 10+ teams directly
- Saw teams' eyes light up when receiving actionable feedback
- Pitched to existing teams and watched them improve in real-time
- Validated the need for real-time feedback during hackathons
We leveraged the power of LLMs to accelerate our testing strategy:
- End-to-End Tests for main application flows
- API Simulation to test without external dependencies
- UI Testing through simulated user interactions
- LLM-Assisted Testing: Used AI to write tests and validate API responses directly
This approach helped us accelerate development significantly. While we don't have 100% coverage, we ensured the critical paths were thoroughly tested.
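One common way to realize the "API simulation" idea above is dependency injection: the handler depends on an evaluator interface, and tests swap in a canned stub so nothing ever calls the Anthropic API. This is an illustrative TypeScript sketch of that pattern, with assumed names, not the project's actual test harness:

```typescript
// Hypothetical API-simulation sketch: the assess handler takes its
// evaluator as a dependency, so tests can inject a stub instead of
// the real Claude-backed client. All names here are illustrative.

interface Evaluator {
  evaluate(content: string): Promise<{ score: number; feedback: string }>;
}

// Handler logic depends only on the Evaluator interface.
async function assessHandler(
  body: { content: string },
  evaluator: Evaluator,
): Promise<{ status: number; score: number; feedback: string }> {
  if (!body.content) {
    return { status: 400, score: 0, feedback: "Missing content" };
  }
  const result = await evaluator.evaluate(body.content);
  return { status: 200, ...result };
}

// Canned stub used in tests in place of the real evaluator.
const stubEvaluator: Evaluator = {
  async evaluate() {
    return { score: 72, feedback: "Clarify the target user." };
  },
};
```

With this shape, end-to-end tests can exercise validation and response handling deterministically, while the real evaluator is only wired in at the API boundary.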
- Frontend: Next.js 15, React 19, TypeScript, Tailwind CSS
- Backend: Next.js API Routes, Azure VM deployment
- Database: MongoDB Atlas
- AI: Anthropic Claude 3.5 Sonnet
- Authentication: Google OAuth
- Development: Claude Code (99% AI-generated!)
- 60 Git Commits
- 20,000+ Lines of Code
- 2 Lines Written Manually
- $400 Worth of AI Tokens
- 10+ Teams Interviewed
- End-to-End Tests for Main Features
- Get specific, actionable feedback to improve your project
- Track your progress throughout the hackathon
- Understand exactly what judges are looking for
- Iterate quickly based on AI suggestions
- Consistent evaluation across all projects
- Reduce judge workload and bias
- Get comprehensive analytics
- Provide better experience for participants
- Submit Your Project: Upload any format - PDF, screenshots, code, URLs
- Get Instant Analysis: AI evaluates across 6 key criteria
- Receive Actionable Feedback: Specific suggestions for improvement
- Track Progress: See your score evolution over time
- Compete Live: Watch the leaderboard update in real-time
```shell
# Production build
npm run build

# Run tests
npm test

# Deploy to Vercel
vercel --prod
```

- Live Platform: hackradar.me
- Technical Journey: hackradar.me/technical-journey
- Leaderboard: hackradar.me/public-dashboard
MIT License - see LICENSE file for details
- AGI Ventures Canada for hosting the hackathon
- Anthropic for Claude API
- All the teams who provided invaluable feedback
- The hackathon community for the inspiration
