Transform natural language into production-ready system architectures in seconds.
Helix is an AI-powered platform that bridges the gap between system design ideas and real-world implementation. Simply describe your system in plain English, and watch as Helix generates complete architecture blueprints, cost estimates, and performance simulations, all powered by LLMs.
Helix doesn't just generate diagrams; it creates complete, production-ready system designs with:
- AI-Powered Architecture Generation: Convert plain-English prompts into sophisticated, scalable system architectures with proper component identification, technology recommendations, and communication patterns
- Interactive Visual Whiteboard: Drag-and-drop diagram editor with real-time visualization, zoom, pan, and manual editing capabilities
- Intelligent Cost Estimation: Get detailed monthly infrastructure cost breakdowns (compute, storage, network) before you build
- Performance Simulation Engine: Simulate load-testing scenarios with RPS calculations, latency metrics (P95, P99), and scalability predictions
- AI Code Generation: Generate complete boilerplate code for all services (TypeScript/Node.js) with proper structure, Dockerfiles, and API handlers
- One-Click Project Export: Download complete Docker-ready projects with docker-compose.yml, README, and all generated code
- Pattern-Based Similarity Search: Find similar architectures from your past projects using intelligent pattern matching
- Conversational AI Assistant: Iterate on designs through natural-language conversation; ask for improvements, evaluate risks, or request modifications
- Real-Time Architecture Persistence: All your designs are automatically saved and can be loaded anytime
- Token-Based Usage System: Fair usage tracking with visual quota indicators and graceful handling when limits are reached
Input: Natural language prompt
Output: Complete architecture blueprint with services, databases, caches, queues, and connections
How it works:
- Uses LLMs to analyze your prompt
- Automatically identifies system components (microservices, databases, caches, message queues, CDNs, load balancers)
- Determines communication patterns (sync/async/pub-sub)
- Recommends appropriate technologies (Node.js, Go, Python, PostgreSQL, Redis, Kafka, etc.)
- Generates structured JSON blueprint with relationships and properties
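The blueprint's exact schema is not documented here; as an illustration, a generated blueprint might deserialize into types along these lines (all field names are hypothetical, not Helix's actual schema):

```typescript
// Hypothetical shape of a generated architecture blueprint.
// Field names are illustrative only.
type ComponentKind = "service" | "database" | "cache" | "queue" | "cdn" | "load-balancer";
type Pattern = "sync" | "async" | "pub-sub";

interface Component {
  id: string;
  kind: ComponentKind;
  technology: string; // e.g. "Node.js", "PostgreSQL", "Redis", "Kafka"
}

interface Connection {
  from: string;    // source component id
  to: string;      // target component id
  pattern: Pattern;
}

interface Blueprint {
  components: Component[];
  connections: Connection[];
}

const example: Blueprint = {
  components: [
    { id: "api", kind: "service", technology: "Node.js" },
    { id: "db", kind: "database", technology: "PostgreSQL" },
    { id: "cache", kind: "cache", technology: "Redis" },
  ],
  connections: [
    { from: "api", to: "db", pattern: "sync" },
    { from: "api", to: "cache", pattern: "sync" },
  ],
};
```

The key idea is that components and their relationships are plain structured data, which is what makes downstream features (visualization, cost estimation, code generation) possible.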
Example Prompts:
"Design a scalable e-commerce platform with Redis caching and payment processing"
"Create an Uber-like ride-hailing system with real-time tracking"
"Build a chat application with WebSockets and message queuing"
Features:
- Drag-and-Drop Editing: Reposition components by dragging nodes
- Visual Connection Builder: Connect services with sync, async, or pub-sub relationships
- Component Library: Searchable palette with 30+ pre-configured components across categories:
  - Edge & Routing (API Gateway, Load Balancer, CDN)
  - Compute & Services (Node.js, Go, Python, Java services)
  - Data Stores (PostgreSQL, MySQL, DynamoDB, Elasticsearch)
  - Messaging & Events (Kafka, RabbitMQ, Pub/Sub, SQS)
  - Caching Layers (Redis, Memcached, Edge Cache)
  - Observability & Ops (Monitoring, Logging, Alerting)
  - Identity & Security (Auth Service, Policy Service, Secrets Manager)
- Zoom & Pan Controls: Navigate large architectures with smooth zoom (35%-150%) and pan
- Fit-to-View: Automatically frame your entire architecture
- Save to Architecture: Persist manual edits back to the AI-generated design
Calculates:
- Monthly infrastructure costs broken down by:
  - Compute: Service instances, containers, serverless functions
  - Storage: Database storage, object storage, backups
  - Network: Data transfer, CDN usage, API gateway requests
  - Additional Services: Message queues, caches, monitoring tools
Output: Detailed cost breakdown with total monthly estimate
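Conceptually, the total is the sum of the per-category figures. A minimal sketch (the real estimator lives in lib/utils/cost-estimator.ts and may model this differently):

```typescript
// Illustrative cost-breakdown summation; categories mirror the list above.
interface CostBreakdown {
  compute: number;    // USD per month
  storage: number;
  network: number;
  additional: number; // queues, caches, monitoring tools
}

function totalMonthlyCost(c: CostBreakdown): number {
  return c.compute + c.storage + c.network + c.additional;
}

// Hypothetical figures for a mid-sized architecture:
const estimate: CostBreakdown = { compute: 1500, storage: 400, network: 350, additional: 250 };
totalMonthlyCost(estimate); // 2500
```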
Simulates:
- Various request-per-second (RPS) scenarios
- Latency calculations (average, P95, P99)
- Success rate predictions
- Bottleneck identification
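P95 and P99 latencies are percentiles over sampled response times. As a sketch of how such metrics can be derived (the simulation engine's actual method may differ):

```typescript
// Nearest-rank percentile over a set of latency samples:
// the smallest value such that at least p% of samples are <= it.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latenciesMs = Array.from({ length: 100 }, (_, i) => i + 1); // 1..100 ms
percentile(latenciesMs, 95); // 95
percentile(latenciesMs, 99); // 99
```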
Visualization:
- Interactive D3.js line chart showing latency vs. load
- Real-time performance metrics display
- Scalability predictions
Features:
- Technology-aware code generation (Node.js/Express, Go/Fiber, Python/FastAPI, Java/Spring Boot)
- Streaming generation for multiple services (see progress in real-time)
- Copy individual files or entire service code
- Export all code as a ZIP file
- Automatic code persistence (saved with architecture)
Downloads complete project package:
- docker-compose.yml - All services configured and ready
- README.md - Architecture documentation
- Generated code for all services
- Architecture JSON blueprint
- Cost estimation summary
Ready to deploy with a single docker-compose up command!
Two modes:
Generate Mode:
- Create new architectures from scratch
- Iterate on existing designs ("add Redis caching", "make it more scalable")
- Refine architectures through conversation
Evaluate Mode:
- Get feedback on your current design
- Identify potential risks and bottlenecks
- Receive improvement suggestions
- Ask architecture questions
Features:
- Full conversation history
- Context-aware responses
- Iteration tracking
- Automatic architecture updates
Automatic saving:
- All generated architectures are saved to database
- Past projects accessible from sidebar
- Load previous designs with one click
- View project metadata (services count, connections, last updated)
Project Management:
- View all past projects
- Quick access to recent designs
- Automatic versioning through iterations
1. Navigate to the application
2. Create an account or log in; you'll receive an initial token quota (default: 5,000 tokens)
3. Enter a prompt in the chat interface:
   "Design a scalable microservices architecture for a food delivery app"
4. Click Send or press Enter
5. Wait for generation (typically 10-30 seconds):
   - AI analyzes your prompt
   - Generates the architecture blueprint
   - Creates the visualization
   - Calculates the cost estimate
6. View the result:
   - An interactive diagram appears in the Design tab
   - An architecture summary appears in chat
   - The cost breakdown is visible
Design Tab:
- View the interactive diagram
- Drag nodes to reposition
- Use zoom controls to navigate
- Click on nodes to see details
Whiteboard Mode:
- Click "Add Component" to manually add services
- Use "Connect" mode to draw relationships
- Save changes back to architecture
Continue the conversation:
"Add Redis caching layer"
"Make the database more scalable"
"Add a message queue for async processing"
Each iteration updates your architecture while preserving the conversation history.
1. Navigate to the Simulation & Cost tab
2. Click "Run Simulation"
3. View the performance metrics:
   - Max RPS capacity
   - Average latency
   - Performance graph
4. Analyze the scalability predictions
- Click "Export" button
- Download complete project package
Use the chat to get AI feedback:
"What are the potential bottlenecks in this design?"
"How can I improve scalability?"
"What security considerations should I add?"
Helix uses a token-based quota system to manage AI usage fairly:
- Initial Quota: 5,000 tokens per user (default)
- Header Badge: Shows remaining tokens
- Auto-refresh: Updates every 15 seconds
- Color coding:
  - Green: Plenty of tokens remaining
  - Red: Low tokens (approaching limit)
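The color coding above can be sketched as a simple threshold check; the actual cutoff Helix uses is not documented, so the 10% watermark here is an assumption:

```typescript
// Illustrative badge-color logic; the 10% threshold is assumed,
// not Helix's documented behavior.
function badgeColor(remaining: number, quota: number): "green" | "red" {
  const LOW_WATERMARK = 0.1; // assumed: red at or below 10% of quota
  return remaining / quota <= LOW_WATERMARK ? "red" : "green";
}

badgeColor(4000, 5000); // "green"
badgeColor(300, 5000);  // "red"
```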
Important: When your token quota reaches 0:
1. Endpoints stop generating new content:
   - Architecture generation is disabled
   - Code generation is disabled
   - New evaluations are blocked
2. You can still:
   - View all past architectures
   - View all previously generated code
   - Export existing projects
   - Navigate the interface
   - Load and examine saved designs
3. User-friendly messaging:
   - Clear error messages explain the situation
   - Direct link to the support page
   - No abrupt redirects; you stay in control
4. Get more tokens:
   - Click the "Contribute" button in the header
   - Or visit the /support-my-work page
   - Support the project to receive additional tokens (₹1 → 15 Tokens, $1 → 1,275 Tokens)
   - Email sayanmajumder2002@gmail.com after supporting
- Real-time tracking: See exactly how many tokens you've used
- Operation history: View detailed token usage per operation
- Fair limits: Prevents abuse while allowing generous usage
- Support-based expansion: Contributors receive additional tokens
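The gating behavior described in this section can be sketched as follows; the operation names and function signature are illustrative, not Helix's actual API:

```typescript
// Sketch of quota gating: generation-type operations are blocked at
// zero tokens, while read-only operations remain available.
type Operation = "generate" | "evaluate" | "view" | "export";

function isAllowed(op: Operation, remainingTokens: number): boolean {
  const readOnly: Operation[] = ["view", "export"];
  if (readOnly.includes(op)) return true; // always allowed
  return remainingTokens > 0;             // generation needs tokens
}

isAllowed("generate", 0); // false: generation is disabled
isAllowed("export", 0);   // true: exports still work
```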
Your support helps:
- Cover AI infrastructure costs (Gemini API usage)
- Maintain and improve the platform
- Add new features and capabilities
- Scale infrastructure for all users
- Click the "Contribute" button in the header (β icon)
- Or navigate to
/support-my-workpage - Click "Support with Coffee" button
- Complete your contribution
- Email
sayanmajumder2002@gmail.comwith your contribution details - Receive additional tokens and premium features
Support the project and receive tokens based on your contribution:
- 1 Rupee (₹1) → 15 Tokens
- 1 Dollar ($1) → 1,275 Tokens
Example: A $10 contribution gives you 12,750 tokens!
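The conversion is a straight multiplication by the published rates:

```typescript
// Conversion rates from the support page: ₹1 → 15 tokens, $1 → 1,275 tokens.
function contributionToTokens(amount: number, currency: "INR" | "USD"): number {
  const rate = currency === "INR" ? 15 : 1275;
  return Math.floor(amount * rate);
}

contributionToTokens(10, "USD"); // 12750, matching the example above
contributionToTokens(100, "INR"); // 1500
```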
- More tokens for architecture generation
- Faster responses with priority processing
- Early access to new features and AI models
- Premium support and feature requests
- Help scale infrastructure for the community
- Next.js 14 - React framework with App Router
- TypeScript - Type-safe development
- Tailwind CSS - Utility-first styling
- shadcn/ui - Beautiful, accessible components
- D3.js - Interactive data visualizations
- React Hooks - Modern state management
- Next.js API Routes - Serverless API endpoints
- Node.js - Runtime environment
- PostgreSQL - Database (optional, for persistence)
- Drizzle ORM - Type-safe database queries
- Google Gemini AI - Architecture and code generation
- Sentry - Error monitoring and performance tracking (optional)
- Docker - Containerization
- Docker Compose - Multi-container orchestration
- Vercel - Deployment platform
- Node.js 18+ - Download
- npm or pnpm - Package manager
- Gemini API Key - Get from Google AI Studio
- PostgreSQL (optional) - For architecture persistence
1. Clone the repository:
   git clone <your-repo-url>
   cd helix-app
2. Install dependencies:
   npm install
   # or: pnpm install
3. Set up environment variables by creating a .env.local file:
   # Required
   GEMINI_API_KEY=your_gemini_api_key_here
   # Optional: Database (for persistence)
   DATABASE_URL=postgresql://user:password@localhost:5432/helix
   # Optional: Sentry (for error monitoring)
   SENTRY_DSN=your_sentry_dsn_here
   SENTRY_TRACES_SAMPLE_RATE=0.1
   # Optional: Client-side Sentry
   NEXT_PUBLIC_SENTRY_DSN=your_sentry_dsn_here
   NEXT_PUBLIC_SENTRY_TRACES_SAMPLE_RATE=0.1
4. Run the development server:
   npm run dev
5. Open your browser and navigate to http://localhost:3000
For architecture persistence and advanced features:
1. Start PostgreSQL with Docker:
   docker-compose up -d postgres
2. Install the PostgreSQL client:
   npm install pg
3. Initialize the database: the schema is created automatically from lib/db/schema.sql
4. Update .env.local:
   DATABASE_URL=postgresql://postgres:postgres@localhost:5432/helix
1. Push to GitHub:
   git push origin main
2. Import in Vercel:
   - Go to vercel.com
   - Click "New Project"
   - Import your GitHub repository
3. Add environment variables:
   - GEMINI_API_KEY (required)
   - DATABASE_URL (optional)
   - SENTRY_DSN (optional)
4. Deploy:
   - Click "Deploy"
   - Wait for the build to complete
   - Your app is live!
1. Build the image:
   docker build -t helix-app .
2. Run the container:
   docker run -p 3000:3000 \
     -e GEMINI_API_KEY=your_key \
     -e DATABASE_URL=your_db_url \
     helix-app
3. Access the app: open http://localhost:3000
1. Update docker-compose.yml with your environment variables
2. Start the services:
   docker-compose up -d
3. View the logs:
   docker-compose logs -f
helix-app/
├── app/
│   ├── api/                          # API Routes
│   │   ├── design/                   # Architecture generation
│   │   ├── chat/                     # AI conversation/evaluation
│   │   ├── generate-code/            # Code generation
│   │   ├── simulate/                 # Load simulation
<!-- │   │   ├── export/               # Project export
│   │   ├── export-boilerplate/       # Code export -->
│   │   ├── architectures/            # Architecture CRUD
│   │   ├── code-templates/           # Code template management
│   │   ├── token-usage/              # Token tracking
│   │   └── auth/                     # Authentication
│   ├── workspace/                    # Main workspace page
│   ├── login/                        # Login page
│   ├── signup/                       # Signup page
│   ├── support-my-work/              # Support/contribution page
│   ├── layout.tsx                    # Root layout
│   └── globals.css                   # Global styles
├── components/
│   ├── ui/                           # shadcn/ui components
│   ├── diagram/
│   │   ├── DiagramEditor.tsx         # Main diagram editor
│   │   ├── CustomNode.tsx            # Node component
│   │   ├── PropertiesPanel.tsx       # Properties panel
│   │   └── Sidebar.tsx               # Component library sidebar
│   ├── ArchitectureWhiteboard.tsx    # Interactive whiteboard
│   ├── LoadSimulationChart.tsx       # Performance chart
│   └── ErrorBoundary.tsx             # Error handling
├── lib/
│   ├── ai/
│   │   └── gemini-client.ts          # Gemini AI integration
│   ├── auth/
│   │   ├── get-user.ts               # User authentication
│   │   └── utils.ts                  # Auth utilities
│   ├── db/
│   │   ├── schema.ts                 # Database schema
│   │   ├── schema.sql                # SQL schema
│   │   ├── drizzle.ts                # Drizzle ORM setup
│   │   └── client.ts                 # Database client
│   ├── utils/
│   │   ├── cost-estimator.ts         # Cost calculation
│   │   ├── similarity-search.ts      # Pattern matching
│   │   └── token-quota.ts            # Token management
│   ├── monitoring/
│   │   └── api-monitoring.ts         # API performance tracking
│   └── types.ts                      # TypeScript types
├── docker-compose.yml                # Docker services
├── Dockerfile                        # Production container
├── package.json                      # Dependencies
├── tailwind.config.ts                # Tailwind configuration
└── tsconfig.json                     # TypeScript configuration
Prompt:
"Design a scalable e-commerce platform with microservices architecture.
Include user authentication, product catalog, shopping cart, payment processing,
and order management. Use Redis for caching and Kafka for event streaming."
Result:
- 8+ services generated (User Service, Product Service, Cart Service, Payment Service, Order Service, etc.)
- Redis cache layer
- Kafka message queue
- PostgreSQL database
- API Gateway
- Cost estimation: ~$2,500/month
- Complete code generated for all services
Prompt:
"Create a real-time chat application with WebSocket support,
message persistence, and user presence tracking.
Use Redis for pub/sub and PostgreSQL for message storage."
Result:
- WebSocket service
- Message service
- Presence service
- Redis pub/sub
- PostgreSQL database
- Load balancer
- Performance simulation shows 10,000+ concurrent users
Prompt:
"Design an Uber-like ride-hailing platform with real-time location tracking,
driver matching, payment processing, and notification system."
Result:
- Location service with geospatial database
- Matching service with algorithm
- Payment service
- Notification service
- Real-time tracking with WebSockets
- Message queue for async processing
Required:
- GEMINI_API_KEY - Your Google Gemini API key

Optional:
- DATABASE_URL - PostgreSQL connection string
- SENTRY_DSN - Sentry error monitoring
- NEXT_PUBLIC_SENTRY_DSN - Client-side Sentry
- SENTRY_TRACES_SAMPLE_RATE - Performance monitoring sample rate

The database schema includes:
- users - User accounts and token quotas
- architectures - Saved architecture blueprints
- code_templates - Generated code templates
- token_usage - Token consumption tracking
See lib/db/schema.sql for complete schema.
This is expected behavior! When your token quota reaches 0:
- You can still:
  - View all past architectures
  - View previously generated code
  - Export existing projects
  - Navigate the interface
- You cannot:
  - Generate new architectures
  - Generate new code
  - Run new evaluations

Solution:
1. Click the "Contribute" button (coffee icon) in the header
2. Or visit the /support-my-work page
3. Support the project to receive more tokens
4. Email sayanmajumder2002@gmail.com after supporting
"Authentication required" error:
- Make sure you're logged in
- Clear cookies and try again
- Check that database is connected (if using persistence)
"Token limit reached" error:
- Your quota has been exhausted
- Support the project to get more tokens
- Check token usage in the header badge
"Failed to generate architecture":
- Check your
GEMINI_API_KEYis valid - Verify API key has sufficient quota
- Check network connection
Database connection issues:
- Verify
DATABASE_URLis correct - Ensure PostgreSQL is running
- Check database credentials
Contributions are welcome! Here's how you can help:
- Report bugs - Open an issue with detailed description
- Suggest features - Share your ideas for improvements
- Submit PRs - Fix bugs or add features
- Improve documentation - Help make docs better
- Support the project - Help cover infrastructure costs
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
MIT License - see LICENSE file for details
- Google Gemini AI - Powering the architecture and code generation
- Next.js Team - Amazing React framework
- shadcn/ui - Beautiful component library
- D3.js - Powerful visualization library
- All Contributors - Making Helix better every day
- Issues: GitHub Issues
- Email: sayanmajumder2002@gmail.com
- Support Page: /support-my-work (in-app)
1. Clone the repo and install dependencies
2. Add your Gemini API key to .env.local
3. Run npm run dev
4. Start designing your first architecture!
Happy Architecting!
Built with ❤️ using Next.js, TypeScript, and Google Gemini AI