Shubham-Mohite7/HyperStack

██╗  ██╗██╗   ██╗██████╗ ███████╗██████╗ ███████╗████████╗ █████╗  ██████╗██╗  ██╗
██║  ██║╚██╗ ██╔╝██╔══██╗██╔════╝██╔══██╗██╔════╝╚══██╔══╝██╔══██╗██╔════╝██║ ██╔╝
███████║ ╚████╔╝ ██████╔╝█████╗  ██████╔╝███████╗   ██║   ███████║██║     █████╔╝ 
██╔══██║  ╚██╔╝  ██╔═══╝ ██╔══╝  ██╔══██╗╚════██║   ██║   ██╔══██║██║     ██╔═██╗ 
██║  ██║   ██║   ██║     ███████╗██║  ██║███████║   ██║   ██║  ██║╚██████╗██║  ██╗
╚═╝  ╚═╝   ╚═╝   ╚═╝     ╚══════╝╚═╝  ╚═╝╚══════╝   ╚═╝   ╚═╝  ╚═╝ ╚═════╝╚═╝  ╚═╝

AI-powered tech stack recommendation engine

Describe your project in plain English — get a confidence-scored,
layer-by-layer recommendation generated by Llama 3 70B via Groq.


What is HyperStack?

HyperStack takes a plain English description of your project and runs it through a three-stage AI pipeline to produce a confidence-scored tech stack recommendation — across every architectural layer, with trade-offs, install commands, and a scalability roadmap.

No questionnaires. No dropdowns. Just describe your project and get a senior architect's recommendation in under 30 seconds.


Preview

| Input | Output |
| --- | --- |
| Plain English project description | Layer-by-layer recommendations with confidence scores |
| Team size, budget, constraints | Exact npm/pip install commands |
| Scale expectations | Architecture pattern + scalability roadmap |


Three-Stage AI Pipeline

project description
         │
         ▼
┌─────────────────────────────────┐
│  Stage 1 — Requirement          │  Temperature: 0.05
│  Extraction                     │  15 structured signals extracted
│                                 │  domain · scale · team · constraints
└──────────────────┬──────────────┘
                   │
                   ▼
┌─────────────────────────────────┐
│  Stage 2 — Stack Scoring        │  35+ technologies in knowledge base
│  with Retrieval (RAG)           │  Keyword TF-IDF retrieval → top 20
│                                 │  LLM scores per layer with 0–100 confidence
└──────────────────┬──────────────┘
                   │
                   ▼
┌─────────────────────────────────┐
│  Stage 3 — Report Generation    │  Full markdown report
│                                 │  Install commands · Trade-offs
│                                 │  Scalability roadmap
└─────────────────────────────────┘
         │
         ▼
  Saved to Supabase
  (history · user account · shareable link)


Tech Stack

Frontend

| Technology | Purpose |
| --- | --- |
| React 18 + Vite | UI framework with fast HMR and TypeScript support |
| Tailwind CSS | Utility-first styling with custom design tokens |
| React Router v6 | Client-side routing with location-state result passing |
| Axios | HTTP client with interceptors and a 120s timeout for LLM calls |
| react-markdown | Renders the AI-generated recommendation report |
| Syne + DM Sans + DM Mono | Typography system — display, body, and code fonts |

Backend

| Technology | Purpose |
| --- | --- |
| Express.js (Node 18+) | API server — runs locally and exports a default app for Vercel serverless |
| Groq SDK | Low-latency Llama 3 70B inference |
| Supabase JS | Database client for saving predictions and reading history |
| CORS middleware | Scoped to configured origins — secure in production |

Database & Auth

| Technology | Purpose |
| --- | --- |
| Supabase Auth | Email/password + OAuth (Google, GitHub) — zero auth code written |
| Supabase PostgreSQL | Persistent storage for predictions, user accounts, and history |
| Row-Level Security | Each user can read and write only their own prediction history |
| Supabase Realtime | Live updates when a prediction completes (optional) |

Infrastructure

| Technology | Purpose |
| --- | --- |
| Vercel | Frontend + serverless API deployment from one vercel.json |
| npm workspaces | Monorepo — shared types across apps/web and apps/api |
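The monorepo wiring amounts to a workspaces field in the root package.json. A minimal sketch of what that file might contain (the repo's actual root manifest also defines the dev script used in Getting Started; exact contents are an assumption):

```json
{
  "name": "hyperstack",
  "private": true,
  "workspaces": ["apps/*", "packages/*"]
}
```

With this in place, `npm install` at the root hoists dependencies for both apps, and packages/shared resolves as a normal dependency from apps/web and apps/api.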


Project Structure

hyperstack/
│
├── apps/
│   │
│   ├── web/                          # React + Vite + Tailwind
│   │   └── src/
│   │       ├── components/
│   │       │   ├── features/         # Domain components
│   │       │   │   ├── ArchitectureBanner.tsx
│   │       │   │   ├── PipelineStatus.tsx
│   │       │   │   ├── ProjectInput.tsx
│   │       │   │   ├── RequirementsPanel.tsx
│   │       │   │   ├── ReportPanel.tsx
│   │       │   │   └── StackGrid.tsx
│   │       │   ├── layout/           # Shell components
│   │       │   │   ├── Navbar.tsx
│   │       │   │   └── RootLayout.tsx
│   │       │   └── ui/               # Primitives — Badge, Spinner, ConfidenceBar
│   │       ├── hooks/
│   │       │   └── usePrediction.ts  # Full prediction lifecycle hook
│   │       ├── lib/
│   │       │   ├── apiClient.ts      # Axios instance
│   │       │   └── supabaseClient.ts # Supabase browser client
│   │       ├── pages/
│   │       │   ├── HomePage.tsx
│   │       │   ├── ResultsPage.tsx
│   │       │   ├── HistoryPage.tsx   # Past predictions from Supabase
│   │       │   ├── AuthPage.tsx      # Login / signup via Supabase Auth
│   │       │   └── NotFoundPage.tsx
│   │       ├── styles/globals.css
│   │       └── types/index.ts
│   │
│   └── api/                          # Express — local dev + Vercel serverless
│       ├── lib/
│       │   ├── groqClient.js         # Groq SDK singleton
│       │   ├── knowledgeBase.js      # 35+ curated technology entries
│       │   ├── jsonParser.js         # Robust LLM output parser
│       │   ├── supabaseClient.js     # Supabase service-role client
│       │   └── validators.js
│       ├── middleware/
│       │   ├── cors.js
│       │   ├── errorHandler.js
│       │   └── requestLogger.js
│       ├── routes/
│       │   ├── health.js
│       │   └── predict.js            # POST /api/predict
│       ├── services/
│       │   └── pipeline.js           # Three-stage AI orchestration
│       └── index.js
│
├── packages/
│   └── shared/                       # Types shared between web and api
│
├── vercel.json                       # Routes /api/* to serverless, else React
├── package.json                      # npm workspace root
└── README.md


Database Schema

-- Stores every prediction result linked to the authenticated user
create table predictions (
  id          uuid primary key default gen_random_uuid(),
  user_id     uuid references auth.users(id) on delete cascade,
  description text not null,
  requirements jsonb,
  scored      jsonb,
  report      text,
  model       text,
  duration_ms int,
  created_at  timestamptz default now()
);

-- Row-level security: users can only access their own predictions
alter table predictions enable row level security;

create policy "Users read own predictions"
  on predictions for select
  using (auth.uid() = user_id);

create policy "Users insert own predictions"
  on predictions for insert
  with check (auth.uid() = user_id);


Getting Started

Prerequisites

- Node.js 18+
- A Groq API key
- A Supabase project

1. Clone and install

git clone https://github.com/Shubham-Mohite7/HyperStack.git
cd hyperstack
npm install

2. Configure environment — API

Create apps/api/.env:

# Groq
GROQ_API_KEY=your_groq_api_key_here
GROQ_MODEL=llama3-70b-8192

# Server
PORT=3001
CLIENT_ORIGIN=http://localhost:5173

# Supabase (service role — server only, never expose to client)
SUPABASE_URL=https://your-project-ref.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your_service_role_key_here

3. Configure environment — Web

Create apps/web/.env.local:

# Supabase (anon key — safe to expose to browser)
VITE_SUPABASE_URL=https://your-project-ref.supabase.co
VITE_SUPABASE_ANON_KEY=your_anon_key_here

4. Run the Supabase schema

Copy the SQL from the Database Schema section above and run it in your
Supabase project → SQL Editor.

5. Start both apps

npm run dev

| App | URL |
| --- | --- |
| Frontend | http://localhost:5173 |
| API | http://localhost:3001 |

Vite proxies all /api requests to the Express server — no CORS configuration needed in development.
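That proxying is a one-line server option in the Vite config. A sketch of the relevant block (the repo's actual config file, which may be TypeScript, could differ):

```javascript
// apps/web/vite.config.js — dev-only proxy, assumed configuration
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: {
    // Forward /api/* requests to the local Express server
    proxy: {
      '/api': 'http://localhost:3001',
    },
  },
});
```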



Deploying to Vercel

npm install -g vercel
vercel

Set these environment variables in the Vercel dashboard → Settings → Environment Variables:

| Variable | Where |
| --- | --- |
| GROQ_API_KEY | API only |
| GROQ_MODEL | API only — llama3-70b-8192 |
| SUPABASE_URL | Both |
| SUPABASE_SERVICE_ROLE_KEY | API only |
| VITE_SUPABASE_URL | Web only |
| VITE_SUPABASE_ANON_KEY | Web only |
| CLIENT_ORIGIN | API only — your Vercel domain |


Knowledge Base — Adding Technologies

Append an entry to apps/api/lib/knowledgeBase.js. Tags drive retrieval accuracy — the more precise the tags, the better the system matches them against project descriptions.

{
  id:             'your_unique_id',
  name:           'Technology Name',
  layer:          'Frontend | Backend | Database | DevOps | Auth | AI/ML | Queue | Real-time | Monitoring',
  description:    'One-sentence summary of what it does.',
  best_for:       ['use case 1', 'use case 2'],
  avoid_when:     ['anti-pattern or team constraint'],
  scale_ceiling:  'startup | large | massive | enterprise | unlimited',
  learning_curve: 'low | medium | high',
  cost:           'free | free tier | pay per use | paid',
  maturity:       'experimental | beta | production',
  tags:           ['keyword1', 'keyword2', 'keyword3'],
}
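For example, a filled-in entry might look like the following. This is a hypothetical Redis entry for illustration — the values are not taken from the repo's actual knowledge base:

```javascript
// Hypothetical knowledgeBase.js entry — illustrative values only
const redisEntry = {
  id:             'redis',
  name:           'Redis',
  layer:          'Database',
  description:    'In-memory key-value store used for caching, sessions, and queues.',
  best_for:       ['caching hot reads', 'session storage', 'rate limiting'],
  avoid_when:     ['primary store for relational data'],
  scale_ceiling:  'massive',
  learning_curve: 'low',
  cost:           'free',
  maturity:       'production',
  tags:           ['cache', 'in-memory', 'key-value', 'sessions', 'pub-sub'],
};
```

Note how the tags lean toward words a user would actually type ("cache", "sessions") rather than internal jargon — that is what the retrieval step matches against.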


Architecture Decisions

Why keyword scoring instead of vector embeddings?

The API runs as a Vercel serverless function with a cold start budget under 500ms. Loading a sentence-transformer model would blow past that budget. The keyword TF-IDF approach is fast, transparent, and produces accurate results when combined with the LLM's contextual scoring in Stage 2. Accuracy comes from the LLM, not the retrieval — retrieval just narrows the candidate set.
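A minimal sketch of what that retrieval step could look like — the real apps/api/lib implementation surely differs; here scoring is plain keyword-overlap TF-IDF over entry tags, which is an assumption:

```javascript
// Keyword TF-IDF retrieval sketch — narrows the knowledge base to the
// top-N candidates before the LLM scores them. Illustrative, not the repo's code.
function retrieveTopN(entries, description, n = 20) {
  const words = description.toLowerCase().match(/[a-z0-9.+-]+/g) ?? [];

  // Document frequency: how many knowledge-base entries carry each tag
  const df = new Map();
  for (const e of entries) {
    for (const tag of new Set(e.tags)) {
      df.set(tag, (df.get(tag) ?? 0) + 1);
    }
  }

  const scored = entries.map((e) => {
    let score = 0;
    for (const tag of e.tags) {
      const tf = words.filter((w) => w === tag).length; // occurrences in the description
      if (tf > 0) {
        const idf = Math.log(1 + entries.length / df.get(tag)); // rarer tags weigh more
        score += tf * idf;
      }
    }
    return { entry: e, score };
  });

  return scored
    .sort((a, b) => b.score - a.score)
    .slice(0, n)
    .map((s) => s.entry);
}
```

Because the scoring is just counting, it adds essentially zero cold-start cost and is trivially debuggable: print the scores and you can see exactly why a candidate made the cut.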

Why three separate LLM calls instead of one?

Splitting extraction, scoring, and report generation into separate calls with distinct system prompts significantly improves output quality. A single monolithic prompt asking the LLM to do all three tasks produces lower-fidelity extraction and less precise scoring. The added latency (~5–8s extra) is worth it for measurably better recommendations.
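The orchestration is then just three awaited calls, each with its own system prompt. A sketch under stated assumptions — `callLLM(systemPrompt, userContent)` stands in for the Groq SDK call, and the function names and prompt texts are illustrative, not the repo's actual code:

```javascript
// Three-stage pipeline sketch — each stage is a separate LLM call with a
// distinct system prompt. Dependencies are injected to keep the sketch testable.
async function runPipeline(description, callLLM, retrieveTopN, knowledgeBase) {
  // Stage 1 — low-temperature extraction of structured signals
  const requirements = await callLLM(
    'Extract domain, scale, team size, and constraints as JSON.',
    description
  );

  // Stage 2 — retrieval narrows candidates, then the LLM scores per layer
  const candidates = retrieveTopN(knowledgeBase, description, 20);
  const scored = await callLLM(
    'Score these candidate technologies per layer with 0-100 confidence.',
    JSON.stringify({ requirements, candidates })
  );

  // Stage 3 — full markdown report with trade-offs and install commands
  const report = await callLLM(
    'Write a markdown recommendation report with install commands.',
    JSON.stringify({ requirements, scored })
  );

  return { requirements, scored, report };
}
```

Keeping the stages as plain sequential awaits also means each stage's output can be persisted and surfaced independently — which is how the UI's PipelineStatus component could show progress per stage.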

Why Supabase over a self-managed Postgres?

Supabase bundles auth, row-level security, a managed Postgres instance, and a JavaScript SDK — an integrated offering well suited to a small team. The service-role key is used server-side in the API, while the anon key is used client-side, with RLS enforcing access control at the database level.

Why Express instead of a framework like NestJS?

The API surface is small — two routes, one service, three middleware. NestJS would introduce significant structural overhead for no tangible benefit at this scale. Express also exports cleanly as a Vercel serverless function via export default app, a pattern NestJS does not support as directly.



Layers Covered

| Layer | Technologies |
| --- | --- |
| Frontend | Next.js · React + Vite · Vue 3 + Nuxt · SvelteKit · React Native + Expo · Flutter |
| Backend | FastAPI · Express.js · NestJS · Django + DRF · Go + Gin · Spring Boot |
| Database | PostgreSQL · MongoDB · Supabase · Redis · Pinecone / pgvector · ClickHouse · Firebase |
| Auth | Supabase Auth · NextAuth.js · Clerk · Auth0 |
| DevOps | Vercel · AWS · Docker + Kubernetes · Railway / Render |
| AI / ML | OpenAI API · LangChain / LlamaIndex · HuggingFace Transformers |
| Real-time | Socket.IO |
| Queue | BullMQ · Celery |
| Monitoring | Sentry · Datadog |


Contributing

Pull requests are welcome. For significant changes, open an issue first to discuss what you want to change.

# Fork the repo, then:
git checkout -b feature/your-feature-name
git commit -m "feat: add your feature"
git push origin feature/your-feature-name
# Open a pull request


License

Distributed under the MIT License. See LICENSE for details.



Built by Shubham Mohite


✴️ If this helped you, consider giving it a star. ✴️
