Powered by Google Gemini · Vertex AI · LangGraph · Next.js 16
Genesis (formerly Verbix AI) is a production-grade, multi-tenant AI content generation platform. It combines a LangGraph multi-agent orchestration graph with a rich suite of intelligence tools (trend analysis, SEO optimization, image generation, and social publishing), delivered through a modern Next.js frontend and a FastAPI backend.
Users can chat naturally with the AI to produce blog posts, social media content, and images in real time. The system automatically classifies intent, routes the request to the right AI agent, fetches live trend data, applies guardrails, and streams the result back to the user, all in one unified workflow.
| Feature | Description |
|---|---|
| Multi-Agent Graph | LangGraph-powered orchestration with specialized agents for planning, execution, review, and safety |
| Intent Classification | Automatically routes requests to the correct agent (Blog, Image, Social, etc.) |
| Blog Generation | SEO-optimized, long-form articles with readability scoring and keyword analysis |
| Image Generation | Vertex AI Imagen + DALL-E 3 fallback via a structured prompt pipeline |
| Trend Intelligence | Real-time data from Google (Serper), Twitter/X, and Reddit with Redis caching |
| Tone Engine | 5 configurable tones (Analytical, Opinionated, Critical, Investigative, Contrarian) |
| Guardrails | Multi-layer content safety (bias detection, harm filtering, factual grounding) |
| Context & Memory | Conversation history, RAG with pgvector embeddings, and checkpoint restoration |
| Social Publisher | OAuth-connected publishing to LinkedIn and Twitter/X |
| Developer Portal | API key management, usage analytics, and interactive documentation |
| Guest Mode | Full functionality for unauthenticated users with session-based storage |
| Multi-Tier Caching | Upstash Redis (L1) + PostgreSQL prompt cache (L2) with automatic invalidation |
```
┌───────────────────────────────────────────────────────────────────────┐
│                           CLIENT (Browser)                            │
│                Next.js 16 + React 19 + Tailwind CSS v4                │
│  ┌──────────┐ ┌──────────┐ ┌────────────────┐ ┌─────────────┐         │
│  │  /home   │ │  /chat   │ │  /auth (SSR)   │ │ /developer  │         │
│  │ Landing  │ │Interface │ │ Login/Signup   │ │   Portal    │         │
│  └──────────┘ └──────────┘ └────────────────┘ └─────────────┘         │
└──────────────────────────────┬────────────────────────────────────────┘
                               │ HTTPS / REST
┌──────────────────────────────▼────────────────────────────────────────┐
│                       FastAPI Backend (Python)                        │
│                      host: localhost:8000 / GCP                       │
│                                                                       │
│  ┌─────────────────────────────────────────────────────────────────┐  │
│  │                      API Layer (api/v1/*)                       │  │
│  │   /content /blog /agent /classifier /context /trends            │  │
│  │   /social /guardrails /embeddings /guest /advanced              │  │
│  └─────────────────────────────┬───────────────────────────────────┘  │
│                                │                                      │
│  ┌─────────────────────────────▼───────────────────────────────────┐  │
│  │             LangGraph Agent Orchestration (graph/)              │  │
│  │                                                                 │  │
│  │  ┌─────────┐   ┌───────────┐   ┌──────────┐   ┌────────────┐    │  │
│  │  │ Planner │──▶│Coordinator│──▶│ Executor │──▶│  Reviewer  │    │  │
│  │  └─────────┘   └───────────┘   └──────────┘   └─────┬──────┘    │  │
│  │                                                     │           │  │
│  │  ┌──────────────────────────────────────────────────▼────────┐  │  │
│  │  │   Safety Agent (bias check · harm filter · grounding)     │  │  │
│  │  └───────────────────────────────────────────────────────────┘  │  │
│  └─────────────────────────────┬───────────────────────────────────┘  │
│                                │                                      │
│  ┌─────────────────────────────▼───────────────────────────────────┐  │
│  │                      Intelligence Modules                       │  │
│  │  ┌────────────┐  ┌──────────────┐  ┌─────────────────────┐      │  │
│  │  │   Trend    │  │ Tone Enhancer│  │      SEO Suite      │      │  │
│  │  │  Analyzer  │  │  (5 modes)   │  │  keyword·meta·read  │      │  │
│  │  └────────────┘  └──────────────┘  └─────────────────────┘      │  │
│  │  ┌────────────┐  ┌──────────────┐  ┌─────────────────────┐      │  │
│  │  │   Image    │  │    Social    │  │    RAG / Vector     │      │  │
│  │  │  Prompter  │  │  Publisher   │  │     Embeddings      │      │  │
│  │  └────────────┘  └──────────────┘  └─────────────────────┘      │  │
│  └─────────────────────────────┬───────────────────────────────────┘  │
│                                │                                      │
│  ┌─────────────────────────────▼───────────────────────────────────┐  │
│  │                      Core Services (core/)                      │  │
│  │   Rate Limiter · Response Cache · Guardrails · LLM Factory      │  │
│  │   Token Counter · Logging · Supabase Client · Upstash Redis     │  │
│  └─────────────────────────────────────────────────────────────────┘  │
└────────────────────┬──────────────────────────┬───────────────────────┘
                     │                          │
         ┌───────────▼───────────┐   ┌──────────▼──────────┐
         │ Supabase (PostgreSQL) │   │   Upstash (Redis)   │
         │ + pgvector extension  │   │  L1 Response Cache  │
         │   Users · Sessions    │   │  Trend Data Cache   │
         │ Conversations · Cache │   │  Rate Limit State   │
         │ Embeddings · Metrics  │   └─────────────────────┘
         └───────────────────────┘
```
```
Genesis/                              # Monorepo root (pnpm workspaces + Turborepo)
├── apps/
│   ├── frontend/                     # Next.js 16 application
│   │   ├── app/
│   │   │   ├── page.tsx              # Root → redirects to /chat
│   │   │   ├── home/                 # Public landing page
│   │   │   ├── chat/                 # Main chat interface
│   │   │   ├── auth/                 # Login / signup / callback pages
│   │   │   ├── developer/            # API key mgmt + docs portal
│   │   │   ├── settings/             # User preferences
│   │   │   └── api/                  # Next.js API routes (proxies)
│   │   ├── components/               # Reusable UI components
│   │   │   ├── chat-interface.tsx
│   │   │   ├── sidebar-editor.tsx    # CKEditor 5 rich-text editor
│   │   │   ├── message-bubble.tsx
│   │   │   ├── tone-selector.tsx
│   │   │   └── ...
│   │   └── lib/                      # Utilities, hooks, API client
│   │       ├── api-client.ts
│   │       ├── supabase/             # Supabase SSR helpers
│   │       └── tone-options.ts
│   │
│   └── backend/                      # FastAPI application
│       ├── main.py                   # App entry point, CORS, router registration
│       ├── api/
│       │   └── v1/                   # All REST endpoints
│       │       ├── content.py        # Primary content generation endpoint
│       │       ├── blog.py           # Dedicated blog generation
│       │       ├── agent.py          # Agent graph invocation
│       │       ├── classifier.py     # Intent classification
│       │       ├── context.py        # Conversation context & checkpoints
│       │       ├── guest.py          # Guest session management
│       │       ├── social.py         # Social media OAuth + posting
│       │       ├── guardrails.py     # Content safety checks
│       │       ├── embeddings.py     # Vector embedding endpoints
│       │       └── advanced.py       # Power-user generation features
│       ├── agents/                   # LangGraph node implementations
│       │   ├── orchestrator.py       # Graph assembly & routing
│       │   ├── planner.py            # Task decomposition
│       │   ├── coordinator.py        # Agent coordination logic
│       │   ├── executor.py           # LLM call execution
│       │   ├── reviewer.py           # Output quality review
│       │   ├── blog_writer.py        # Blog-specific writer agent
│       │   └── safety.py             # Safety/guardrail agent
│       ├── graph/                    # LangGraph state & pipeline
│       ├── intelligence/             # AI intelligence modules
│       │   ├── trend_collector.py    # Multi-source trend fetching
│       │   ├── trend_analyzer.py     # Scoring & insight engine
│       │   ├── tone_enhancer.py      # Prompt tone injection
│       │   ├── image_prompter.py     # Image prompt builder
│       │   ├── image_collector.py    # Image artifact storage
│       │   ├── social_publisher.py   # LinkedIn/Twitter publisher
│       │   └── seo/                  # Full SEO optimization suite
│       │       ├── optimizer.py
│       │       ├── keyword_analyzer.py
│       │       ├── readability_analyzer.py
│       │       ├── metadata_generator.py
│       │       ├── hashtag_optimizer.py
│       │       └── suggestions.py
│       ├── core/                     # Shared services
│       │   ├── config.py             # Pydantic settings (env vars)
│       │   ├── guardrails.py         # Multi-layer content safety
│       │   ├── embeddings.py         # Sentence-transformer embeddings
│       │   ├── rag_service.py        # Retrieval-augmented generation
│       │   ├── rate_limiter.py       # Per-user/guest rate limiting
│       │   ├── response_cache.py     # Response caching logic
│       │   ├── llm_factory.py        # Model selection (Gemini/GPT/Groq)
│       │   ├── token_counter.py      # Token usage tracking
│       │   ├── chatgpt_cache.py      # Prompt-level cache (L2)
│       │   ├── vertex_ai.py          # Vertex AI client wrapper
│       │   └── upstash_redis.py      # Redis client singleton
│       └── database/                 # DB models & migrations (Alembic)
│
├── packages/                         # Shared packages (Turborepo)
├── turbo.json                        # Turborepo pipeline config
├── pnpm-workspace.yaml
└── package.json
```
```
User types a message in the chat interface
        │
        ▼
┌──────── Frontend ───────────────────────────────┐
│ 1. Captures prompt + conversation history       │
│ 2. Attaches tone, format, and safety options    │
│ 3. POSTs to /v1/content (or /v1/agent)          │
└─────────────────────────────────────────────────┘
        │
        ▼
┌──────── Backend: API Layer ─────────────────────┐
│ 1. Rate limit check (per user/guest/IP)         │
│ 2. Hash prompt → check L1 Redis cache           │
│    ├─ HIT  → return cached response             │
│    └─ MISS → continue                           │
│ 3. Check L2 PostgreSQL prompt cache             │
└─────────────────────────────────────────────────┘
        │
        ▼
┌──────── Intent Classifier ──────────────────────┐
│ Classifies intent into one of:                  │
│   blog | image | social | general | research    │
│ Extracts: topic, refined_query                  │
└─────────────────────────────────────────────────┘
        │
        ▼
┌──────── Agent Graph (LangGraph) ────────────────┐
│  Planner ──▶ Coordinator ──▶ Executor           │
│     │             │              │              │
│     │        Intelligence   LLM Factory         │
│     │        - Trend Data   (Gemini/GPT/Groq)   │
│     │        - Tone Prompt                      │
│     │        - SEO Context                      │
│     └──────────── Reviewer ◀────────────────────┤
│                      │                          │
│                Safety Agent                     │
│        (bias check · harm filter)               │
└─────────────────────────────────────────────────┘
        │
        ▼
┌──────── Post-Processing ────────────────────────┐
│ 1. Calculate metadata (word count, sections)    │
│ 2. Compute uniqueness score (embedding diff)    │
│ 3. Store response to DB + update cache          │
│ 4. Log token usage & metrics                    │
└─────────────────────────────────────────────────┘
        │
        ▼
Response streamed to frontend
with content + metadata badges
```
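The classification step in the flow above can be sketched as a simple routing function. This is a keyword heuristic for illustration only; the actual `api/v1/classifier.py` presumably uses an LLM, and the keyword lists here are assumptions:

```python
# Hypothetical keyword-based intent classifier, illustrating only the
# routing decision made by the Intent Classifier stage.
INTENT_KEYWORDS = {
    "blog": ["blog", "article", "post about", "write up"],
    "image": ["image", "picture", "illustration", "logo"],
    "social": ["tweet", "linkedin", "thread", "caption"],
    "research": ["research", "analyze", "compare"],
}

def classify_intent(prompt: str) -> str:
    """Return one of blog | image | social | research | general."""
    text = prompt.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "general"

print(classify_intent("Write a blog article about Rust"))  # blog
print(classify_intent("Generate an image of a sunset"))    # image
print(classify_intent("What's the weather?"))              # general
```

Whatever the real implementation, the contract is the same: one of five intent labels plus an extracted topic and refined query, which then determine the downstream agent route.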
The backend uses LangGraph to assemble a directed graph of specialized agents. Each node handles a distinct phase of content creation:
| Agent | File | Responsibility |
|---|---|---|
| Planner | `agents/planner.py` | Decomposes user intent into sub-tasks |
| Coordinator | `agents/coordinator.py` | Routes sub-tasks to appropriate executors |
| Executor | `agents/executor.py` | Performs the actual LLM generation call |
| Blog Writer | `agents/blog_writer.py` | Specialized node for long-form blog drafts |
| Reviewer | `agents/reviewer.py` | Scores output quality; triggers re-generation if below threshold |
| Safety | `agents/safety.py` | Applies guardrails; blocks or modifies harmful output |
| Orchestrator | `agents/orchestrator.py` | Assembles the graph and manages state transitions |
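The plan → execute → review cycle can be sketched in plain Python. The function bodies below are stand-ins (the real agents make LLM calls via the LLM Factory), and the scoring and retry values are assumptions; only the control flow mirrors the table above:

```python
# Minimal sketch of the agent pipeline: plan, execute each sub-task,
# and re-generate when the reviewer's score falls below a threshold.
def plan(prompt: str) -> list[str]:
    # Planner: decompose the request into sub-tasks.
    return [f"outline: {prompt}", f"draft: {prompt}"]

def execute(task: str) -> str:
    # Executor: stand-in for the actual LLM generation call.
    return f"[generated content for '{task}']"

def review(output: str) -> float:
    # Reviewer: return a quality score in [0, 1] (toy heuristic here).
    return 0.9 if "generated" in output else 0.2

def run_graph(prompt: str, threshold: float = 0.7, max_retries: int = 2) -> list[str]:
    results = []
    for task in plan(prompt):
        for _ in range(max_retries + 1):
            output = execute(task)
            if review(output) >= threshold:
                break  # quality gate passed; otherwise re-generate
        results.append(output)
    return results

print(run_graph("AI in healthcare"))
```

In the real system this loop is a LangGraph state machine rather than nested `for` loops, which is what allows checkpointing and restoration mid-run.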
All endpoints are prefixed with the base URL of the backend server.
| Method | Path | Description |
|---|---|---|
| POST | `/v1/content/generate` | Primary generation endpoint (text, tones, formats) |
| POST | `/v1/blog/generate` | Dedicated long-form blog generation |
| POST | `/v1/agent/invoke` | Direct LangGraph agent graph invocation |
| POST | `/v1/advanced/generate` | Advanced generation with fine-grained parameters |
| Method | Path | Description |
|---|---|---|
| POST | `/v1/trends/analyze` | Analyze keywords across Google, Twitter, Reddit |
| POST | `/v1/trends/generate-context` | Generate trend context for the AI writer |
| GET | `/v1/trends/top` | Fetch current top trending topics |
| POST | `/v1/classifier/classify` | Classify prompt intent and extract topic |
| Method | Path | Description |
|---|---|---|
| POST | `/v1/context/save` | Save conversation checkpoint |
| GET | `/v1/context/restore/{session_id}` | Restore context from a checkpoint |
| GET | `/v1/context/history` | Retrieve paginated conversation history |
| POST | `/v1/embeddings/store` | Store content as a vector embedding |
| POST | `/v1/embeddings/search` | Semantic search over stored embeddings |
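The embeddings search endpoint ranks stored vectors by similarity to a query vector. A toy version with hand-made three-dimensional vectors standing in for pgvector embeddings (the store, vectors, and `search` helper are all illustrative):

```python
import math

# Toy semantic search: rank stored embeddings by cosine similarity to a
# query vector. Real embeddings are high-dimensional and live in pgvector.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

store = {
    "doc-1": [0.9, 0.1, 0.0],  # hypothetical stored content vectors
    "doc-2": [0.1, 0.8, 0.1],
}

def search(query_vec: list[float], top_k: int = 1) -> list[str]:
    ranked = sorted(store.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

print(search([1.0, 0.0, 0.0]))  # ['doc-1']
```

In production, pgvector performs this ranking inside PostgreSQL (e.g. with an approximate index), so the application only sends the query vector and a `top_k` limit.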
| Method | Path | Description |
|---|---|---|
| GET | `/v1/social/auth/linkedin` | OAuth flow for LinkedIn |
| GET | `/v1/social/auth/twitter` | OAuth flow for Twitter/X |
| POST | `/v1/social/publish/linkedin` | Publish content to LinkedIn |
| POST | `/v1/social/publish/twitter` | Post a tweet or thread |
| Method | Path | Description |
|---|---|---|
| POST | `/v1/guardrails/check` | Run multi-layer safety analysis on content |
| Method | Path | Description |
|---|---|---|
| POST | `/v1/guest/session` | Create a guest session |
| GET | `/v1/guest/history/{guest_id}` | Fetch guest conversation history |
| POST | `/v1/guest/migrate` | Migrate guest data to authenticated user |
| Method | Path | Description |
|---|---|---|
| GET | `/health` | Basic liveness check |
| GET | `/v1/health/redis` | Redis connectivity check |
The Tone Enhancer injects personality into every generation. Five tones are available:
| Tone | System Persona | Best For |
|---|---|---|
| `analytical` | Thoughtful analyst & critic | Deep dives, technical research |
| `opinionated` | Bold commentator with strong views | Opinion pieces, editorials |
| `critical` | Discerning critic | Reviews, evaluations |
| `investigative` | Investigative journalist | Exposés, investigative pieces |
| `contrarian` | Thoughtful contrarian | Counter-narratives, debate prep |
Each tone can be augmented with optional enrichment sections: Critical Analysis, Alternative Perspectives, Real-World Implications, and Questions to Consider.
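A sketch of how tone injection might work: each tone maps to a system persona that is prepended to the generation prompt. The personas come from the table above; the exact prompt layout and the enrichment-section mechanism are assumptions, not the `tone_enhancer.py` implementation:

```python
# Map each tone to its system persona (from the tone table).
TONE_PERSONAS = {
    "analytical": "You are a thoughtful analyst and critic.",
    "opinionated": "You are a bold commentator with strong views.",
    "critical": "You are a discerning critic.",
    "investigative": "You are an investigative journalist.",
    "contrarian": "You are a thoughtful contrarian.",
}

ENRICHMENT_SECTIONS = [
    "Critical Analysis", "Alternative Perspectives",
    "Real-World Implications", "Questions to Consider",
]

def build_prompt(user_prompt: str, tone: str, enrichments=()) -> str:
    """Prepend the tone persona and append requested enrichment sections."""
    parts = [TONE_PERSONAS[tone], user_prompt]
    for section in enrichments:
        if section in ENRICHMENT_SECTIONS:
            parts.append(f"Include a '{section}' section.")
    return "\n\n".join(parts)

print(build_prompt("Write about remote work.", "contrarian",
                   ["Alternative Perspectives"]))
```

Keeping tone as prompt-level configuration (rather than separate models) is what makes all five tones available to every agent in the graph.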
```
User Prompt
    │
    ▼
Keyword Extraction
    │
    ├──▶ Google (Serper API) ──┐
    ├──▶ Twitter/X API ────────┼──▶ Aggregator ──▶ Redis Cache (30 min TTL)
    └──▶ Reddit API ───────────┘                        │
                                                        ▼
                                                 Trend Analyzer
                                   ┌───────────────────────────────────┐
                                   │ Scoring (0–100):                  │
                                   │  • Keyword Match       40%        │
                                   │  • Source Credibility  20%        │
                                   │  • Engagement (log)    20%        │
                                   │  • Recency             10%        │
                                   │  • Content Quality     10%        │
                                   └───────────────────────────────────┘
                                                        │
                                                        ▼
                                              AI Content Context
                                (target audience · trending angles · keywords)
```
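The weighted score in the diagram above can be written out directly. The weights match the breakdown shown; the component-normalization functions (including the log-scaled engagement) are illustrative assumptions:

```python
import math

# Trend score (0-100) as a weighted sum of normalized components.
# Weights are from the scoring breakdown above.
WEIGHTS = {
    "keyword_match": 0.40,
    "source_credibility": 0.20,
    "engagement": 0.20,
    "recency": 0.10,
    "content_quality": 0.10,
}

def trend_score(components: dict[str, float]) -> float:
    """Each component is expected in [0, 1]; the result is 0-100."""
    return 100 * sum(WEIGHTS[k] * components.get(k, 0.0) for k in WEIGHTS)

def engagement_component(raw: int, cap: int = 1_000_000) -> float:
    # Log scaling ("Engagement (log)") keeps viral outliers from
    # dominating the score; the cap value here is an assumption.
    return min(math.log10(1 + raw) / math.log10(1 + cap), 1.0)

score = trend_score({
    "keyword_match": 0.8,
    "source_credibility": 0.9,
    "engagement": engagement_component(50_000),
    "recency": 1.0,
    "content_quality": 0.7,
})
print(round(score, 1))  # ≈ 82.7
```

Because the weights sum to 1.0, a topic that maxes out every component scores exactly 100, which keeps the scale interpretable across sources.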
The `intelligence/seo/` module provides a complete pipeline for content optimization:

- `keyword_analyzer.py` – Extracts primary and secondary keywords, scores density
- `readability_analyzer.py` – Flesch-Kincaid, sentence length, complexity metrics
- `metadata_generator.py` – Auto-generates title tags, meta descriptions, Open Graph data
- `hashtag_optimizer.py` – Suggests platform-specific hashtags for social posts
- `suggestions.py` – End-to-end improvement recommendations
- `optimizer.py` – Orchestrates the full SEO pass over generated content
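In the spirit of the keyword-density pass, here is a toy version. The stopword list and density formula are assumptions for illustration; the real `keyword_analyzer.py` heuristics are not documented here:

```python
import re
from collections import Counter

# Toy keyword-density analysis: count content words and report each
# keyword's share of total words as a percentage.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "for", "on"}

def keyword_density(text: str, top_n: int = 3) -> list[tuple[str, int, float]]:
    """Return (word, count, density %) for the top_n content words."""
    words = re.findall(r"[a-z']+", text.lower())
    content_words = [w for w in words if w not in STOPWORDS]
    total = len(words)
    return [(w, c, round(100 * c / total, 1))
            for w, c in Counter(content_words).most_common(top_n)]

sample = "Rust is fast. Rust is safe. Teams choose Rust for systems work."
print(keyword_density(sample))
```

Running this on the sample puts `rust` first at 25% density, the kind of signal the optimizer can flag as keyword stuffing or under-use against a target range.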
Every piece of generated content passes through a multi-layer safety pipeline:
- Bias Detection – Scans for demographic, political, and factual biases
- Harm Filtering – Blocks NSFW, violent, or dangerous content
- Factual Grounding – Cross-references claims with trusted sources where possible
- Rate Limiting – Per-user and per-IP limits enforced via Redis
| Layer | Technology | TTL | Scope |
|---|---|---|---|
| L1 | Upstash Redis | 30 min (trends) / configurable | Trend data, rate limit counters |
| L2 | PostgreSQL (prompt_cache table) | Persistent | Identical prompt responses |
| Embeddings | pgvector | Persistent | Semantic search index |
Cache keys are generated from sorted keyword lists and prompt hashes to maximize hit rates across similar queries.
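A sketch of that key scheme: normalize and sort the keywords, normalize the prompt, then hash, so equivalent queries collide onto the same cache entry. The key prefix, hashed fields, and digest length are assumptions:

```python
import hashlib
import json

# Cache-key generation: sorted keywords + normalized prompt, hashed so
# that equivalent queries map to the same key.
def cache_key(prompt: str, keywords: list[str], tone: str = "analytical") -> str:
    normalized = {
        "keywords": sorted(k.lower() for k in keywords),
        "prompt": prompt.strip().lower(),
        "tone": tone,
    }
    digest = hashlib.sha256(
        json.dumps(normalized, sort_keys=True).encode()
    ).hexdigest()
    return f"genesis:cache:{digest[:16]}"  # hypothetical key format

k1 = cache_key("Write about AI trends", ["ai", "trends"])
k2 = cache_key("write about AI trends ", ["Trends", "AI"])
print(k1 == k2)  # True: keyword order, case, and whitespace are normalized
```

Including the tone in the hashed payload matters: the same prompt generated in `contrarian` and `analytical` tones must not share a cache entry.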
| Requirement | Version |
|---|---|
| Node.js | 18+ |
| pnpm | 8+ |
| Python | 3.10+ |
| Supabase project | – |
| Google Cloud project (GCP) | For Vertex AI |
```shell
git clone https://github.com/yourusername/genesis.git
cd genesis
```

Backend setup:

```shell
cd apps/backend

# Create and activate virtual environment
python -m venv .venv
.venv\Scripts\activate        # Windows
# source .venv/bin/activate   # macOS/Linux

# Install dependencies
pip install -r requirements.txt

# Configure environment variables
cp .env.example .env
# Edit .env with your keys (see Environment Variables section below)

# Start the development server
uvicorn main:app --reload --port 8000
```

Frontend setup:

```shell
cd apps/frontend

# Install dependencies
pnpm install

# Configure environment variables
cp .env.example .env.local
# Add your Supabase URL and Anon Key

# Start the development server
pnpm dev
```

Frontend runs on http://localhost:3000 · Backend runs on http://localhost:8000

```shell
# From the project root
pnpm install
pnpm dev   # Starts both frontend and backend concurrently
```

Backend (`apps/backend/.env`):

```shell
# ── AI Models ───────────────────────────────────────────────
OPENAI_API_KEY=sk-...
GROQ_API_KEY=gsk_...

# ── Google Cloud / Vertex AI ────────────────────────────────
GCP_PROJECT_ID=your-gcp-project-id
GOOGLE_APPLICATION_CREDENTIALS=./your-service-account.json

# ── Supabase ────────────────────────────────────────────────
SUPABASE_URL=https://xxxx.supabase.co
SUPABASE_SERVICE_ROLE_KEY=eyJ...

# ── Upstash Redis ───────────────────────────────────────────
UPSTASH_REDIS_REST_URL=https://xxxx.upstash.io
UPSTASH_REDIS_REST_TOKEN=AXxx...

# ── Trend Intelligence ──────────────────────────────────────
SERPER_API_KEY=your_serper_key    # Google Trends via Serper.dev
TWITTER_BEARER_TOKEN=AAAA...      # Twitter API v2
REDDIT_CLIENT_ID=your_reddit_client_id
REDDIT_CLIENT_SECRET=your_reddit_secret
REDDIT_USER_AGENT=Genesis/1.0

# ── Social Publishing ───────────────────────────────────────
LINKEDIN_CLIENT_ID=...
LINKEDIN_CLIENT_SECRET=...
TWITTER_CLIENT_ID=...
TWITTER_CLIENT_SECRET=...

# ── CORS ────────────────────────────────────────────────────
ALLOWED_ORIGINS=https://your-frontend-domain.vercel.app
```

Frontend (`apps/frontend/.env.local`):

```shell
NEXT_PUBLIC_SUPABASE_URL=https://xxxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJ...
NEXT_PUBLIC_API_URL=http://localhost:8000
```

| Library | Version | Purpose |
|---|---|---|
| Next.js | 16 | React framework (App Router + RSC) |
| React | 19 | UI rendering |
| Tailwind CSS | v4 | Utility-first styling |
| Shadcn/UI + Radix UI | latest | Accessible component library |
| CKEditor 5 | 47 | Rich-text sidebar editor |
| Supabase JS | 2 | Auth + realtime DB client |
| react-markdown | 10 | Markdown rendering in chat |
| Sonner | 2 | Toast notifications |
| Lucide React | latest | Icon system |
| Library | Version | Purpose |
|---|---|---|
| FastAPI | 0.124+ | Async REST API framework |
| LangChain | 0.3+ | LLM abstraction layer |
| LangGraph | 0.2+ | Multi-agent orchestration graph |
| Google Generative AI | latest | Gemini model access |
| Vertex AI | 1.35+ | Imagen image generation |
| LangChain OpenAI | 0.1+ | GPT-4o integration |
| LangChain Groq | 0.1+ | Llama 3.3 via Groq |
| SQLAlchemy + Alembic | 2.0+ | ORM and DB migrations |
| pgvector | 0.2+ | Vector similarity search |
| Supabase Python | 2.4+ | Supabase DB/Auth client |
| sentence-transformers | 3.0+ | Local text embeddings |
| Upstash Redis | latest | Serverless Redis caching |
| PRAW | 7.7+ | Reddit API wrapper |
| Tweepy | 4.14+ | Twitter API client |
| textstat | 0.7+ | Readability scoring |
| httpx | 0.27+ | Async HTTP for trend fetching |
| BeautifulSoup4 | 4.12+ | Web scraping |
The frontend is configured for zero-config Vercel deployment via `vercel.json`.

```shell
vercel --prod
```

A `Dockerfile` and `deploy_gcp.ps1` script are included for Cloud Run deployment.

```shell
# Build and push Docker image
gcloud builds submit --tag gcr.io/YOUR_PROJECT_ID/genesis-backend

# Deploy to Cloud Run
gcloud run deploy genesis-backend \
  --image gcr.io/YOUR_PROJECT_ID/genesis-backend \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```

See `apps/backend/DEPLOYMENT.md` for detailed GCP instructions.
- Fork the project
- Create a feature branch: `git checkout -b feature/YourFeature`
- Commit your changes: `git commit -m 'feat: Add YourFeature'`
- Push to the branch: `git push origin feature/YourFeature`
- Open a pull request
Distributed under the MIT License. See LICENSE for details.