Tendo33/IdeaGo

IdeaGo

IdeaGo Banner

AI-powered competitor research engine for startup ideas.


简体中文 · English




What IdeaGo Does

IdeaGo turns one natural-language startup idea into a structured competitor report with:

  • Market summary and recommendation (go / caution / no_go)
  • Competitor list with traceable source links
  • Differentiation opportunities
  • Confidence, evidence, and runtime cost transparency

It is designed for fast founder validation: start with one sentence, get an auditable research report.


Key Features

  • End-to-end pipeline: intent parsing → source search → extraction → aggregation → report generation
  • Multi-source retrieval: GitHub, Tavily Web Search, Hacker News, App Store, Product Hunt
  • Resilient LLM layer: retry, JSON parse recovery, endpoint failover
  • Strict link grounding: extracted links are filtered against fetched source URLs
  • Graceful degradation: extraction/aggregation failures still return usable output
  • Real-time UX: SSE streaming events with reconnect and cancellation
  • Transparent reports: confidence/evidence/cost/failover metadata in every report
  • Performance-focused UI: lazy routes, virtualized competitor lists, compare panel, export & print
  • Caching + runtime state: file cache (TTL) + LangGraph SQLite checkpoints + status files
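The file cache with TTL mentioned above can be sketched as follows. This is an illustrative stdlib-only sketch of the behavior (one JSON file per key, entries expiring after a TTL), not the project's actual cache module; class and method names are assumptions.

```python
import hashlib
import json
import time
from pathlib import Path


class FileCache:
    """Minimal TTL file cache: one JSON file per cache key."""

    def __init__(self, cache_dir: str, ttl_hours: float = 24.0) -> None:
        self.dir = Path(cache_dir)
        self.dir.mkdir(parents=True, exist_ok=True)
        self.ttl = ttl_hours * 3600

    def _path(self, key: str) -> Path:
        digest = hashlib.sha256(key.encode()).hexdigest()[:16]
        return self.dir / f"{digest}.json"

    def get(self, key: str):
        path = self._path(key)
        if not path.exists():
            return None
        payload = json.loads(path.read_text())
        if time.time() - payload["stored_at"] > self.ttl:
            path.unlink()  # expired: drop the stale entry
            return None
        return payload["value"]

    def set(self, key: str, value) -> None:
        self._path(key).write_text(
            json.dumps({"stored_at": time.time(), "value": value})
        )
```

Hashing the key keeps filenames filesystem-safe regardless of what the normalized query contains.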

Architecture

```mermaid
flowchart TD
    A["User Idea"] --> B["POST /api/v1/analyze"]
    B --> C["LangGraph Engine"]
    C --> D["parse_intent"]
    D --> E["cache_lookup"]
    E -->|hit| F["report_ready"]
    E -->|miss| G["fetch_sources"]
    G --> H["extract_map"]
    H --> I["aggregate"]
    I --> J["assemble_report"]
    J --> K["persist_report"]
    K --> F
    F --> L["GET /api/v1/reports/{id}"]
    C --> M["SSE /api/v1/reports/{id}/stream"]
```
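The stage flow above can be sketched in plain Python. The real engine wires these stages as LangGraph nodes; here the stage names mirror the diagram and the bodies are illustrative stubs, not the project's actual functions.

```python
def parse_intent(idea: str) -> dict:
    # Stub: the real node calls the LLM to structure the query.
    return {"normalized_query": idea.strip().lower()}

def fetch_sources(intent: dict) -> list[dict]:
    # Stub: the real node queries GitHub / Tavily / HN / App Store / Product Hunt.
    return [{"title": "Example Competitor", "url": "https://example.com"}]

def extract(item: dict) -> dict:
    return {"name": item["title"], "source": item["url"]}

def aggregate(items: list[dict]) -> dict:
    return {"competitors": items}

def assemble_report(intent: dict, agg: dict) -> dict:
    return {"query": intent["normalized_query"], **agg}


def run_pipeline(idea: str, cache: dict) -> dict:
    intent = parse_intent(idea)
    key = intent["normalized_query"]
    if key in cache:                    # cache_lookup hit -> report_ready
        return cache[key]
    sources = fetch_sources(intent)    # fetch_sources (cache miss path)
    extracted = [extract(s) for s in sources]  # extract_map
    report = assemble_report(intent, aggregate(extracted))
    cache[key] = report                # persist_report
    return report
```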

Runtime notes

  • POST /analyze starts background execution and returns report_id immediately.
  • Frontend subscribes to SSE to render stage-by-stage progress.
  • In-flight duplicate requests for the same normalized query are deduplicated.
  • Built-in in-memory rate limiter for analyze endpoint: 10 requests / 60s (per IP/session key).

Tech Stack

Backend

  • Python 3.10+
  • FastAPI + Uvicorn
  • LangGraph state machine pipeline
  • LangChain OpenAI client
  • Pydantic v2 / pydantic-settings
  • File cache + SQLite checkpoint store

Frontend

  • React 19 + TypeScript + Vite 7
  • Tailwind CSS 4
  • React Router 7
  • i18next (zh/en)
  • Framer Motion + Recharts

Quick Start

1) Prerequisites

  • Python 3.10+
  • uv
  • Node.js 20+

2) Install dependencies

# Backend
uv sync --all-extras

# Frontend
npm --prefix frontend install

3) Configure environment

cp .env.example .env

Minimum recommended setup:

  • Required: OPENAI_API_KEY
  • Recommended: TAVILY_API_KEY

4) Development mode (hot reload)

Terminal 1:

uv run uvicorn ideago.api.app:create_app --factory --reload --port 8000

Terminal 2:

npm --prefix frontend run dev

Open the Vite dev server URL printed in Terminal 2 (the API serves on http://localhost:8000).

5) Single-process local run (serve built frontend from backend)

npm --prefix frontend run build
uv run python -m ideago

Open: http://localhost:8000

6) Docker

cp .env.example .env
docker compose up --build -d

Open: http://localhost:8000


API Overview

Base path: /api/v1

| Method | Path | Description |
|--------|------|-------------|
| POST | `/analyze` | Start analysis, return `report_id` |
| GET | `/health` | Service health + source availability |
| GET | `/reports` | List reports (`limit`, `offset`) |
| GET | `/reports/{report_id}` | Get report (202 while processing) |
| GET | `/reports/{report_id}/status` | Runtime status (processing/failed/cancelled/complete/not_found) |
| GET | `/reports/{report_id}/stream` | SSE progress stream |
| GET | `/reports/{report_id}/export` | Export markdown |
| DELETE | `/reports/{report_id}` | Delete report |
| DELETE | `/reports/{report_id}/cancel` | Cancel active analysis |

SSE event types

intent_started, intent_parsed, source_started, source_completed, source_failed, extraction_started, extraction_completed, aggregation_started, aggregation_completed, report_ready, cancelled, error

Example

# Start analysis
curl -X POST http://localhost:8000/api/v1/analyze \
  -H "Content-Type: application/json" \
  -d '{"query":"An AI assistant for indie game analytics"}'

# Stream events
curl -N http://localhost:8000/api/v1/reports/<report_id>/stream

# Fetch report
curl http://localhost:8000/api/v1/reports/<report_id>
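Each SSE event arrives as an `event:` line followed by `data:` lines, terminated by a blank line. A minimal stdlib parser for such a payload looks like the sketch below (illustrative and offline; a real client would read the stream incrementally with `httpx` or the browser's `EventSource`, and the JSON-encoded data is an assumption).

```python
import json


def parse_sse(raw: str) -> list[tuple[str, dict]]:
    """Parse an SSE payload into (event_type, data) pairs; data assumed JSON."""
    events = []
    event_type, data_lines = "message", []
    for line in raw.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:  # blank line terminates one event
            events.append((event_type, json.loads("\n".join(data_lines))))
            event_type, data_lines = "message", []
    return events
```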

Report Model

Each report includes:

  • Core analysis: competitors, market summary, recommendation, differentiation angles
  • Confidence: sample size, source coverage, source success rate, confidence score, freshness hint
  • Evidence: top evidence and structured evidence items
  • Cost telemetry: LLM calls/retries/failovers, token usage, pipeline latency
  • Fault-tolerance metadata: endpoint fallback usage and last error class

This makes conclusions inspectable rather than a black box.


Configuration

See full defaults in .env.example and schema in src/ideago/config/settings.py.

| Variable | Required | Default | Purpose |
|----------|----------|---------|---------|
| `OPENAI_API_KEY` | Yes | `""` | LLM access key |
| `OPENAI_MODEL` | No | `gpt-4o-mini` | Primary model |
| `OPENAI_BASE_URL` | No | `""` | OpenAI-compatible endpoint |
| `OPENAI_FALLBACK_ENDPOINTS` | No | `""` | JSON array of fallback endpoints |
| `OPENAI_TIMEOUT_SECONDS` | No | `60` | LLM timeout |
| `LANGGRAPH_MAX_RETRIES` | No | `2` | Retry budget |
| `LANGGRAPH_JSON_PARSE_MAX_RETRIES` | No | `1` | JSON recovery retries |
| `TAVILY_API_KEY` | Recommended | `""` | Enable Tavily source |
| `GITHUB_TOKEN` | No | `""` | Higher GitHub rate limit |
| `PRODUCTHUNT_DEV_TOKEN` | No | `""` | Enable Product Hunt source |
| `APPSTORE_COUNTRY` | No | `us` | App Store country code |
| `PRODUCTHUNT_POSTED_AFTER_DAYS` | No | `730` | Product Hunt freshness window (days) |
| `MAX_RESULTS_PER_SOURCE` | No | `10` | Raw results per source |
| `SOURCE_TIMEOUT_SECONDS` | No | `30` | Source timeout |
| `SOURCE_QUERY_CONCURRENCY` | No | `2` | Per-source concurrency |
| `EXTRACTION_TIMEOUT_SECONDS` | No | `60` | LLM extraction timeout |
| `CACHE_DIR` | No | `.cache/ideago` | Cache directory |
| `CACHE_TTL_HOURS` | No | `24` | Cache TTL |
| `LANGGRAPH_CHECKPOINT_DB_PATH` | No | `.cache/ideago/langgraph-checkpoints.db` | LangGraph checkpoint DB |
| `CORS_ALLOW_ORIGINS` | No | `*` | CORS origins |
| `HOST` / `PORT` | No | `0.0.0.0` / `8000` | Server bind address |
| `VITE_API_BASE_URL` | No | `""` | Optional frontend API prefix |
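`OPENAI_FALLBACK_ENDPOINTS` expects a JSON array, which is easy to get wrong in a `.env` file. The sketch below shows one way such a value can be read and validated; it is illustrative only (the function name is an assumption), and the authoritative schema lives in `src/ideago/config/settings.py`.

```python
import json
import os


def load_fallback_endpoints(env=None) -> list[str]:
    """Read OPENAI_FALLBACK_ENDPOINTS as a JSON array of base-URL strings.

    An empty or unset value means "no fallbacks"; any other value must
    parse to a list of strings, otherwise configuration fails loudly.
    """
    env = os.environ if env is None else env
    raw = env.get("OPENAI_FALLBACK_ENDPOINTS", "").strip()
    if not raw:
        return []
    endpoints = json.loads(raw)
    if not isinstance(endpoints, list) or not all(
        isinstance(e, str) for e in endpoints
    ):
        raise ValueError(
            "OPENAI_FALLBACK_ENDPOINTS must be a JSON array of strings"
        )
    return endpoints
```

For example, `OPENAI_FALLBACK_ENDPOINTS='["https://backup.example/v1"]'` yields one fallback endpoint; failing fast on malformed values beats silently running with no failover.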

Project Structure

.
├── src/ideago
│   ├── api/             # FastAPI app, routes, schemas, dependencies
│   ├── pipeline/        # LangGraph engine, nodes, events, state
│   ├── llm/             # Chat model client + prompt templates
│   ├── sources/         # Source plugins (GitHub/Tavily/HN/AppStore/Product Hunt)
│   ├── cache/           # File-based report/status cache
│   ├── models/          # Pydantic domain models
│   ├── config/          # Runtime settings
│   └── observability/   # Logging config
├── frontend/            # React + TypeScript UI
├── tests/               # Pytest suite
├── scripts/             # Release/dev automation scripts
├── doc/                 # Engineering docs
└── docs/                # Plans and design assets

Development & Quality

Run relevant checks before submitting:

uv run ruff check src tests scripts
uv run ruff format --check src tests scripts
uv run mypy src
uv run pytest
npm --prefix frontend run lint
npm --prefix frontend run typecheck
npm --prefix frontend run test
npm --prefix frontend run build

Roadmap & Docs


Contributing

Issues and pull requests are welcome. Please read CONTRIBUTING.md before starting.


License

MIT License. See LICENSE.
