
# Panathos AI

AI-powered search and research assistant built with Next.js, Supabase, and Gemini 2.5.

## Features

- 🔍 **Multi-Stage Search**: 4 search modes, from instant answers to deep research
- 🤖 **AI-Powered**: Gemini 2.5 Flash via LiteLLM for intelligent responses
- 📄 **File Analysis**: Upload and analyze PDFs, documents, and more
- 🔐 **Secure Auth**: Email, Google, and GitHub authentication via Supabase
- 📊 **Usage Tracking**: Monitor your searches and API usage
- 🎨 **Beautiful UI**: Modern design with dark/light mode support

## Tech Stack

- **Frontend**: Next.js 14, React, TypeScript, Tailwind CSS
- **Backend**: Next.js API Routes, LiteLLM (Python FastAPI)
- **Database**: Supabase (PostgreSQL with pgvector)
- **Auth**: Supabase Auth
- **Search**: Tavily API
- **AI**: Google Gemini 2.5 Flash

## Getting Started

### Prerequisites

- Node.js 18+
- Python 3.11+
- Supabase account
- Tavily API key
- Google AI API key

### Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/panathos-ai.git
   cd panathos-ai
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Set up environment variables:

   ```bash
   cp .env.example .env.local
   ```

   Fill in your API keys and configuration.

4. Start the development server:

   ```bash
   npm run dev
   ```

5. (Optional) Start the LiteLLM backend:

   ```bash
   cd backend
   pip install -r requirements.txt
   uvicorn main:app --reload --port 8000
   ```
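With the optional backend running, the Next.js side talks to it over HTTP. A minimal sketch of such a call is below; the `/chat/completions` route and payload shape are assumptions based on LiteLLM's usual OpenAI-compatible convention, so check `backend/main.py` for the actual endpoints:

```typescript
// Sketch of calling the local LiteLLM backend from Next.js code.
// The route and payload shape are assumptions (OpenAI-compatible style),
// not taken from backend/main.py.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// In practice, read this from the LITELLM_API_URL environment variable.
const LITELLM_API_URL = "http://localhost:8000";

// Build an OpenAI-style chat completion request body.
function buildChatRequest(messages: ChatMessage[], model = "gemini-2.5-flash") {
  return { model, messages, stream: false };
}

async function askBackend(prompt: string): Promise<string> {
  const res = await fetch(`${LITELLM_API_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest([{ role: "user", content: prompt }])),
  });
  if (!res.ok) throw new Error(`LiteLLM backend error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```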

## Environment Variables

See `.env.example` for the required environment variables.
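For local development, a typical `.env.local` might look like the following. The variable names here are illustrative (Supabase's standard client names plus the providers listed above); `.env.example` is authoritative:

```bash
# Supabase (standard Next.js client variable names)
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key

# Search and AI providers
TAVILY_API_KEY=your-tavily-key
GOOGLE_API_KEY=your-google-ai-key

# Optional LiteLLM backend
LITELLM_API_URL=http://localhost:8000
```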

## Project Structure

```
├── app/                   # Next.js App Router
│   ├── api/               # API routes
│   ├── auth/              # Auth pages
│   ├── chat/              # Chat interface
│   ├── dashboard/         # Usage dashboard
│   ├── files/             # File management
│   ├── research/          # Research projects
│   └── settings/          # User settings
├── components/            # React components
│   ├── chat/              # Chat-specific components
│   ├── layout/            # Layout components
│   ├── providers/         # Context providers
│   └── ui/                # shadcn/ui components
├── lib/                   # Utility libraries
│   ├── context/           # Context management
│   ├── prompts/           # System prompts
│   ├── search/            # Tavily integration
│   ├── supabase/          # Supabase client
│   └── usage/             # Usage tracking
├── backend/               # LiteLLM Python service
└── docs/                  # Documentation
```

## Search Modes

| Mode    | Time | Stages | Use Case                    |
|---------|------|--------|-----------------------------|
| Instant | ~2s  | 1      | Quick facts and definitions |
| Fast    | ~10s | 2-3    | Simple research questions   |
| Medium  | ~30s | 4-6    | Thorough analysis           |
| Slow    | ~60s | 6-8    | Comprehensive research      |
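The table above can be modeled as a small config map plus a budget-based mode picker. The type names, fields, and selection logic here are illustrative sketches, not the project's actual code:

```typescript
// Illustrative model of the four search modes; names and fields are
// assumptions, not the project's real types.
type SearchMode = "instant" | "fast" | "medium" | "slow";

interface ModeConfig {
  targetSeconds: number;    // approximate wall-clock budget from the table
  stages: [number, number]; // min/max pipeline stages
  useCase: string;
}

const SEARCH_MODES: Record<SearchMode, ModeConfig> = {
  instant: { targetSeconds: 2,  stages: [1, 1], useCase: "Quick facts and definitions" },
  fast:    { targetSeconds: 10, stages: [2, 3], useCase: "Simple research questions" },
  medium:  { targetSeconds: 30, stages: [4, 6], useCase: "Thorough analysis" },
  slow:    { targetSeconds: 60, stages: [6, 8], useCase: "Comprehensive research" },
};

// Pick the deepest mode whose time budget fits the caller's budget (seconds).
function pickMode(budgetSeconds: number): SearchMode {
  const deepestFirst: SearchMode[] = ["slow", "medium", "fast", "instant"];
  for (const mode of deepestFirst) {
    if (SEARCH_MODES[mode].targetSeconds <= budgetSeconds) return mode;
  }
  return "instant"; // fall back to the cheapest mode
}
```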

## Deployment

### Vercel (Frontend)

  1. Push to GitHub
  2. Import project to Vercel
  3. Add environment variables
  4. Deploy

### Backend (Railway/Fly.io)

1. Deploy the `backend/` directory
2. Set environment variables
3. Update `LITELLM_API_URL` in Vercel

## Contributing

Contributions are welcome! Please read our contributing guidelines first.

## License

MIT License - see LICENSE for details.
