AI-powered layered explanations for any topic. Built with FastAPI (Python) and React (TypeScript).
This repository is the deprecated KnowBear v1 codebase, kept for historical reference.
- Active repository (v2): https://github.com/voidcommit-afk/know-bear
- This repo (v1): legacy snapshot, no active feature development
- Layered Explanations: ELI5 to ELI15, meme-style, technical deep dives
- Mode-Based Routing: `fast` uses a single low-latency model, `ensemble` runs multi-model generation with judge-based selection
- Multi-Model Ensemble: Parallel generation with judge-based voting
- Redis Caching: Smart caching for fast repeat queries
- Export Options: Download as .txt, .json, or .pdf
- Dark Theme UI: Minimalist, space-themed design
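The Redis caching behavior above can be sketched as key derivation plus a get-or-generate helper. This is illustrative only: the real key format, normalization, and TTL live in `api/services/` and are not documented in this README, so `cache_key` and `get_or_generate` are hypothetical names.

```python
import hashlib
import json

def cache_key(topic: str, mode: str) -> str:
    """Derive a stable cache key from the normalized query (illustrative scheme)."""
    payload = json.dumps({"topic": topic.strip().lower(), "mode": mode}, sort_keys=True)
    return "knowbear:explain:" + hashlib.sha256(payload.encode()).hexdigest()[:16]

def get_or_generate(store, topic: str, mode: str, generate):
    """Return a cached explanation, generating and caching on a miss.

    `store` is any mapping-like client; a dict works here, and redis-py's
    get/set would slot in the same way (with a TTL added on set).
    """
    key = cache_key(topic, mode)
    cached = store.get(key)
    if cached is not None:
        return cached
    result = generate(topic, mode)
    store[key] = result
    return result
```

Because the key is derived from the normalized topic, repeat queries that differ only in whitespace or case hit the same cache entry.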
- `fast` mode:
  - Uses one Groq model: `llama-3.1-8b-instant`
  - Lower latency path (no judge)
- `ensemble` mode:
  - Runs these Groq models in parallel: `llama-3.1-8b-instant`, `llama-3.3-70b-versatile`, `llama-3.1-70b-versatile`, `deepseek-r1-distill-llama-70b`, `mixtral-8x7b-32768`
  - Uses `llama-3.3-70b-versatile` as a judge to select the best response
  - Server enforces premium gating for `ensemble`
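The ensemble flow can be sketched as parallel candidate generation followed by a judge pass. The Groq calls are stubbed out here: `call_model` and `judge` are hypothetical callables standing in for the repo's actual inference service, not its real function names.

```python
import asyncio

ENSEMBLE_MODELS = [
    "llama-3.1-8b-instant",
    "llama-3.3-70b-versatile",
    "llama-3.1-70b-versatile",
    "deepseek-r1-distill-llama-70b",
    "mixtral-8x7b-32768",
]
JUDGE_MODEL = "llama-3.3-70b-versatile"

async def run_ensemble(prompt, call_model, judge):
    """Generate with every model concurrently, drop failures, judge the rest."""
    tasks = [call_model(m, prompt) for m in ENSEMBLE_MODELS]
    # return_exceptions=True keeps one failed model from sinking the ensemble.
    results = await asyncio.gather(*tasks, return_exceptions=True)
    candidates = {
        m: r for m, r in zip(ENSEMBLE_MODELS, results) if not isinstance(r, Exception)
    }
    if not candidates:
        raise RuntimeError("all ensemble models failed")
    # The judge model picks the winning model name from surviving candidates.
    winner = await judge(JUDGE_MODEL, prompt, candidates)
    return winner, candidates[winner]
```

Running candidates with `asyncio.gather` keeps total latency close to the slowest single model rather than the sum of all five.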
- If Groq fails, the backend fallback chain is:
  - Hugging Face Inference API (`microsoft/Phi-3-mini-4k-instruct`) when `HF_TOKEN` is configured
  - Google Gemini (`gemini-2.0-flash`) when `GEMINI_API_KEY` is configured
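That fallback order can be expressed as a simple provider chain. This is a sketch under the assumptions stated above (`HF_TOKEN` and `GEMINI_API_KEY` gating); the provider callables are placeholders, not the repo's real service functions.

```python
import os

def explain_with_fallback(prompt, call_groq, call_hf, call_gemini, env=os.environ):
    """Try Groq first; on failure, fall back to HF, then Gemini, if configured."""
    providers = [("groq", call_groq)]
    if env.get("HF_TOKEN"):
        providers.append(("hf", call_hf))          # microsoft/Phi-3-mini-4k-instruct
    if env.get("GEMINI_API_KEY"):
        providers.append(("gemini", call_gemini))  # gemini-2.0-flash
    last_err = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as err:
            last_err = err
    raise RuntimeError("all providers failed") from last_err
```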
- Streaming path uses `llama-3.1-8b-instant` in the current implementation.
- Response chunks are adaptively buffered/flushed for smoother UX, with truncation signaling when token limits are hit.
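The buffering/flushing behavior can be sketched as a generator that accumulates raw model chunks and flushes on size or sentence boundaries, appending a truncation marker when the token limit cuts the stream short. The thresholds and marker text are illustrative, not the repo's actual values.

```python
def buffer_stream(chunks, min_flush=24, truncated=False):
    """Yield smoothed output pieces from raw model chunks.

    Flushes when enough text has accumulated or a natural boundary
    (sentence end, newline) is reached, so the client sees steady
    readable pieces instead of token-sized fragments.
    """
    buf = ""
    for chunk in chunks:
        buf += chunk
        if len(buf) >= min_flush or buf.endswith((". ", "\n")):
            yield buf
            buf = ""
    if buf:
        yield buf
    if truncated:
        # Truncation signaling: tell the client the token limit was hit.
        yield "\n[truncated: token limit reached]"
```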
KnowBear/
├── api/ # FastAPI backend (Serverless compatible)
│ ├── main.py # App entry point
│ ├── routers/ # API endpoints
│ └── services/ # Business logic (Routing, Inference, Cache)
├── src/ # React frontend
│ ├── components/ # Reusable UI components
│ ├── pages/ # Page layouts
│ └── hooks/ # Custom React hooks
├── public/ # Static assets
└── vercel.json # Deployment configuration
| Endpoint | Method | Description |
|---|---|---|
| `/api/pinned` | GET | Curated popular topics |
| `/api/query` | POST | Generate layered explanations |
| `/api/export` | POST | Export results as file |
| `/api/health` | GET | System health check (Redis/dependencies) |
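A minimal client for the `/api/query` endpoint might look like the sketch below. The request fields (`topic`, `mode`) are assumptions inferred from the feature list above, not a published schema, so verify them against `api/routers/` before relying on this.

```python
import json
from urllib import request

def build_query(topic: str, mode: str = "fast") -> bytes:
    """Build the JSON body for POST /api/query (field names are assumed)."""
    if mode not in ("fast", "ensemble"):
        raise ValueError("mode must be 'fast' or 'ensemble'")
    return json.dumps({"topic": topic, "mode": mode}).encode()

def query(base_url: str, topic: str, mode: str = "fast") -> dict:
    """POST a query to the backend and return the decoded JSON response."""
    req = request.Request(
        base_url.rstrip("/") + "/api/query",
        data=build_query(topic, mode),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```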
- Frontend: React, Vite, TailwindCSS, Framer Motion
- Backend: FastAPI, Pydantic, Structlog
- AI/LLM: Groq (Llama, DeepSeek, Mixtral), Google Gemini, Hugging Face Inference API
- Database/Cache: Redis (Upstash/Cloud), Supabase (Auth)