CoResearcher is a document research assistant built specifically for podcast interview preparation and research. It provides real-time document summarization, interactive chat, and automatic generation of study materials (flashcards and interview questions). Originally built for the Dwarkesh Podcast research workflow.
- File Browser: Navigate and select local markdown and text files from the `Projects/` folder
- Document Editor: Edit documents with syntax highlighting using CodeMirror
- AI Chat: Real-time streaming chat with document context
- Auto Summary: Automatic document summarization when you open a file
- Flashcards: Andy Matuschak-style spaced repetition cards
- Interview Questions: Generate deep, thought-provoking podcast interview questions
- Parallel Processing: Summary, flashcards, and questions generate simultaneously
- Stop Button: Interrupt AI streaming at any time and continue the conversation
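The parallel generation described above can be sketched roughly as follows. `callModel` is a placeholder for the real Anthropic request, and the names are illustrative rather than the app's actual API:

```typescript
type Task = "summary" | "flashcards" | "questions";

// Placeholder for the real Anthropic API call; resolves with a labeled result.
async function callModel(task: Task, document: string): Promise<string> {
  return `${task}:${document.length}`;
}

// Fire all three generation tasks at once; none waits for the others,
// so the summary can stream while flashcards and questions build in parallel.
async function generateAll(document: string) {
  const [summary, flashcards, questions] = await Promise.all([
    callModel("summary", document),
    callModel("flashcards", document),
    callModel("questions", document),
  ]);
  return { summary, flashcards, questions };
}
```

Because `Promise.all` starts every promise before awaiting any of them, total latency is roughly that of the slowest task rather than the sum of all three.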
- GitHub account
- Vercel account (free at vercel.com)
- Anthropic API key
- Create a new repository on GitHub
- Push your code:

  ```bash
  git remote add origin https://github.com/YOUR_USERNAME/coresearcher-2.git
  git branch -M main
  git push -u origin main
  ```

- Go to vercel.com and sign in
- Click "New Project"
- Import your GitHub repository
- Configure build settings (should auto-detect Next.js):
  - Framework Preset: Next.js
  - Build Command: `npm run build`
  - Output Directory: `.next`
In Vercel project settings, add:

```
ANTHROPIC_API_KEY=your_api_key_here
```
- Click "Deploy"
- Wait for build to complete
- Your app will be live at `https://your-project.vercel.app`
Note: Vercel's serverless environment has no persistent local file system, so the built-in file browser will not work in production. To deploy there, you would need to:

- Use a cloud storage solution (S3, Google Cloud Storage)
- Or implement client-side file handling
- Or use a database for document storage
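If documents moved to object storage, flat keys such as `folder/notes.md` would need to be folded back into the nested tree the file browser renders. A hedged sketch of that mapping (the `TreeNode` shape is an assumption, not the app's actual type):

```typescript
interface TreeNode {
  name: string;
  children?: TreeNode[]; // present for folders, absent for files
}

// Fold flat storage keys ("folder/file.md") into a nested folder/file tree.
function keysToTree(keys: string[]): TreeNode[] {
  const root: TreeNode[] = [];
  for (const key of keys) {
    let level = root;
    const parts = key.split("/");
    parts.forEach((part, i) => {
      let node = level.find((n) => n.name === part);
      if (!node) {
        // Intermediate segments are folders; the last segment is a file.
        node =
          i < parts.length - 1 ? { name: part, children: [] } : { name: part };
        level.push(node);
      }
      level = node.children ?? [];
    });
  }
  return root;
}
```

The same function would work for any flat-keyed backend (S3 `ListObjectsV2` results, a database table of paths, etc.).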
- Node.js 18+ installed
- npm or yarn package manager
- Anthropic API key (get one at console.anthropic.com)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd coresearcher-2
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Set up environment variables:

  ```bash
  cp .env.local.example .env.local
  ```

- Edit `.env.local` and add your Anthropic API key:

  ```
  ANTHROPIC_API_KEY=your_actual_api_key_here
  ```

- Start the development server:

  ```bash
  npm run dev
  ```

- Open your browser and navigate to http://localhost:3000
- Browse Files: Use the left panel to navigate your local file system
- Select a Document: Click on any `.md` or `.txt` file to open it
- Automatic Analysis: The AI will immediately start:
  - Streaming a comprehensive summary in the chat
  - Generating flashcards in the background
  - Creating study questions in parallel
- Interactive Chat: Ask follow-up questions about the document
- Study Materials: Switch between the Chat, Flashcards, and Questions tabs
```
Frontend (Next.js)
├── File Browser (left panel)
├── Document Editor (center panel)
└── AI Interface (right panel)
    ├── Chat Tab (with streaming)
    ├── Flashcards Tab
    └── Questions Tab

Backend (Next.js API Routes)
├── /api/files    - File system operations
├── /api/chat     - SSE streaming for chat
└── /api/generate - Flashcard/question generation

AI Service
└── Claude Opus 4.1 (Anthropic)
```
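The `/api/chat` route streams responses over Server-Sent Events, where each model chunk is framed as a `data:` line terminated by a blank line. A hedged sketch of the framing (the JSON payload shape and `[DONE]` sentinel are illustrative, not necessarily what the app emits):

```typescript
// Serialize one model chunk as an SSE event; the blank line ends the event.
function sseFrame(chunk: string): string {
  return `data: ${JSON.stringify({ text: chunk })}\n\n`;
}

// Frame a full sequence of chunks, ending with a [DONE] sentinel so the
// client knows the stream is complete and can stop reading.
function sseStream(chunks: string[]): string {
  return chunks.map(sseFrame).join("") + "data: [DONE]\n\n";
}
```

In the actual route handler, frames like these would be written incrementally to a `ReadableStream` as chunks arrive from the Anthropic API, rather than joined up front.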
- Next.js 14: React framework with App Router
- TypeScript: Type-safe development
- Tailwind CSS: Utility-first styling
- CodeMirror 6: Advanced code editor
- Anthropic Claude Opus 4.1: Latest AI model (claude-opus-4-1-20250805)
- Server-Sent Events: Real-time streaming
- Lucide Icons: Beautiful icon set
Build for production:

```bash
npm run build
```

Start the production server:

```bash
npm start
```

- Currently supports only `.md` and `.txt` files
- File editing is local only (no save functionality yet)
- Requires active internet connection for AI features
- API rate limits apply based on your Anthropic plan
- Save edited documents
- Support for PDF files
- Export flashcards to Anki format
- Persistent chat history
- Multiple document tabs
- Customizable AI prompts
- Offline mode with cached responses
CoResearcher started as a general document analysis tool and evolved into a specialized research assistant for podcast interview preparation. The app helps analyze documents and generate thoughtful interview questions in the style of the Dwarkesh Podcast.
- Left: File browser limited to the `Projects/` folder for organization
- Center: CodeMirror editor with markdown syntax highlighting
- Right: AI panel with Chat, Flashcards, and Questions tabs
- Model: claude-opus-4-1-20250805
- Streaming responses using Server-Sent Events
- Parallel generation of summaries, flashcards, and questions
- Custom prompts tailored for podcast research
- Initial Load Bug: Separated file path and content effects to ensure generation starts on first file open
- Chat Glitching: Added proper file tracking to prevent re-summarization on every edit
- Folder Navigation: Fixed dropdown to expand without changing directory
- Stop Functionality: Added ability to interrupt streaming and continue conversation
- Question Format: Changed from Q&A format to interview-style questions only
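The stop functionality above is the kind of thing an `AbortController` handles: its signal is passed to the streaming `fetch`, and aborting cancels the in-flight request without discarding chat state. A sketch under that assumption (the class name is illustrative):

```typescript
// Wraps an AbortController so the UI can start and stop one stream at a time.
class StreamController {
  private controller: AbortController | null = null;

  // Begin a new stream; pass the returned signal to fetch(url, { signal }).
  begin(): AbortSignal {
    this.controller = new AbortController();
    return this.controller.signal;
  }

  // Stop-button handler: aborting rejects the in-flight fetch, but any
  // chat text already accumulated stays intact, so conversation continues.
  stop(): void {
    this.controller?.abort();
    this.controller = null;
  }
}
```

Calling `begin()` again after `stop()` creates a fresh controller, which is why the conversation can resume immediately after an interruption.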
Prompts were customized specifically for the Dwarkesh Podcast workflow:
- Simple, direct summary prompt
- Andy Matuschak-style flashcards for spaced repetition
- Interview questions with examples from actual podcast episodes (Kotkin, Church, Karpathy)
```
app/
├── api/
│   ├── chat/route.ts       # SSE streaming chat endpoint
│   ├── files/route.ts      # File system operations
│   └── generate/route.ts   # Flashcard/question generation
├── layout.tsx
└── page.tsx                # Main app with state management
components/
├── AIPanel.tsx             # Chat, flashcards, questions UI
├── Editor.tsx              # CodeMirror markdown editor
└── FileBrowser.tsx         # File navigation component
lib/
├── anthropic.ts            # Claude client setup
└── prompts.ts              # All AI prompts (customized for podcast)
Projects/                   # User documents go here
└── sergey-levine/          # Example research folder
```
- Lifted state in main page.tsx
- Document content flows: FileBrowser → Editor → AIPanel
- Proper cleanup of abort controllers for streaming
- Debouncing to prevent excessive regeneration
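The debouncing mentioned above might look like this generic helper (a sketch, not the app's actual implementation): only the last call within the wait window fires, so rapid edits trigger one regeneration instead of many.

```typescript
// Collapse rapid repeat calls into one: only the last call within
// `waitMs` milliseconds actually invokes `fn`.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  waitMs: number
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer); // cancel the previously scheduled call
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

In the app, the debounced function would be the one that re-triggers summary/flashcard/question generation after the document content changes.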
- File System: Current implementation uses Node.js fs module which won't work on Vercel
- No Save: Editor changes aren't persisted to disk
- No Auth: No user authentication or multi-tenancy
- API Keys: Each user needs their own Anthropic API key
- Cloud storage integration (S3/GCS) for production deployment
- Save functionality for edited documents
- Export interview questions to various formats
- Multi-document context for AI
- Voice transcription for interview prep
- Integration with podcast recording tools
MIT
Contributions are welcome! Please feel free to submit a Pull Request.