Searchable podcast transcripts for your AI.
Podcasts are one of the richest sources of expert knowledge on the internet — and almost none of it is searchable. Hours of insight from founders, researchers, and operators sit locked inside audio files where no AI tool can reach them. Creators and researchers waste time scrubbing through episodes trying to relocate a single quote, data point, or idea. The content exists, but it's invisible to the tools we actually use to think.
Podify fixes that. Subscribe to any podcast RSS feed or YouTube channel, and Podify automatically downloads new episodes, transcribes them with production-grade speech-to-text, and uploads clean transcripts to a dedicated Google Drive folder — making every word instantly searchable with Claude, ChatGPT, Gemini, or any AI assistant. Subscribed feeds are polled daily, so new episodes are transcribed without any manual intervention. Set it up once, and your podcast library becomes a searchable knowledge base that grows on its own.
Built as a full-stack Next.js SaaS application with event-driven background processing, Podify uses ElevenLabs Scribe as its primary transcription engine (with Deepgram as an automatic fallback), Stripe for subscription billing, and Inngest for reliable async job orchestration with automatic retries.
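The primary/fallback arrangement can be sketched as follows. This is an illustration of the pattern, not Podify's actual code; the provider functions are stand-ins.

```typescript
// Sketch of a primary/fallback transcription step.
// The provider functions are hypothetical stand-ins, not Podify's real API.
type Transcriber = (audioUrl: string) => Promise<string>;

async function transcribeWithFallback(
  audioUrl: string,
  primary: Transcriber,  // e.g. ElevenLabs Scribe
  fallback: Transcriber, // e.g. Deepgram
): Promise<string> {
  try {
    return await primary(audioUrl);
  } catch (err) {
    // Primary failed (rate limit, outage, unsupported audio):
    // fall through to the secondary provider.
    console.warn(`Primary transcriber failed: ${(err as Error).message}`);
    return fallback(audioUrl);
  }
}
```

Because Inngest retries failed steps automatically, a fallback like this only has to cover the case where the primary provider keeps failing across retries.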
- Next.js 15 (canary) — App Router with Server Components, experimental PPR, and Turbopack dev server
- React 19 — Server and client components
- TypeScript 5.8 — Strict typing across the entire codebase
- PostgreSQL (Neon) — Serverless Postgres database
- Drizzle ORM 0.43 — Type-safe schema definitions, migrations, and queries
- Drizzle Kit — Migration generation, execution, and Drizzle Studio GUI
- jose — JWT token creation and verification (httpOnly cookies)
- bcryptjs — Password hashing
- AES-256-GCM — Encryption for stored OAuth tokens (Node.js crypto)
- ElevenLabs Scribe API — Primary speech-to-text transcription
- Deepgram API — Fallback transcription service
- youtube-transcript / youtubei.js — YouTube transcript extraction and video metadata
- Stripe — Checkout sessions, webhook handling, and subscription management
- Inngest — Event-driven job orchestration with automatic retries (feed polling, transcription pipelines, Drive uploads)
- Google OAuth 2.0 — User authorization for Drive access
- Google Drive API — Transcript file creation and folder management
- PodcastIndex API — Podcast search and feed resolution
- Tailwind CSS 4 — Utility-first styling
- Radix UI — Accessible dialog, alert dialog, and collapsible primitives
- Lucide React — Icon library
- class-variance-authority / clsx / tailwind-merge — Component variant utilities
- Sonner — Toast notifications
- tw-animate-css — CSS animations
- Sentry (@sentry/nextjs) — Error reporting, performance monitoring, and source map uploads
- Vitest — Unit testing framework
- Testing Library (React, DOM, user-event, jest-dom) — Component testing utilities
- Storybook 10 — Component documentation and visual testing
- ESLint 9 — Linting with Next.js and Storybook plugins
- jsdom — DOM environment for tests
- SWR — Client-side data fetching with caching
- Zod 4 — Runtime schema validation for API inputs and forms
- Vercel — Deployment platform (configured via vercel.json)
- pnpm — Package manager
- PostCSS / Autoprefixer — CSS processing pipeline
- dotenv — Environment variable loading
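As a rough illustration of the AES-256-GCM token encryption listed above, here is a minimal sketch using Node's built-in crypto module. The helper names and the packed iv + tag + ciphertext format are assumptions, not Podify's actual implementation.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustrative AES-256-GCM helpers for OAuth tokens at rest.
// `key` must be 32 bytes, e.g. decoded from TOKEN_ENCRYPTION_KEY.
function encryptToken(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Pack iv + auth tag + ciphertext into one base64 string for storage.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

function decryptToken(encoded: string, key: Buffer): string {
  const buf = Buffer.from(encoded, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28); // GCM auth tag is 16 bytes
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // decryption fails if the data was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```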
- Node.js 18+ (20+ recommended)
- pnpm — Install with `npm install -g pnpm`
- A Neon PostgreSQL database — Free tier at neon.tech
```bash
git clone https://github.com/your-org/podify.git
cd podify
pnpm install
```

You'll need accounts and credentials from the following services:
| Service | What to Get | Where to Get It |
|---|---|---|
| Neon | PostgreSQL connection string | neon.tech |
| Stripe | Secret key, publishable key, webhook secret | dashboard.stripe.com/apikeys |
| Google Cloud | OAuth 2.0 client ID and secret (enable Drive API) | console.cloud.google.com/apis/credentials |
| ElevenLabs | API key | elevenlabs.io/app/settings/api-keys |
| Deepgram | API key (optional, fallback transcription) | console.deepgram.com |
| Inngest | Signing key and event key | app.inngest.com |
| PodcastIndex | API key and secret | api.podcastindex.org |
| Sentry | DSN, org, project, auth token (optional) | sentry.io |
Copy the example file and fill in your credentials:
```bash
cp .env.example .env
```

Edit `.env` with your values:

```bash
# Database (Neon PostgreSQL)
POSTGRES_URL=postgresql://user:password@host/dbname?sslmode=require

# Authentication
AUTH_SECRET= # Any random string — generate with: openssl rand -base64 32

# Application
BASE_URL=http://localhost:3000
NEXT_PUBLIC_APP_URL=http://localhost:3000

# Stripe
STRIPE_SECRET_KEY=sk_test_...
STRIPE_WEBHOOK_SECRET=whsec_...
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_...

# Google OAuth
GOOGLE_CLIENT_ID=your-client-id.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=GOCSPX-...
GOOGLE_REDIRECT_URI=http://localhost:3000/api/auth/google/callback

# Transcription
ELEVENLABS_API_KEY=sk_...
DEEPGRAM_API_KEY= # Optional fallback

# Encryption (for stored OAuth tokens)
TOKEN_ENCRYPTION_KEY= # Generate with: openssl rand -base64 32

# Inngest (Background Jobs)
INNGEST_SIGNING_KEY=signkey-...
INNGEST_EVENT_KEY=...

# Podcast Search
PODCASTINDEX_API_KEY=
PODCASTINDEX_API_SECRET=

# Sentry (Optional)
SENTRY_DSN=
NEXT_PUBLIC_SENTRY_DSN=
SENTRY_ORG=
SENTRY_PROJECT=
SENTRY_AUTH_TOKEN=
```

Set up the database:

```bash
pnpm db:setup   # Create tables
pnpm db:seed    # Seed with test data (optional)
```

Start the dev server:

```bash
pnpm dev
```

The app will be running at http://localhost:3000.
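With this many required variables, a typo usually surfaces only as a confusing runtime error later. A fail-fast startup check along these lines can catch it early; the helper is hypothetical, with variable names taken from the list above.

```typescript
// Hypothetical fail-fast check for required environment variables.
// The REQUIRED_ENV subset shown here is illustrative, not exhaustive.
const REQUIRED_ENV = [
  "POSTGRES_URL",
  "AUTH_SECRET",
  "STRIPE_SECRET_KEY",
  "ELEVENLABS_API_KEY",
  "TOKEN_ENCRYPTION_KEY",
] as const;

function assertEnv(env: Record<string, string | undefined> = process.env): void {
  const missing = REQUIRED_ENV.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}
```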
In a separate terminal, start the Inngest dev server so feed polling and transcription jobs execute locally:
```bash
npx inngest-cli@latest dev
```

Useful commands:

```bash
pnpm build        # Production build
pnpm start        # Start production server
pnpm test         # Run tests
pnpm test:watch   # Run tests in watch mode
pnpm lint         # Lint the codebase
pnpm typecheck    # TypeScript type checking
pnpm db:generate  # Generate new migrations from schema changes
pnpm db:migrate   # Run pending migrations
pnpm db:studio    # Open Drizzle Studio (database GUI)
pnpm storybook    # Launch Storybook on port 6006
```
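The Inngest dev server above runs the background jobs (feed polling, transcription, Drive uploads) with automatic retries. As a rough mental model of what those retries provide, and not Inngest's actual implementation, each failed step is re-attempted with exponential backoff:

```typescript
// Toy exponential-backoff retry loop, illustrating the behavior
// Inngest provides out of the box for failed job steps.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait 1s, 2s, 4s, ... between attempts
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```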