Find which OTT platform has your favorite Bollywood & regional movies
A serverless web application that helps users discover which OTT platform (Netflix, Prime Video, Disney+, Hotstar, etc.) streams a specific movie. Focuses on Bollywood and regional Indian cinema (Tamil, Telugu, Malayalam, Kannada).
- Fast Autocomplete Search: Real-time search with typo tolerance powered by Meilisearch
- Movie Details: Comprehensive information from Wikipedia including cast, director, plot, and more
- OTT Platform Availability: Shows which platforms currently stream each movie
- Daily Updates: Automated scraping pipeline to track new OTT releases
- Regional Cinema Support: Hindi, Tamil, Telugu, Malayalam, Kannada, and more
- Responsive Design: Mobile-first UI built with shadcn/ui and Tailwind CSS
- Next.js 14 - React framework with App Router
- React 18 - UI library
- Tailwind CSS - Styling
- shadcn/ui - Component library
- Lucide React - Icons
- Next.js API Routes - Serverless functions
- Supabase - PostgreSQL database and authentication
- Meilisearch - Open-source search engine
- Wikipedia API - Movie details (free, commercial-friendly)
- News Scraping - OTT release information from entertainment news
- z.ai/OpenAI API - LLM-powered content extraction
```
┌─────────────────────────────────────────────────────────────┐
│                       User Interface                        │
│   ┌──────────────┐  ┌──────────────┐  ┌──────────────┐      │
│   │  Home Page   │  │ Search Page  │  │ Movie Detail │      │
│   └──────────────┘  └──────────────┘  └──────────────┘      │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│                    API Routes (Next.js)                     │
│   ┌──────────────┐  ┌──────────────┐  ┌──────────────┐      │
│   │ /api/search  │  │ /api/movie/* │  │ /api/cron/*  │      │
│   └──────────────┘  └──────────────┘  └──────────────┘      │
└─────────────────────────────────────────────────────────────┘
          │                    │                     │
          ▼                    ▼                     ▼
┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
│   Meilisearch   │   │    Supabase     │   │  RSS Feeds/LLM  │
│ (Search Index)  │   │  (PostgreSQL)   │   │    (Scraper)    │
└─────────────────┘   └─────────────────┘   └─────────────────┘
```
- Movie Discovery → Wikipedia API search
- Movie Details → Parse Wikipedia infobox data
- OTT Information → Scrape news articles → Extract with LLM
- Search Indexing → Sync to Meilisearch
- Daily Updates → Cron job runs scraping pipeline
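The "match movies to Wikipedia data" step in the flow above can be sketched roughly as follows. This is an illustrative sketch, not the repository's actual implementation: the function names, the normalization rules, and the record shapes are all assumptions.

```typescript
// Hypothetical sketch of matching a scraped OTT mention to a Wikipedia record.
interface ExtractedMention {
  title: string
  year?: number
  platform: string
}

interface WikiMovie {
  id: string
  title: string
  year: number
}

// Normalize titles so "Jawan (2023 film)" and "jawan" compare equal.
export function normalizeTitle(title: string): string {
  return title
    .toLowerCase()
    .replace(/\(\d{4}\s*film\)/g, '') // drop Wikipedia disambiguators
    .replace(/[^a-z0-9\s]/g, '')      // drop punctuation
    .replace(/\s+/g, ' ')
    .trim()
}

// Pick the candidate whose normalized title matches; prefer a year match
// when the article mentioned a release year.
export function matchMovie(
  mention: ExtractedMention,
  candidates: WikiMovie[]
): WikiMovie | undefined {
  const wanted = normalizeTitle(mention.title)
  const byTitle = candidates.filter((c) => normalizeTitle(c.title) === wanted)
  if (byTitle.length === 0) return undefined
  if (mention.year !== undefined) {
    const byYear = byTitle.find((c) => c.year === mention.year)
    if (byYear) return byYear
  }
  return byTitle[0]
}
```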
- Node.js 18.x or higher
- npm or yarn or pnpm
- Docker (for local Meilisearch)
- Supabase account (free tier works)
- z.ai or OpenAI API key (for scraping)
```bash
git clone https://github.com/yourusername/justott.git
cd justott
```

Install dependencies:

```bash
npm install
# or
yarn install
# or
pnpm install
```

Copy the example environment file:

```bash
cp .env.example .env
```

Edit `.env` and add your API keys:
```bash
# Supabase
SUPABASE_URL=your_supabase_project_url
SUPABASE_ANON_KEY=your_supabase_anon_key
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key

# Meilisearch
MEILISEARCH_HOST=http://localhost:7700
MEILISEARCH_MASTER_KEY=your_master_key_here

# z.ai / OpenAI API
ZAI_API_KEY=your_zai_api_key_here
ZAI_API_URL=https://api.openai.com/v1
ZAI_MODEL=gpt-4o-mini

# Cron Security
CRON_SECRET=generate_a_random_secret_here

# App Config
NEXT_PUBLIC_SITE_URL=http://localhost:3000
```

Generate a secure random secret for protecting cron endpoints:
```bash
# On macOS/Linux
openssl rand -base64 32

# Or use Node.js
node -e "console.log(require('crypto').randomBytes(32).toString('base64'))"
```
1. Create a Supabase project:
   - Go to supabase.com
   - Click "New Project"
   - Set your database password (save it!)
2. Run the database migrations:
   - Go to the SQL Editor in the Supabase dashboard
   - Copy the contents of `supabase/migrations/001_initial_schema.sql`, paste, and run it
   - Copy the contents of `supabase/migrations/002_add_wikipedia_and_scraper_schema.sql`, paste, and run it
3. Get your API keys:
   - Go to Project Settings → API
   - Copy the `URL`, `anon` key, and `service_role` key
   - Add them to your `.env` file
1. Update `docker-compose.yml` with your master key:

   ```yaml
   environment:
     - MEILI_MASTER_KEY=your_actual_master_key_here
   ```

2. Start Meilisearch:

   ```bash
   docker-compose up -d
   ```

3. Verify it's running:
   - Go to http://localhost:7700
   - You should see the Meilisearch welcome page
Or use Meilisearch Cloud instead:

- Sign up at cloud.meilisearch.com
- Create a new project
- Get your host URL and API keys
- Update `.env`:

```bash
MEILISEARCH_HOST=https://your-project.meilisearch.com
MEILISEARCH_SEARCH_ONLY_KEY=your_search_key
```
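Once Meilisearch is reachable, the movies index needs its search settings applied. The sketch below uses Meilisearch's REST settings endpoint (`PATCH /indexes/{uid}/settings`); the index name, attribute names, and helper function are illustrative assumptions rather than the project's actual configuration, though the filterable attributes mirror the filters the search API accepts (platforms, languages, genres).

```typescript
// Illustrative settings for a hypothetical "movies" index.
export const movieIndexSettings = {
  searchableAttributes: ['title', 'director', 'cast'],
  filterableAttributes: ['platforms', 'languages', 'genres', 'year'],
  sortableAttributes: ['year'],
}

// Apply the settings via Meilisearch's REST API.
export async function configureMoviesIndex(host: string, masterKey: string) {
  const res = await fetch(`${host}/indexes/movies/settings`, {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${masterKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(movieIndexSettings),
  })
  if (!res.ok) throw new Error(`Meilisearch returned ${res.status}`)
  return res.json() // an enqueued task object
}
```

Typo tolerance is enabled by default in Meilisearch, which is what powers the typo-tolerant autocomplete described above.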
The scraper uses an LLM to extract OTT information from news articles.
If you use z.ai:

- Sign up at z.ai
- Get your API key from the dashboard
- Add it to `.env`:

```bash
ZAI_API_KEY=your_zai_api_key
ZAI_API_URL=https://api.openai.com/v1  # or your z.ai endpoint
```
If you use OpenAI instead:

- Sign up at platform.openai.com
- Create an API key
- Add it to `.env`:

```bash
OPENAI_API_KEY=your_openai_api_key
```
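With a key configured, the extraction itself is a plain chat-completions request against the OpenAI-compatible endpoint. The sketch below is an assumption about how such a call could look: the prompt wording, the expected JSON shape, and the function names are all illustrative, not the repository's actual code.

```typescript
// Hypothetical extraction call against an OpenAI-compatible API.
interface OttRelease {
  title: string
  platform: string
  date?: string
}

// Illustrative prompt; the real project's prompt wording may differ.
export function buildPrompt(articleText: string): string {
  return (
    'Extract OTT release announcements from this article as a JSON array of ' +
    'objects with keys "title", "platform", and optional "date". ' +
    'Return [] if none are found.\n\n' + articleText
  )
}

// Parse the model reply defensively: keep only well-formed entries.
export function parseExtraction(reply: string): OttRelease[] {
  try {
    const data = JSON.parse(reply)
    if (!Array.isArray(data)) return []
    return data.filter(
      (m): m is OttRelease =>
        typeof m?.title === 'string' && typeof m?.platform === 'string'
    )
  } catch {
    return []
  }
}

export async function extractFromArticle(
  articleText: string,
  apiUrl: string, // the ZAI_API_URL value
  apiKey: string, // ZAI_API_KEY or OPENAI_API_KEY
  model = 'gpt-4o-mini'
): Promise<OttRelease[]> {
  const res = await fetch(`${apiUrl}/chat/completions`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model,
      messages: [{ role: 'user', content: buildPrompt(articleText) }],
    }),
  })
  const json = await res.json()
  return parseExtraction(json.choices[0].message.content)
}
```

Parsing defensively matters here: LLM replies are not guaranteed to be valid JSON, so malformed output is dropped rather than crashing the pipeline.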
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
```

Open http://localhost:3000 in your browser.
```bash
npm run build
npm start
# or
yarn build
yarn start
# or
pnpm build
pnpm start
```

Seed the database with some initial movies from Wikipedia:

```bash
# Run the seed script
npm run seed
```

Manually trigger the scraping pipeline:
```bash
# Option 1: Authorization header (recommended; no encoding needed)
curl -X POST http://localhost:3000/api/cron/daily-sync \
  -H "Authorization: Bearer YOUR_CRON_SECRET"

# Option 2: Query parameter (URL-encode the secret if it has special
# characters; for example, + must be encoded as %2B)
curl "http://localhost:3000/api/cron/daily-sync?secret=YOUR_URL_ENCODED_SECRET"
```

This will:
- Fetch recent articles from configured news sources
- Extract OTT release information using LLM
- Match movies to Wikipedia data
- Store in database
- Update Meilisearch index
You can also add movies manually through the Supabase dashboard or by calling the API directly.
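Inside the cron route handler, the `CRON_SECRET` check can be sketched as a small pure function that accepts either of the two invocation styles shown in the curl examples. The helper name is hypothetical; the repository's actual route may structure this differently.

```typescript
// Hypothetical authorization check for /api/cron/* routes. Accepts either the
// "Authorization: Bearer <secret>" header or the "?secret=" query parameter.
export function isAuthorizedCronRequest(
  authHeader: string | null,
  secretParam: string | null,
  cronSecret: string
): boolean {
  if (authHeader === `Bearer ${cronSecret}`) return true
  if (secretParam !== null && secretParam === cronSecret) return true
  return false
}
```

In a Next.js App Router handler you would feed it `request.headers.get('authorization')` and `new URL(request.url).searchParams.get('secret')`, returning a 401 response when it yields `false`.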
- Deploy your app to Vercel
- Add your cron secret to the Vercel environment variables
- Update `vercel.json` with your schedule:

```json
{
  "crons": [
    {
      "path": "/api/cron/daily-sync",
      "schedule": "0 2 * * *"
    }
  ]
}
```
You can also use:
- GitHub Actions - Create a workflow file
- Cron-job.org - Free external cron service
- Your server's cron - Use curl to trigger the endpoint
1. Install the Vercel CLI:

   ```bash
   npm i -g vercel
   ```

2. Deploy:

   ```bash
   vercel
   ```

3. Add environment variables in the Vercel dashboard:
   - Go to Project Settings → Environment Variables
   - Add all variables from your `.env` file

4. Deploy to production:

   ```bash
   vercel --prod
   ```
The app can be deployed to any platform that supports Next.js:
- Netlify - Use Next.js plugin
- Railway - Deploy with Docker
- Self-hosted - Use Docker Compose
```
justott/
├── app/                        # Next.js App Router
│   ├── (main)/                 # Main app pages
│   │   ├── layout.tsx          # Root layout
│   │   ├── page.tsx            # Home page
│   │   ├── search/             # Search results
│   │   └── movie/[id]/         # Movie details
│   └── api/                    # API routes
│       ├── search/route.ts     # Search endpoint
│       ├── movie/[id]/route.ts # Movie details API
│       └── cron/               # Cron jobs
│           └── daily-sync/     # Daily scraping pipeline
├── components/                 # React components
│   ├── search/                 # Search components
│   ├── movie/                  # Movie components
│   └── ui/                     # shadcn/ui components
├── lib/                        # Core libraries
│   ├── wikipedia/              # Wikipedia API client
│   ├── scraper/                # News scraper
│   ├── search/                 # Meilisearch client
│   ├── db/                     # Database queries
│   └── utils/                  # Utility functions
├── supabase/
│   └── migrations/             # Database migrations
├── public/                     # Static assets
├── docker-compose.yml          # Meilisearch setup
├── next.config.js              # Next.js config
├── tailwind.config.ts          # Tailwind config
└── package.json                # Dependencies
```
`GET /api/search` - Search for movies with autocomplete.

Query Parameters:

- `q` - Search query (min 2 characters)
- `page` - Page number (default: 1)
- `perPage` - Results per page (default: 20)
- `platforms` - Filter by platforms (comma-separated)
- `languages` - Filter by languages (comma-separated)
- `genres` - Filter by genres (comma-separated)
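A client-side call using these parameters could be built like this; the helper name and filter values are illustrative only.

```typescript
// Build a /api/search URL from the documented query parameters.
export function buildSearchUrl(
  base: string,
  q: string,
  opts: {
    page?: number
    perPage?: number
    platforms?: string[]
    languages?: string[]
    genres?: string[]
  } = {}
): string {
  const params = new URLSearchParams({ q })
  if (opts.page) params.set('page', String(opts.page))
  if (opts.perPage) params.set('perPage', String(opts.perPage))
  if (opts.platforms?.length) params.set('platforms', opts.platforms.join(','))
  if (opts.languages?.length) params.set('languages', opts.languages.join(','))
  if (opts.genres?.length) params.set('genres', opts.genres.join(','))
  return `${base}/api/search?${params.toString()}`
}
```

For example, `fetch(buildSearchUrl('http://localhost:3000', 'jawan', { platforms: ['netflix'] }))` would request the first page of Netflix-only results.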
Response:

```json
{
  "hits": [
    {
      "document": {
        "id": "uuid",
        "title": "Movie Name",
        "year": 2024,
        "platforms": ["netflix", "prime_video"]
      }
    }
  ],
  "found": 150,
  "page": 1
}
```

`GET /api/movie/[id]` - Get detailed movie information.
Response:

```json
{
  "movie": {
    "id": "uuid",
    "title": "Movie Name",
    "year": 2024,
    "plot": "...",
    "director": "Director Name",
    "cast": [...],
    "ott_availability": [...]
  }
}
```

`POST /api/cron/daily-sync` - Trigger the daily scraping pipeline.
Response:

```json
{
  "success": true,
  "scraping": {
    "totalArticles": 100,
    "processedArticles": 50,
    "successfulExtractions": 25
  },
  "matching": {
    "matchedCount": 20,
    "unmatchedCount": 5
  }
}
```

Movies table:

- `id` - UUID (primary key)
- `wikipedia_title` - Wikipedia page title
- `title` - Movie title
- `year` - Release year
- `director` - Director name(s)
- `plot` - Movie plot summary
- `genres` - Array of genres
- `poster_url` - Poster image URL
- `primary_language` - Main language code
OTT availability table:

- `movie_id` - Reference to movie
- `platform` - OTT platform enum
- `is_available` - Currently available (boolean)
- `available_since` - Availability start date
- `source_url` - News article source
- `source_name` - Source website
Scraped articles table:

- `url` - Article URL
- `source_name` - Source website
- `title` - Article title
- `processed` - Whether extraction was successful
- `extracted_movies` - JSON array of extracted movies
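The three tables described above map onto row types like the following. The type names and nullability choices are illustrative assumptions; the actual migrations may differ in constraints and column types.

```typescript
// Illustrative row types mirroring the schema fields described above.
export interface MovieRow {
  id: string                      // UUID primary key
  wikipedia_title: string
  title: string
  year: number
  director: string | null
  plot: string | null
  genres: string[]
  poster_url: string | null
  primary_language: string        // language code, e.g. "hi", "ta"
}

export interface OttAvailabilityRow {
  movie_id: string                // references MovieRow.id
  platform: string                // OTT platform enum value, e.g. "netflix"
  is_available: boolean
  available_since: string | null  // ISO date
  source_url: string
  source_name: string
}

export interface ScrapedArticleRow {
  url: string
  source_name: string
  title: string
  processed: boolean
  extracted_movies: unknown[]     // JSON array of extracted movies
}

// Small runtime guard, useful when reading untyped rows back from an API.
export function isMovieRow(row: any): row is MovieRow {
  return (
    typeof row?.id === 'string' &&
    typeof row?.title === 'string' &&
    typeof row?.year === 'number'
  )
}
```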
Problem: Can't connect to Meilisearch
Solutions:
- Verify Docker is running: `docker ps`
- Check the Meilisearch logs: `docker logs justott-meilisearch`
- Verify that `MEILISEARCH_HOST` in `.env` matches your setup
- Check firewall settings
Problem: Database queries failing
Solutions:
- Verify SUPABASE_URL is correct
- Check API keys haven't expired
- Ensure migrations have been run
- Check Row Level Security policies
Problem: Cron job returns no results
Solutions:
- Verify ZAI_API_KEY or OPENAI_API_KEY is set
- Check RSS feeds are accessible
- Check logs for specific errors
- Try running manually with debug output
Problem: npm run build fails
Solutions:
- Delete the `node_modules` and `.next` folders
- Run `npm install` again
- Ensure all environment variables are set (even if using defaults)
- Check for TypeScript errors: `npm run type-check`
| Service | Free Tier | Paid Tier |
|---|---|---|
| Vercel Hosting | 100GB bandwidth | $20/month |
| Supabase Database | 500MB storage | $25/month |
| Meilisearch Cloud | 14 days trial | $36/month |
| Total (Self-hosted) | $0/month | $20/month |
| Total (Cloud) | $0/month | ~$81/month |
Based on z.ai/OpenAI pricing:
- GPT-4o-mini: ~$0.15 per 1M tokens
- Estimated: 100 articles ≈ 200K tokens ≈ $0.03
- Monthly estimate (1000 articles): ~$0.30
To reduce costs:
- Run scraper less frequently (weekly instead of daily)
- Use smaller models (GPT-4o-mini instead of GPT-4)
- Self-host Meilisearch instead of using cloud
- Use rule-based extraction fallback
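The rule-based fallback mentioned above could be as simple as a pattern match over headlines. This sketch is illustrative only: the regex, the phrasings it recognizes, and the platform alias list are assumptions, and it will miss wordings the LLM would catch.

```typescript
// Naive rule-based fallback: look for "<Title> ... on <Platform>" phrasings
// in article headlines. Aliases here are examples, not an exhaustive list.
const PLATFORM_ALIASES: Record<string, string> = {
  netflix: 'netflix',
  'prime video': 'prime_video',
  'amazon prime': 'prime_video',
  'disney+ hotstar': 'hotstar',
  hotstar: 'hotstar',
}

export function extractByRules(
  headline: string
): { title: string; platform: string } | null {
  for (const [alias, platform] of Object.entries(PLATFORM_ALIASES)) {
    // Escape regex metacharacters in aliases (e.g. the "+" in "disney+").
    const escaped = alias.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
    // e.g. "Jawan OTT release: streaming on Netflix from Sept 21"
    const re = new RegExp(
      `^(.*?)\\s+(?:OTT release|now streaming|premieres?).*?\\bon\\s+${escaped}\\b`,
      'i'
    )
    const m = headline.match(re)
    if (m && m[1].trim()) {
      return { title: m[1].trim().replace(/[:\-–]+$/, '').trim(), platform }
    }
  }
  return null
}
```

A fallback like this costs nothing per article, so it can run first and only escalate unmatched headlines to the LLM.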
Contributions are welcome! Please follow these steps:
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/my-feature`
3. Make your changes
4. Run the tests: `npm run test`
5. Commit your changes: `git commit -am 'Add new feature'`
6. Push the branch: `git push origin feature/my-feature`
7. Submit a pull request
This project is open source and available under the MIT License.
- Wikipedia - For providing free movie data
- Meilisearch - For the excellent open-source search engine
- Supabase - For the amazing PostgreSQL hosting
- shadcn - For the beautiful UI components
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Add TV show support
- User watchlist functionality
- Email notifications for OTT releases
- Mobile app (React Native)
- Regional language support for UI
- Advanced filtering options
- User reviews and ratings
Made with ❤️ for Indian cinema lovers