A video-sharing platform where users can upload videos and generate AI-powered "brain rot" audio overlays using OpenAI TTS. Share and discover chaotic, funny content!
- 🎥 Video Upload - Upload videos with drag-and-drop support
- 🤖 AI Audio Generation - Automatic brain rot commentary using OpenAI GPT + TTS
- 🎬 Video Processing - FFmpeg-powered audio-video merging
- 🔐 User Authentication - Secure login and registration
- 📺 YouTube-like Feed - Browse and discover videos
- 📊 User Dashboard - Manage your uploaded videos
- 👥 User Profiles - View other creators' content
- ☁️ Cloud Storage - Google Cloud Storage integration
- 🎨 Modern UI - Dark theme with glassmorphism and animations
- Frontend: Next.js 15, React 19, TypeScript
- Styling: Vanilla CSS with modern design patterns
- Backend: Next.js API Routes
- Database: MongoDB with Mongoose
- Authentication: NextAuth.js
- Storage: Google Cloud Storage
- AI: OpenAI GPT-4 + TTS
- Video Processing: FFmpeg
- Node.js 18+ (v20.9.0+ recommended)
- MongoDB (local or Atlas)
- OpenAI API Key
- Google Cloud Platform account
- FFmpeg installed on your system
Windows:

```bash
# Using Chocolatey
choco install ffmpeg
# Or download from https://ffmpeg.org/download.html
```

macOS:

```bash
brew install ffmpeg
```

Linux:

```bash
sudo apt-get install ffmpeg
```

Install dependencies:

```bash
cd LOLTrackr
npm install
```

Option A: Local MongoDB

```bash
# Install MongoDB locally and start the service
# Connection string: mongodb://localhost:27017/loltrackr
```

Option B: MongoDB Atlas (Recommended)
- Create a free account at MongoDB Atlas
- Create a new cluster
- Get your connection string
- Sign up at OpenAI
- Create an API key
- Ensure you have credits for GPT-4 and TTS usage
If you want to avoid OpenAI API costs, see the local AI setup guide: `LOCAL_AI_SETUP.md`
1. Create a GCP project at Google Cloud Console
2. Enable the Cloud Storage API
3. Create a storage bucket (e.g., `loltrackr-videos`)
4. Create a service account:
   - Go to IAM & Admin → Service Accounts
   - Create a service account with the "Storage Admin" role
   - Create and download a JSON key
   - Save it as `gcp-service-account.json` in the project root
5. Make the bucket public (or use signed URLs):

```bash
gsutil iam ch allUsers:objectViewer gs://your-bucket-name
```
Create a `.env.local` file in the root directory:
```env
# MongoDB
MONGODB_URI=mongodb+srv://user:password@cluster.mongodb.net/loltrackr

# NextAuth
NEXTAUTH_SECRET=your-random-secret-key-here
NEXTAUTH_URL=http://localhost:3000

# OpenAI
OPENAI_API_KEY=sk-your-openai-api-key

# Google Cloud Platform
GCP_PROJECT_ID=your-gcp-project-id
GCP_BUCKET_NAME=loltrackr-videos
GOOGLE_APPLICATION_CREDENTIALS=./gcp-service-account.json
```

Generate NextAuth Secret:

```bash
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
```

Start the development server:

```bash
npm run dev
```

Open http://localhost:3000 in your browser.
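A quick way to catch typos in `.env.local` is to check the required variables once at startup. A small sketch; the function name and warning text are illustrative, not part of the project:

```typescript
// Required keys from .env.local; GOOGLE_APPLICATION_CREDENTIALS is read by
// the GCS client library itself, so it is checked here too.
const REQUIRED_ENV = [
  "MONGODB_URI",
  "NEXTAUTH_SECRET",
  "NEXTAUTH_URL",
  "OPENAI_API_KEY",
  "GCP_PROJECT_ID",
  "GCP_BUCKET_NAME",
  "GOOGLE_APPLICATION_CREDENTIALS",
] as const;

// Pure helper: return the names of any unset variables.
export function missingEnvVars(
  env: Record<string, string | undefined>
): string[] {
  return REQUIRED_ENV.filter((name) => !env[name]);
}

// Usage at startup (e.g. at the top of lib/mongodb.ts):
//   const missing = missingEnvVars(process.env);
//   if (missing.length) console.warn(`Missing: ${missing.join(", ")}`);
```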
1. Sign Up - Create an account at `/auth/signup`
2. Upload Video - Go to `/upload` and select a video file
3. AI Processing - The system will:
   - Generate a brain rot script using GPT-4
   - Create audio using OpenAI TTS
   - Merge the audio with your video using FFmpeg
   - Upload the result to Google Cloud Storage
4. Watch & Share - View your video and share it with others!
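The FFmpeg merge step can be sketched as below. This illustrates the general approach (replace the video's audio track with the TTS track), not the exact flags `lib/videoProcessor.ts` uses:

```typescript
import { spawn } from "node:child_process";

// Pure helper: FFmpeg arguments that keep the original video stream and
// swap in the generated TTS audio.
export function mergeArgs(
  videoIn: string,
  audioIn: string,
  outFile: string
): string[] {
  return [
    "-y",           // overwrite the output file if it exists
    "-i", videoIn,  // input 0: the uploaded video
    "-i", audioIn,  // input 1: the generated TTS audio
    "-map", "0:v",  // take video from input 0
    "-map", "1:a",  // take audio from input 1
    "-c:v", "copy", // copy the video stream, no re-encode
    "-shortest",    // stop when the shorter input ends
    outFile,
  ];
}

// Run FFmpeg (must be on PATH) and resolve when the merge finishes.
export function mergeAudio(
  videoIn: string,
  audioIn: string,
  outFile: string
): Promise<void> {
  return new Promise((resolve, reject) => {
    const ff = spawn("ffmpeg", mergeArgs(videoIn, audioIn, outFile));
    ff.on("error", reject); // e.g. ffmpeg not installed
    ff.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`ffmpeg exited with ${code}`))
    );
  });
}
```

Copying the video stream (`-c:v copy`) keeps processing fast and lossless; only the audio is encoded.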
```
LOLTrackr/
├── app/                   # Next.js app directory
│   ├── api/               # API routes
│   │   ├── auth/          # NextAuth & signup
│   │   ├── videos/        # Video CRUD operations
│   │   └── upload/        # Video upload endpoint
│   ├── auth/              # Authentication pages
│   ├── dashboard/         # User dashboard
│   ├── profile/           # User profiles
│   ├── upload/            # Upload page
│   ├── video/             # Video player page
│   └── page.tsx           # Homepage
├── components/            # React components
│   ├── Navbar.tsx
│   ├── VideoCard.tsx
│   ├── VideoFeed.tsx
│   └── VideoPlayer.tsx
├── lib/                   # Utilities
│   ├── mongodb.ts         # Database connection
│   ├── storage.ts         # GCS integration
│   └── videoProcessor.ts  # FFmpeg & AI processing
├── models/                # Mongoose models
│   ├── User.ts
│   └── Video.ts
└── public/                # Static files
```
- `POST /api/auth/signup` - User registration
- `GET/POST /api/auth/[...nextauth]` - NextAuth authentication
- `POST /api/upload` - Upload and process a video
- `GET /api/videos` - Fetch videos (with filters)
- `GET /api/videos/[id]` - Get a single video
- `PUT /api/videos/[id]` - Update a video
- `DELETE /api/videos/[id]` - Delete a video
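The feed endpoint can be called from any client; a sketch follows. The `userId` filter is an assumption — check the route handler for the query parameters it actually supports:

```typescript
// Pure helper: build the /api/videos URL with optional query filters.
export function videosUrl(
  base: string,
  filters: Record<string, string> = {}
): string {
  const url = new URL("/api/videos", base);
  for (const [key, value] of Object.entries(filters)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

// Fetch the feed, optionally filtered to one creator
// (the `userId` parameter name is illustrative).
export async function loadFeed(userId?: string) {
  const res = await fetch(
    videosUrl("http://localhost:3000", userId ? { userId } : {})
  );
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```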
Edit `lib/videoProcessor.ts`:

```ts
voice: 'alloy' | 'echo' | 'fable' | 'onyx' | 'nova' | 'shimmer'
```

Edit the system prompt in `lib/videoProcessor.ts` → `generateBrainRotScript()`.
Edit `app/upload/page.tsx` and `next.config.mjs` to change the maximum upload file size.
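For uploads that go through Server Actions, the relevant `next.config.mjs` knob is `bodySizeLimit` (default 1 MB). A sketch — the 100 MB value is an example, and uploads handled by a route handler are limited elsewhere:

```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    serverActions: {
      // Raise the default 1 MB Server Actions body limit for video uploads.
      bodySizeLimit: "100mb",
    },
  },
};

export default nextConfig;
```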
FFmpeg not found:
- Ensure FFmpeg is installed and in your PATH
- Test with `ffmpeg -version`

MongoDB connection errors:
- Check your connection string
- Verify network access in MongoDB Atlas

Google Cloud Storage errors:
- Verify service account permissions
- Check the bucket name and project ID
- Ensure billing is enabled

OpenAI API errors:
- Check your API key
- Verify you have available credits
- Check rate limits
Recommended platforms:
- Vercel (easiest for Next.js)
- Railway
- DigitalOcean App Platform
- AWS / GCP

Production notes:
- Set all environment variables on your platform
- Ensure FFmpeg is available in the deployment environment
- Consider using a job queue (Bull, BullMQ) for video processing
- Set up proper error monitoring (Sentry, LogRocket)
MIT
Contributions are welcome! Please open an issue or submit a pull request.
Made with 🎥 and 🧠 (rot)