
Video Transcoder 🎬⚡

A production-ready video transcoding system. Upload → S3 → SQS → ECS (FFmpeg) — orchestrated with Pulumi.


✨ What this repo contains

  • client/ — React + Vite frontend (upload UI & realtime progress)
  • server/ — Express API (handles uploads, creates DB records, streams upload progress)
  • IAC/ — Pulumi TypeScript infra (S3, SQS, Lambdas, ECS/ECR, VPC, etc.)
  • IAC/VideoEncodingService/ — video transcoder (FFmpeg via fluent-ffmpeg)

Tip: This README shows exact scripts, env vars, and deploy steps discovered in the codebase.


🧭 Quick links

  • Local Dev: client/, server/, IAC/VideoEncodingService/
  • Pulumi infra: IAC/index.ts
  • Video transcode logic: IAC/VideoEncodingService/src/index.ts

✅ Highlights (why you'll like it)

  • Scalable: ECS Fargate tasks are launched on demand via Lambdas + SQS
  • Resilient: S3 for durable storage, SQS for work queueing
  • Observable: uploads emit realtime progress via socket.io
  • Reproducible infra: Pulumi TypeScript stacks

🚀 Quick prerequisites

Make sure you have the following installed and configured:

  • Node.js 16+ (Node 18 for production build & Pulumi lambdas)
  • npm
  • Docker (for building images & pushing to ECR)
  • AWS CLI with credentials configured
  • Pulumi CLI (logged in)
  • MongoDB (Atlas or local) and Redis reachable by services

🧩 Scripts & dependencies (extracted from package.json)

Client (client/package.json)

  • npm scripts:
    • npm run dev — start Vite dev server
    • npm run build — tsc -b && vite build
    • npm run lint — eslint .
    • npm run preview — vite preview
  • Key deps: react, axios, @reduxjs/toolkit, socket.io-client, tailwindcss, vite

Server (server/package.json)

  • npm scripts:
    • npm run dev — nodemon --ext ts --exec ts-node src/index.ts
    • npm run build — tsc
    • npm run start — node dist/index.js
  • Key deps: express, mongoose, multer, @aws-sdk/client-s3, @aws-sdk/lib-storage, socket.io

Video Encoding Service (IAC/VideoEncodingService/package.json)

  • npm scripts:
    • npm run dev — ts-node src/index.ts
    • npm run build — tsc
    • npm run start — node dist/index.js
  • Key deps: fluent-ffmpeg, ffmpeg-static, @ffmpeg-installer/ffmpeg, @aws-sdk/client-s3, @aws-sdk/client-sqs, redis

Pulumi / IAC (IAC/package.json)

  • Key packages: @pulumi/pulumi, @pulumi/aws, @pulumi/docker, @pulumi/awsx, esbuild, archiver

🔐 Environment variables (by service)

Create .env files per service. Never commit secrets.

1) Root / IAC .env (development helper — DO NOT commit)

Purpose: secrets & runtime values used by Pulumi scripts and local builds. The repo contains an IAC/.env file but it may contain real secrets — remove or secure it before pushing.

  • AWS_ACCESS_KEY_ID (required) — AWS credentials (use an IAM user with limited permissions)
  • AWS_SECRET_ACCESS_KEY (required) — AWS secret (store in a secret store)
  • AWS_REGION (required) — default region for Pulumi & SDKs
  • REDIS_CLOUD_HOST (required) — Redis host used by Lambdas/ECS
  • REDIS_CLOUD_PORT (required) — Redis port
  • REDIS_CLOUD_PASSWORD (required) — Redis password (secret)
  • MONGOOSE_DB_URL (required) — MongoDB connection string (secret)

Example IAC/.env.example (placeholders):

# IAC/.env.example — DO NOT commit with real values
AWS_ACCESS_KEY_ID=REDACTED_AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=REDACTED_AWS_SECRET_ACCESS_KEY
AWS_REGION=ap-south-1

REDIS_CLOUD_HOST=REDACTED_REDIS_HOST
REDIS_CLOUD_PORT=REDACTED_REDIS_PORT
REDIS_CLOUD_PASSWORD=REDACTED_REDIS_PASSWORD

MONGOOSE_DB_URL=mongodb+srv://<REDACTED_USER>:<REDACTED_PASSWORD>@cluster0.example.net/mydb

Recommended production flow: pulumi config set aws:accessKeyId <id> and pulumi config set --secret aws:secretAccessKey <secret> (or use an IAM role / environment credentials on CI hosts).
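
Values set that way are read back inside the Pulumi program via pulumi.Config; a minimal sketch (the mongoDbUrl key name is illustrative, not necessarily what IAC/index.ts uses):

import * as pulumi from "@pulumi/pulumi";

// Provider config such as aws:region is picked up by @pulumi/aws automatically.
// App-level secrets set with `pulumi config set --secret` are read like this:
const cfg = new pulumi.Config();
const mongoDbUrl = cfg.requireSecret("mongoDbUrl"); // hypothetical key name
// `mongoDbUrl` is an Output<string>; pass it into resources (e.g., Lambda env)
// and Pulumi keeps it encrypted in state.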


2) Server .env (development)

Path: server/.env

  • PORT (required) — server port (dev default: 5000)
  • MONGOOSE_DB_URL (required) — MongoDB connection URL
  • AWS_REGION (required) — AWS region
  • AWS_ACCESS_KEY_ID (required) — AWS key (for dev only)
  • AWS_SECRET_ACCESS_KEY (required) — AWS secret (for dev only)
  • S3_BUCKET_NAME (required) — S3 bucket used by server uploads
  • ENV (optional) — 'DEV' or 'PROD'; affects client origin checks

Example server/.env.example:

PORT=5000
MONGOOSE_DB_URL=mongodb+srv://<REDACTED_USER>:<REDACTED_PASSWORD>@cluster0.example.net/transcoder
AWS_REGION=ap-south-1
AWS_ACCESS_KEY_ID=REDACTED_AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=REDACTED_AWS_SECRET_ACCESS_KEY
S3_BUCKET_NAME=REDACTED_S3_BUCKET_NAME
ENV=DEV
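
A sketch of a fail-fast startup check for these vars (assumes dotenv; the server may load its env differently):

import "dotenv/config"; // hypothetical: loads server/.env into process.env

const required = ["MONGOOSE_DB_URL", "AWS_REGION", "S3_BUCKET_NAME"];
for (const name of required) {
  if (!process.env[name]) throw new Error(`Missing required env var: ${name}`);
}
const PORT = Number(process.env.PORT ?? 5000); // dev default from the table above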

3) Client .env (development)

Path: client/.env (use Vite conventions or env injected at build time)

  • VITE_API_URL (required) — URL of the server (e.g., http://localhost:5000)
  • VITE_SOMETHING_ELSE (optional) — other client-side toggles (non-secret)

Example client/.env.example:

VITE_API_URL=http://localhost:5000
# Non-secret flags for the client
VITE_ENABLE_DEBUG=true

Note: Client envs must not contain secrets (they end up in the browser). Keep only public/feature flags here.
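
For reference, a client-side sketch of how the socketId and the upload_progress events documented below fit together (the handler bodies are illustrative):

import { io } from "socket.io-client";

const socket = io(import.meta.env.VITE_API_URL); // Vite exposes VITE_* vars here

socket.on("connect", () => {
  // include socket.id as the `socketId` form field when POSTing to /api/upload
  console.log("socketId:", socket.id);
});

socket.on("upload_progress", (p: { loaded: number; total: number }) => {
  console.log(`upload ${((100 * p.loaded) / p.total).toFixed(1)}%`);
});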


4) Video Encoding Service (ECS / local) .env

Path: IAC/VideoEncodingService/.env

  • MONGOOSE_DB_URL (required) — MongoDB URL used to update VideoModel
  • S3_BUCKET (required) — bucket where the originals and outputs are stored
  • SQS_QUEUE_URL (required) — URL of the SQS queue used by the system
  • REDIS_CLOUD_HOST (required) — Redis host for worker counters
  • REDIS_CLOUD_PORT (required) — Redis port
  • REDIS_CLOUD_PASSWORD (required) — Redis password (secret)
  • AWS_REGION (required) — AWS region
  • AWS_ACCESS_KEY_ID (optional) — only if running locally without an instance role
  • AWS_SECRET_ACCESS_KEY (optional) — only if running locally without an instance role
  • S3_FILE_KEY (runtime) — set on the ECS container per task (from the SQS message)
  • RECEIPT_HANDLE (runtime) — SQS receipt handle used for message deletion

Example IAC/VideoEncodingService/.env.example:

MONGOOSE_DB_URL=mongodb+srv://<REDACTED_USER>:<REDACTED_PASSWORD>@cluster0.example.net/transcoder
S3_BUCKET=REDACTED_S3_BUCKET_NAME
SQS_QUEUE_URL=REDACTED_SQS_QUEUE_URL
REDIS_CLOUD_HOST=REDACTED_REDIS_HOST
REDIS_CLOUD_PORT=REDACTED_REDIS_PORT
REDIS_CLOUD_PASSWORD=REDACTED_REDIS_PASSWORD
AWS_REGION=ap-south-1
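
The two runtime variables are how a finished worker acknowledges its job; a sketch of that final step (env var names from the table above; the helper itself is illustrative):

import { SQSClient, DeleteMessageCommand } from "@aws-sdk/client-sqs";

async function acknowledgeJob(): Promise<void> {
  const sqs = new SQSClient({ region: process.env.AWS_REGION });
  // Deleting the message tells SQS the transcode succeeded; if the task dies
  // first, SQS re-delivers the message after the visibility timeout.
  await sqs.send(new DeleteMessageCommand({
    QueueUrl: process.env.SQS_QUEUE_URL!,
    ReceiptHandle: process.env.RECEIPT_HANDLE!,
  }));
}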

🧪 Local development (step-by-step)

  1. Clone the repo
git clone <repo-url> Transcoder
cd Transcoder
  2. Install deps
# Client
cd client
npm install
# Server
cd ..\server
npm install
# IAC
cd ..\IAC
npm install
# Video encoding service
cd .\VideoEncodingService
npm install
  3. Run services (dev)
  • Server (dev)
cd server
npm run dev
# Listening: http://localhost:5000
  • Client (dev)
cd ..\client
npm run dev
# Frontend: http://localhost:5173
  • VideoEncodingService (dev - needs .env)
cd ..\IAC\VideoEncodingService
# create .env (see templates above)
npm run dev

Tip: Use LocalStack for local S3/SQS testing or ngrok for exposing local endpoints.


🔁 What happens after a video is uploaded

This explains the end-to-end flow once a user uploads a file via the frontend.

  1. Client sends a multipart POST to /api/upload with the file field and socketId.
  2. Server (server/) accepts the file (multer memory storage), creates a VideoModel record in MongoDB, and starts an S3 multipart upload to temp/<timestamp>-<name>. It streams upload progress to the client via socket.io using the provided socketId.
  3. When the object is created in S3 (prefix temp/), the bucket notification triggers the JobDispatcher Lambda which enqueues a message in SQS containing the S3 key and metadata.
  4. JobConsumer Lambda receives the SQS message, performs a quick Redis concurrency check, and calls ECS RunTask to start a Fargate task (container image from ECR) with environment variables pointing to the S3 key and SQS receipt handle.
  5. The ECS task (VideoEncodingService) downloads the original from S3, transcodes it to the configured resolutions, uploads each output back to S3 (under resolution-specific prefixes), updates the corresponding VideoModel document in MongoDB with URLs and status, and finally deletes the SQS message using the provided receipt handle.
  6. (Optional) A scheduled S3Cleanup Lambda runs periodically to remove old generated files.

This design makes the pipeline event-driven, scalable, and fault-tolerant: SQS provides retry and dead-letter handling, ECS tasks are isolated workers, and Lambdas orchestrate the glue.
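
To make step 4 concrete, here is a minimal sketch of what a JobConsumer-style handler can look like (the concurrency cap, env var names like CLUSTER_ARN, the container name, and the message shape are illustrative assumptions, not the repo's exact code):

import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";
import { createClient } from "redis";
import type { SQSEvent } from "aws-lambda";

const MAX_WORKERS = 5; // hypothetical cap

export const handler = async (event: SQSEvent) => {
  const redis = createClient({
    socket: { host: process.env.REDIS_CLOUD_HOST, port: Number(process.env.REDIS_CLOUD_PORT) },
    password: process.env.REDIS_CLOUD_PASSWORD,
  });
  await redis.connect();
  const ecs = new ECSClient({ region: process.env.AWS_REGION });

  for (const record of event.Records) {
    // quick concurrency check against the shared counter
    const running = Number((await redis.get("server_count")) ?? 0);
    if (running >= MAX_WORKERS) throw new Error("At capacity; let SQS retry");

    await ecs.send(new RunTaskCommand({
      cluster: process.env.CLUSTER_ARN,
      taskDefinition: process.env.TASK_DEFINITION_ARN,
      launchType: "FARGATE",
      networkConfiguration: {
        awsvpcConfiguration: { subnets: [process.env.SUBNET_ID!], assignPublicIp: "ENABLED" },
      },
      overrides: {
        containerOverrides: [{
          name: "video-encoder", // hypothetical container name
          environment: [
            { name: "S3_FILE_KEY", value: JSON.parse(record.body).key },
            { name: "RECEIPT_HANDLE", value: record.receiptHandle },
          ],
        }],
      },
    }));
  }
  await redis.quit();
};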


🛠 Planned improvements

Short follow-ups that would round out the repo:

  • .env.example files in each service folder (client, server, IAC, VideoEncodingService)
  • a documented Pulumi stack setup with secure pulumi config set --secret commands
  • a small developer quickstart script (PowerShell) to automate the local dev start sequence


📡 API Endpoints & examples

Health check

GET /
# Response: { "message": "Server Headl is Good 🚀🚀" }

Upload (multipart) — POST /api/upload

  • Form field: file (binary)
  • Body field: socketId (socket.io id to receive progress events)

Example (curl):

curl -v -X POST "http://localhost:5000/api/upload" \
  -F "file=@./path/to/video.mp4" \
  -F "socketId=<SOCKET_ID>"

Server flow (brief):

  1. Server receives file with multer (memory storage)
  2. Server creates a DB record in VideoModel
  3. Server uploads original to S3 under temp/ using @aws-sdk/lib-storage and emits upload_progress via socket.io
  4. S3 ObjectCreated (prefix temp/) triggers JobDispatcher Lambda → sends message to SQS
  5. JobConsumer Lambda receives SQS message → launches ECS task (Fargate) to transcode

Get videos (placeholder) — GET /get_videos

  • Note: The router defines /get_videos but it is currently unimplemented in server/src/routes/index.ts.
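
For reference, a condensed sketch of steps 1-3 of the upload flow as an Express handler (the real implementation lives in server/src; the io wiring and response shape here are assumptions):

import express from "express";
import multer from "multer";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import type { Server } from "socket.io";

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // file kept in memory
const s3 = new S3Client({ region: process.env.AWS_REGION });
declare const io: Server; // sketch: socket.io server wired up elsewhere

app.post("/api/upload", upload.single("file"), async (req, res) => {
  const { socketId } = req.body;
  const Key = `temp/${Date.now()}-${req.file!.originalname}`;
  const s3Upload = new Upload({
    client: s3,
    params: { Bucket: process.env.S3_BUCKET_NAME!, Key, Body: req.file!.buffer },
  });
  // stream multipart-upload progress back to the uploading browser tab
  s3Upload.on("httpUploadProgress", (p) => {
    io.to(socketId).emit("upload_progress", { loaded: p.loaded, total: p.total });
  });
  await s3Upload.done();
  res.json({ key: Key }); // DB record creation omitted in this sketch
});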

🏗 Architecture (visual)

flowchart LR
  subgraph Client
    A[👩‍💻 User Browser]
  end

  A -->|"Upload (+socketId)"| B["📡 Server /api/upload"]
  B -->|"Uploads original to temp/"| S3[("📦 S3 Bucket")]
  S3 -.->|"S3:ObjectCreated (temp/)"| JD[🛰 JobDispatcher Lambda]
  JD -->|"Sends msg"| SQS[("📥 SQS Queue")]
  SQS -->|Event| JC[🛰 JobConsumer Lambda]
  JC -->|"Run Task"| ECS["🚀 ECS (Fargate) Task"]
  ECS -->|Downloads| S3
  ECS -->|"Transcodes and uploads to"| S3
  S3 -->|"(Optional) cron cleanup"| Cleanup[🧹 S3Cleanup Lambda]

  classDef aws fill:#fffbdd,stroke:#ffb86b
  class S3,SQS,JD,JC,ECS,Cleanup aws

Short summary:

  • Client uploads → Server → S3
  • S3 triggers JobDispatcher (Lambda) → SQS
  • JobConsumer (Lambda) → launches ECS task
  • ECS task downloads original → transcodes → uploads outputs to S3
  • Cleanup Lambda removes old generated files periodically
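
The transcode step in that summary is built on fluent-ffmpeg; a minimal sketch of producing one output resolution (the codec and sizing flags are illustrative, not necessarily the repo's exact settings):

import ffmpeg from "fluent-ffmpeg";
import ffmpegInstaller from "@ffmpeg-installer/ffmpeg";

ffmpeg.setFfmpegPath(ffmpegInstaller.path); // bundled binary, no system ffmpeg needed

function transcode(input: string, output: string, height: number): Promise<void> {
  return new Promise((resolve, reject) => {
    ffmpeg(input)
      .size(`?x${height}`)      // scale to the target height, keep aspect ratio
      .videoCodec("libx264")
      .on("end", () => resolve())
      .on("error", reject)
      .save(output);
  });
}

// e.g. await transcode("/tmp/original.mp4", "/tmp/720p.mp4", 720);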

☁️ AWS infra & how components interact

Provisioned by Pulumi (IAC/index.ts):

  • VPC, Subnet, Internet Gateway, Route Table
  • S3 bucket (trans-bucket) — stores temp/ originals and generated outputs
  • SQS queue (transQueue) — decouples dispatcher & worker
  • Lambdas:
    • JobDispatcher — S3 event → send message to SQS
    • JobConsumer — SQS event → RunTask (ECS)
    • S3Cleanup — scheduled cleanup job
  • ECR repo + Docker image push + ECS Cluster + task definitions

Permissions: the Lambdas have IAM roles/policies attached (S3, SQS, CloudWatch, ECS permissions — review for least privilege before prod).

Note: Pulumi does not create MongoDB or Redis — those are expected to be provided and accessible.
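
The S3-to-Lambda trigger above is the key piece of glue. A sketch of how it can be expressed with @pulumi/aws (resource names are illustrative; bucket and dispatcher stand in for resources defined in IAC/index.ts):

import * as aws from "@pulumi/aws";

declare const bucket: aws.s3.Bucket;           // sketch: the trans-bucket resource
declare const dispatcher: aws.lambda.Function; // sketch: the JobDispatcher Lambda

// S3 must be allowed to invoke the Lambda before the notification is attached
const allowS3 = new aws.lambda.Permission("allow-s3-invoke", {
  action: "lambda:InvokeFunction",
  function: dispatcher.name,
  principal: "s3.amazonaws.com",
  sourceArn: bucket.arn,
});

// Fire JobDispatcher only for new objects under the temp/ prefix
new aws.s3.BucketNotification("on-temp-upload", {
  bucket: bucket.id,
  lambdaFunctions: [{
    lambdaFunctionArn: dispatcher.arn,
    events: ["s3:ObjectCreated:*"],
    filterPrefix: "temp/",
  }],
}, { dependsOn: [allowS3] });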


⚙️ Pulumi — setup & deploy

⚠️ Important: Running pulumi up triggers a local Docker build and an ECR push (the Pulumi code uses docker build + docker push). Ensure Docker is running and you have AWS ECR push permissions.

  1. Login to Pulumi
pulumi login
  2. Initialize stack and set region
cd IAC
pulumi stack init transcoder-dev
pulumi config set aws:region ap-south-1 --stack transcoder-dev
  3. Export AWS credentials to env (PowerShell example)
$env:AWS_ACCESS_KEY_ID = "AKIA..."
$env:AWS_SECRET_ACCESS_KEY = "..."
$env:AWS_DEFAULT_REGION = "ap-south-1"
# Optionally set REDIS_CLOUD_* envs used by some lambdas
  4. Run Pulumi deploy
pulumi up --stack transcoder-dev
# Review preview and confirm
  5. Rollback / destroy
pulumi destroy --stack transcoder-dev
pulumi stack rm --yes transcoder-dev

🐳 Docker & ECR (manual steps)

If Pulumi's automatic image build/push fails, you can build/push manually:

# Build locally
docker build -t video-transcoder-service:local .

# Tag for ECR
docker tag video-transcoder-service:local <ECR_REPO_URL>:latest

# Authenticate to ECR
aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin <ECR_REPO_URL>

docker push <ECR_REPO_URL>:latest

🛠 Troubleshooting & tips

  • Pulumi + Docker: Ensure the Docker daemon is running and you have permission to push to ECR.
  • S3 triggers: Verify the S3 notification configuration and the Lambda permission that allows S3 to invoke it.
  • Upload progress: The server relies on @aws-sdk/lib-storage httpUploadProgress events — pass a valid socketId from the client.
  • FFmpeg: @ffmpeg-installer/ffmpeg or ffmpeg-static provide binaries; on Linux containers, ensure the runtime image has the required libs.
  • Redis / concurrency: JobConsumer uses a Redis server_count value to limit concurrency — supply a reachable Redis instance.

🤝 Contributing

We welcome contributions! Please:

  • Open an issue before changing infra (Pulumi)
  • Add/update tests where applicable
  • Document new env vars and update this README

✅ Pre-production checklist

  • Provide production MongoDB and Redis endpoints
  • Harden IAM policies (least privilege)
  • Add S3 encryption & bucket policies
  • Configure logging & monitoring (CloudWatch, alerts)
  • Store secrets in Pulumi config / Secrets Manager


Happy hacking! 🚀
