A production-ready video transcoding system. Upload → S3 → SQS → ECS (FFmpeg) — orchestrated with Pulumi.
- client/ — React + Vite frontend (upload UI & realtime progress)
- server/ — Express API (handles uploads, creates DB records, streams upload progress)
- IAC/ — Pulumi TypeScript infra (S3, SQS, Lambdas, ECS/ECR, VPC, etc.)
- IAC/VideoEncodingService/ — video transcoder (FFmpeg via fluent-ffmpeg)
Tip: This README documents the exact scripts, env vars, and deploy steps used in the codebase.
- Local dev: client/, server/, IAC/VideoEncodingService/
- Pulumi infra: IAC/index.ts
- Video transcode logic: IAC/VideoEncodingService/src/index.ts
- Scalable: ECS Fargate tasks are launched on demand via Lambdas + SQS
- Resilient: S3 for durable storage, SQS for work queueing
- Observable: uploads emit realtime progress via socket.io
- Reproducible infra: Pulumi TypeScript stacks
Make sure you have the following installed and configured:
- Node.js 16+ (Node 18 for production build & Pulumi lambdas)
- npm
- Docker (for building images & pushing to ECR)
- AWS CLI with credentials configured
- Pulumi CLI (logged in)
- MongoDB (Atlas or local) and Redis reachable by services
- npm scripts:
  - `npm run dev` — start the Vite dev server
  - `npm run build` — `tsc -b && vite build`
  - `npm run lint` — `eslint .`
  - `npm run preview` — `vite preview`
- Key deps: react, axios, @reduxjs/toolkit, socket.io-client, tailwindcss, vite
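
To receive the realtime progress events, the client needs the socket id issued by socket.io. A minimal sketch, assuming the server emits `upload_progress` as described in the upload flow below (the numeric payload shape is an assumption):

```typescript
// Minimal sketch: connect to the API server, capture the socket id to send with the
// upload request, and listen for progress events.
import { io } from "socket.io-client";

const socket = io(import.meta.env.VITE_API_URL ?? "http://localhost:5000");

socket.on("connect", () => {
  // Send this id as the `socketId` form field of the upload request.
  console.log("socketId:", socket.id);
});

socket.on("upload_progress", (progress: number) => {
  console.log(`Upload progress: ${progress}%`);
});
```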
- npm scripts:
  - `npm run dev` — `nodemon --ext ts --exec ts-node src/index.ts`
  - `npm run build` — `tsc`
  - `npm run start` — `node dist/index.js`
- Key deps: express, mongoose, multer, @aws-sdk/client-s3, @aws-sdk/lib-storage, socket.io
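
For orientation, here is a hedged sketch of how these pieces fit together in the upload route described below: multer keeps the file in memory, `@aws-sdk/lib-storage` streams it to S3 under `temp/`, and `httpUploadProgress` events are forwarded over socket.io. This is a simplified illustration, not the actual `server/` code (the VideoModel write is omitted):

```typescript
import express from "express";
import multer from "multer";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import type { Server } from "socket.io";

const upload = multer({ storage: multer.memoryStorage() });
const s3 = new S3Client({ region: process.env.AWS_REGION });

export function registerUploadRoute(app: express.Express, io: Server) {
  app.post("/api/upload", upload.single("file"), async (req, res) => {
    const { socketId } = req.body;
    const file = req.file;
    if (!file) {
      res.status(400).json({ message: "file is required" });
      return;
    }

    // Stream the original to S3 under temp/ (the real server also creates a VideoModel record).
    const uploader = new Upload({
      client: s3,
      params: {
        Bucket: process.env.S3_BUCKET_NAME,
        Key: `temp/${Date.now()}-${file.originalname}`,
        Body: file.buffer,
      },
    });

    // Forward multipart-upload progress to the browser via socket.io.
    uploader.on("httpUploadProgress", (p) => {
      const percent = p.total ? Math.round(((p.loaded ?? 0) / p.total) * 100) : 0;
      io.to(socketId).emit("upload_progress", percent);
    });

    await uploader.done();
    res.json({ message: "Uploaded" });
  });
}
```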
- npm scripts:
  - `npm run dev` — `ts-node src/index.ts`
  - `npm run build` — `tsc`
  - `npm run start` — `node dist/index.js`
- Key deps: fluent-ffmpeg, ffmpeg-static, @ffmpeg-installer/ffmpeg, @aws-sdk/client-s3, @aws-sdk/client-sqs, redis
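
As a rough illustration of the transcode step — a sketch only, assuming `ffmpeg-static` supplies the binary; the real service loops over several resolutions and uploads each output back to S3:

```typescript
import ffmpeg from "fluent-ffmpeg";
import ffmpegPath from "ffmpeg-static";

// Point fluent-ffmpeg at the bundled binary so no system-wide install is needed.
ffmpeg.setFfmpegPath(ffmpegPath as string);

// Transcode one input to 360p H.264/AAC; the real service repeats this per resolution.
export function transcodeTo360p(inputPath: string, outputPath: string): Promise<void> {
  return new Promise((resolve, reject) => {
    ffmpeg(inputPath)
      .outputOptions(["-vf scale=-2:360", "-c:v libx264", "-c:a aac"])
      .on("end", () => resolve())
      .on("error", reject)
      .save(outputPath);
  });
}
```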
- Key packages: @pulumi/pulumi, @pulumi/aws, @pulumi/docker, @pulumi/awsx, esbuild, archiver
Create `.env` files per service. Never commit secrets.
Purpose: secrets & runtime values used by Pulumi scripts and local builds. The repo contains an IAC/.env file but it may contain real secrets — remove or secure it before pushing.
| Variable | Required | Purpose |
|---|---|---|
| AWS_ACCESS_KEY_ID | ✅ | AWS credentials (use IAM user with limited permissions) |
| AWS_SECRET_ACCESS_KEY | ✅ | AWS secret (store in secret store) |
| AWS_REGION | ✅ | Default region for Pulumi & SDKs |
| REDIS_CLOUD_HOST | ✅ | Redis host used by Lambdas/ECS |
| REDIS_CLOUD_PORT | ✅ | Redis port |
| REDIS_CLOUD_PASSWORD | ✅ | Redis password (secret) |
| MONGOOSE_DB_URL | ✅ | MongoDB connection string (secret) |
Example IAC/.env.example (placeholders):
# IAC/.env.example — DO NOT commit with real values
AWS_ACCESS_KEY_ID=REDACTED_AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=REDACTED_AWS_SECRET_ACCESS_KEY
AWS_REGION=ap-south-1
REDIS_CLOUD_HOST=REDACTED_REDIS_HOST
REDIS_CLOUD_PORT=REDACTED_REDIS_PORT
REDIS_CLOUD_PASSWORD=REDACTED_REDIS_PASSWORD
MONGOOSE_DB_URL=mongodb+srv://<REDACTED_USER>:<REDACTED_PASSWORD>@cluster0.example.net/mydb
Recommended production flow:
Use `pulumi config set aws:accessKeyId <id>` and `pulumi config set --secret aws:secretAccessKey <secret>` (or use an IAM role / environment credentials on CI hosts).
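
If you move values into Pulumi config, the stack code can read them as secrets. A hypothetical snippet — the key names are illustrative, not what IAC/index.ts currently reads:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Values set with `pulumi config set` / `pulumi config set --secret` for this stack.
const cfg = new pulumi.Config();
const awsCfg = new pulumi.Config("aws");

const region = awsCfg.require("region");
// requireSecret returns an Output<string> that stays encrypted in the Pulumi state.
const redisPassword = cfg.requireSecret("redisCloudPassword"); // hypothetical key name
```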
Path: server/.env
| Variable | Required | Purpose |
|---|---|---|
| PORT | ✅ | Server port (dev default: 5000) |
| MONGOOSE_DB_URL | ✅ | MongoDB connection URL |
| AWS_REGION | ✅ | AWS region |
| AWS_ACCESS_KEY_ID | ✅ | AWS key (for dev only) |
| AWS_SECRET_ACCESS_KEY | ✅ | AWS secret (for dev only) |
| S3_BUCKET_NAME | ✅ | S3 bucket used by server uploads |
| ENV | optional | 'DEV' or 'PROD' — affects client origin checks |
Example server/.env.example:
PORT=5000
MONGOOSE_DB_URL=mongodb+srv://<REDACTED_USER>:<REDACTED_PASSWORD>@cluster0.example.net/transcoder
AWS_REGION=ap-south-1
AWS_ACCESS_KEY_ID=REDACTED_AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=REDACTED_AWS_SECRET_ACCESS_KEY
S3_BUCKET_NAME=REDACTED_S3_BUCKET_NAME
ENV=DEV
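
Optionally, the server can fail fast when a required variable is missing. A small sketch (the file name and the use of dotenv are assumptions):

```typescript
// Hypothetical server/src/config.ts: load .env and validate required variables.
import "dotenv/config";

const required = ["PORT", "MONGOOSE_DB_URL", "AWS_REGION", "S3_BUCKET_NAME"] as const;

for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}

export const config = {
  port: Number(process.env.PORT),
  mongoUrl: process.env.MONGOOSE_DB_URL as string,
  awsRegion: process.env.AWS_REGION as string,
  s3Bucket: process.env.S3_BUCKET_NAME as string,
  env: process.env.ENV ?? "DEV",
};
```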
Path: client/.env (use Vite conventions or env injected at build time)
| Variable | Required | Purpose |
|---|---|---|
| VITE_API_URL | ✅ | URL of the server (e.g., http://localhost:5000) |
| VITE_SOMETHING_ELSE | optional | other client-side toggles (non-secret) |
Example client/.env.example:
VITE_API_URL=http://localhost:5000
# Non-secret flags for the client
VITE_ENABLE_DEBUG=true
Note: Client envs must not contain secrets (they end up in the browser). Keep only public/feature flags here.
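
A minimal sketch of how the client might consume `VITE_API_URL` (Vite only exposes variables prefixed with `VITE_` via `import.meta.env`):

```typescript
// Hypothetical client/src/api.ts: a shared axios instance pointed at the server.
import axios from "axios";

export const api = axios.create({
  baseURL: import.meta.env.VITE_API_URL ?? "http://localhost:5000",
});
```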
Path: IAC/VideoEncodingService/.env
| Variable | Required | Purpose |
|---|---|---|
| MONGOOSE_DB_URL | ✅ | MongoDB URL to update VideoModel |
| S3_BUCKET | ✅ | Bucket where the originals and outputs are stored |
| SQS_QUEUE_URL | ✅ | URL of the SQS queue used by the system |
| REDIS_CLOUD_HOST | ✅ | Redis host for worker counters |
| REDIS_CLOUD_PORT | ✅ | Redis port |
| REDIS_CLOUD_PASSWORD | ✅ | Redis password (secret) |
| AWS_REGION | ✅ | AWS region |
| AWS_ACCESS_KEY_ID | optional | Only if running locally without instance role |
| AWS_SECRET_ACCESS_KEY | optional | Only if running locally without instance role |
| S3_FILE_KEY | runtime | Set on the ECS task by the JobConsumer Lambda (S3 key from the SQS message) |
| RECEIPT_HANDLE | runtime | SQS receipt handle used to delete the message after transcoding |
Example IAC/VideoEncodingService/.env.example:
MONGOOSE_DB_URL=mongodb+srv://<REDACTED_USER>:<REDACTED_PASSWORD>@cluster0.example.net/transcoder
S3_BUCKET=REDACTED_S3_BUCKET_NAME
SQS_QUEUE_URL=REDACTED_SQS_QUEUE_URL
REDIS_CLOUD_HOST=REDACTED_REDIS_HOST
REDIS_CLOUD_PORT=REDACTED_REDIS_PORT
REDIS_CLOUD_PASSWORD=REDACTED_REDIS_PASSWORD
AWS_REGION=ap-south-1
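
The two runtime variables tell the worker which object to fetch and how to acknowledge the job. A hedged sketch of the acknowledgement step (not the exact service code):

```typescript
import { SQSClient, DeleteMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: process.env.AWS_REGION });

// Call this only after the transcode succeeded, so SQS can redeliver the job on failure.
export async function acknowledgeJob(): Promise<void> {
  await sqs.send(
    new DeleteMessageCommand({
      QueueUrl: process.env.SQS_QUEUE_URL,
      ReceiptHandle: process.env.RECEIPT_HANDLE,
    })
  );
}
```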
- Clone the repo

```
git clone <repo-url> Transcoder
cd Transcoder
```

- Install deps

```
# Client
cd client
npm install

# Server
cd ..\server
npm install

# IAC
cd ..\IAC
npm install

# Video encoding service
cd .\VideoEncodingService
npm install
```

- Run services (dev)

Server (dev):

```
cd server
npm run dev
# Listening: http://localhost:5000
```

Client (dev):

```
cd ..\client
npm run dev
# Frontend: http://localhost:5173
```

VideoEncodingService (dev — needs .env):

```
cd ..\IAC\VideoEncodingService
# create .env (see templates above)
npm run dev
```

Tip: Use LocalStack for local S3/SQS testing or ngrok for exposing local endpoints.
```
GET /
# Response: { "message": "Server Headl is Good 🚀🚀" }
```

POST /api/upload

- Form field: file (binary)
- Body field: socketId (socket.io id to receive progress events)

Example (curl):

```
curl -v -X POST "http://localhost:5000/api/upload" \
  -F "file=@./path/to/video.mp4" \
  -F "socketId=<SOCKET_ID>"
```

Server flow (brief):

- Server receives the file with `multer` (memory storage)
- Server creates a DB record in VideoModel
- Server uploads the original to S3 under `temp/` using `@aws-sdk/lib-storage` and emits `upload_progress` via socket.io
- S3 `ObjectCreated` (prefix `temp/`) triggers the JobDispatcher Lambda, which sends a message to SQS (see the sketch below)
- The JobConsumer Lambda receives the SQS message and launches an ECS (Fargate) task to transcode
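
A hedged sketch of the JobDispatcher step from the list above — forwarding the S3 ObjectCreated event for `temp/` objects to SQS (the queue-URL variable name is assumed):

```typescript
import type { S3Event } from "aws-lambda";
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    // S3 keys arrive URL-encoded in the event payload.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    await sqs.send(
      new SendMessageCommand({
        QueueUrl: process.env.SQS_QUEUE_URL, // assumed env var name
        MessageBody: JSON.stringify({ key, bucket: record.s3.bucket.name }),
      })
    );
  }
};
```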
The following is the end-to-end flow once a user uploads a file via the frontend:
- Client sends a multipart POST to `/api/upload` with the `file` field and `socketId`.
- Server (`server/`) accepts the file (multer memory storage), creates a `VideoModel` record in MongoDB, and starts an S3 multipart upload to `temp/<timestamp>-<name>`. It streams upload progress to the client via socket.io using the provided `socketId`.
- When the object is created in S3 (prefix `temp/`), the bucket notification triggers the JobDispatcher Lambda, which enqueues a message in SQS containing the S3 key and metadata.
- The JobConsumer Lambda receives the SQS message, performs a quick Redis concurrency check, and calls ECS RunTask to start a Fargate task (container image from ECR) with environment variables pointing to the S3 key and SQS receipt handle.
- The ECS task (VideoEncodingService) downloads the original from S3, transcodes it to the configured resolutions, uploads each output back to S3 (under resolution-specific prefixes), updates the corresponding `VideoModel` document in MongoDB with URLs and status, and finally deletes the SQS message using the provided receipt handle.
- (Optional) A scheduled S3Cleanup Lambda runs periodically to remove old generated files.
This design makes the pipeline event-driven, scalable, and fault-tolerant: SQS provides retry and dead-letter handling, ECS tasks are isolated workers, and Lambdas orchestrate the glue.
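
A hedged sketch of the JobConsumer step — checking the Redis `server_count` counter (mentioned in Troubleshooting) before calling ECS RunTask with the S3 key and receipt handle as environment overrides. The cluster, task definition, subnet, container name, concurrency limit, and skip-on-limit policy are placeholders, not the actual Lambda code:

```typescript
import type { SQSEvent } from "aws-lambda";
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";
import { createClient } from "redis";

const ecs = new ECSClient({});

export const handler = async (event: SQSEvent): Promise<void> => {
  const redis = createClient({
    socket: { host: process.env.REDIS_CLOUD_HOST, port: Number(process.env.REDIS_CLOUD_PORT) },
    password: process.env.REDIS_CLOUD_PASSWORD,
  });
  await redis.connect();

  for (const record of event.Records) {
    const { key } = JSON.parse(record.body);

    // Simple concurrency guard: skip launching if too many workers are already running.
    const running = Number((await redis.get("server_count")) ?? 0);
    if (running >= 5) continue; // placeholder limit and policy

    await ecs.send(
      new RunTaskCommand({
        cluster: process.env.ECS_CLUSTER_ARN,        // placeholder
        taskDefinition: process.env.TASK_DEFINITION, // placeholder
        launchType: "FARGATE",
        networkConfiguration: {
          awsvpcConfiguration: {
            subnets: [process.env.SUBNET_ID!],       // placeholder
            assignPublicIp: "ENABLED",
          },
        },
        overrides: {
          containerOverrides: [
            {
              name: "video-transcoder", // placeholder container name
              environment: [
                { name: "S3_FILE_KEY", value: key },
                { name: "RECEIPT_HANDLE", value: record.receiptHandle },
              ],
            },
          ],
        },
      })
    );
  }

  await redis.quit();
};
```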
- Note: The router defines `/get_videos`, but it is currently unimplemented in `server/src/routes/index.ts`.
```mermaid
flowchart LR
    subgraph Client
        A[👩‍💻 User Browser]
    end
    A -->|"Upload (+socketId)"| B["📡 Server /api/upload"]
    B -->|"Upload temp/ key"| S3[("📦 S3 Bucket")]
    S3 -.->|"S3:ObjectCreated (temp/)"| JD["🛰 JobDispatcher Lambda"]
    JD -->|"Sends msg"| SQS[("📥 SQS Queue")]
    SQS -->|"Event"| JC["🛰 JobConsumer Lambda"]
    JC -->|"Run Task"| ECS["🚀 ECS (Fargate) Task"]
    ECS -->|"Downloads"| S3
    ECS -->|"Transcodes & uploads to"| S3
    S3 -->|"(Optional) S3Cleanup Lambda (cron)"| Cleanup["🧹 Cleanup Lambda"]

    classDef aws fill:#fffbdd,stroke:#ffb86b
    class S3,SQS,JD,JC,ECS,Cleanup aws
```
Short summary:
- Client uploads → Server → S3
- S3 triggers JobDispatcher (Lambda) → SQS
- JobConsumer (Lambda) → launches ECS task
- ECS task downloads original → transcodes → uploads outputs to S3
- Cleanup Lambda removes old generated files periodically
Provisioned by Pulumi (IAC/index.ts):
- VPC, Subnet, Internet Gateway, Route Table
- S3 bucket (`trans-bucket`) — stores `temp/` originals and generated outputs
- SQS queue (`transQueue`) — decouples dispatcher & worker
- Lambdas:
  - JobDispatcher — S3 event → send message to SQS
  - JobConsumer — SQS event → RunTask (ECS)
  - S3Cleanup — scheduled cleanup job
- ECR repo + Docker image push + ECS Cluster + task definitions
Permissions: Lambdas have IAM roles/policies attached (S3, SQS, CloudWatch, ECS permissions — review for least privilege before prod).
Note: Pulumi does not create MongoDB or Redis — those are expected to be provided and accessible.
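
For orientation, a simplified Pulumi sketch of the core bucket → dispatcher → queue wiring listed above. Resource names mirror the README, but the arguments and the inline callback are placeholders, not the exact IAC/index.ts code:

```typescript
import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("trans-bucket");
const queue = new aws.sqs.Queue("transQueue");

// Placeholder dispatcher; the real Lambda forwards the S3 key to the queue.
const jobDispatcher = new aws.lambda.CallbackFunction("JobDispatcher", {
  callback: async (event: unknown) => {
    console.log("S3 event:", JSON.stringify(event));
  },
});

// Allow S3 to invoke the Lambda, then fire it on ObjectCreated events under temp/.
const allowS3Invoke = new aws.lambda.Permission("allow-s3-invoke", {
  action: "lambda:InvokeFunction",
  function: jobDispatcher.name,
  principal: "s3.amazonaws.com",
  sourceArn: bucket.arn,
});

new aws.s3.BucketNotification("temp-object-created", {
  bucket: bucket.id,
  lambdaFunctions: [{
    lambdaFunctionArn: jobDispatcher.arn,
    events: ["s3:ObjectCreated:*"],
    filterPrefix: "temp/",
  }],
}, { dependsOn: [allowS3Invoke] });

export const queueUrl = queue.url;
```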
⚠️ Important: Running Pulumi triggers a local Docker build & ECR push (the Pulumi code uses `docker build` + `docker push`). Ensure Docker is running and you have AWS ECR push permissions.
- Login to Pulumi

```
pulumi login
```

- Initialize stack and set region

```
cd IAC
pulumi stack init transcoder-dev
pulumi config set aws:region ap-south-1 --stack transcoder-dev
```

- Export AWS credentials to env (PowerShell example)

```
$env:AWS_ACCESS_KEY_ID = "AKIA..."
$env:AWS_SECRET_ACCESS_KEY = "..."
$env:AWS_DEFAULT_REGION = "ap-south-1"
# Optionally set REDIS_CLOUD_* envs used by some lambdas
```

- Run Pulumi deploy

```
pulumi up --stack transcoder-dev
# Review preview and confirm
```

- Rollback / destroy

```
pulumi destroy --stack transcoder-dev
pulumi stack rm --yes transcoder-dev
```

If Pulumi's automatic image build/push fails, you can build/push manually:

```
# Build locally
docker build -t video-transcoder-service:local .

# Tag for ECR
docker tag video-transcoder-service:local <ECR_REPO_URL>:latest

# Authenticate to ECR
aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin <ECR_REPO_URL>

docker push <ECR_REPO_URL>:latest
```

- Pulumi + Docker: Ensure the Docker daemon is running and you have permission to push to ECR.
- S3 triggers: Verify the S3 bucket notification configuration and the Lambda permission that allows S3 to invoke it.
- Upload progress: The server uses `@aws-sdk/lib-storage` `httpUploadProgress` events — pass a valid `socketId` from the client.
- FFmpeg: `@ffmpeg-installer/ffmpeg` or `ffmpeg-static` provide the binaries; on Linux containers ensure the runtime has the required libs.
- Redis / concurrency: JobConsumer uses the Redis `server_count` value to limit concurrency — supply a reachable Redis instance.
We welcome contributions! Please:
- Open an issue before changing infra (Pulumi)
- Add/update tests where applicable
- Document new env vars and update this README
- Provide production MongoDB and Redis endpoints
- Harden IAM policies (least privilege)
- Add S3 encryption & bucket policies
- Configure logging & monitoring (CloudWatch, alerts)
- Store secrets in Pulumi config / Secrets Manager
Possible next steps:
- Add `.env.example` files to each service folder (client, server, IAC, VideoEncodingService)
- Create a Pulumi stack example with secure `pulumi config set --secret` commands documented
- Add a developer quickstart script (PowerShell) to automate the local dev start sequence
Happy hacking! 🚀