A full-stack web application for downloading videos from YouTube and other platforms using yt-dlp.
For autonomous agent development, see AGENTS.md for project conventions and commands.
```bash
# Start backend (Zig)
cd backend && zig build && ./zig-out/bin/dldl-backend

# Start frontend (Vue.js) in another terminal
cd frontend && npm install && npm run dev
```

Access the app at http://localhost:5173.
```bash
# Build and start all services
docker-compose up -d

# View logs
docker-compose logs -f
```

Access the app at http://localhost.
```
dldl/
├── backend/               # Zig backend
│   └── src/
│       ├── main.zig       # Entry point
│       ├── server.zig     # HTTP server
│       ├── ytdlp.zig      # yt-dlp integration
│       └── types.zig      # Type definitions
├── frontend/              # Vue.js frontend
│   └── src/
│       ├── components/    # UI components
│       ├── views/         # Page views
│       └── stores/        # Pinia stores
├── Dockerfile             # Container build
├── docker-compose.yml     # Multi-container setup
├── haproxy.cfg            # Load balancer config
└── SPEC.md                # Full specification
```
| Layer | Technology |
|---|---|
| Frontend | Vue.js 3, Vite, TypeScript, Pinia |
| Backend | Zig (std.http) |
| Download | yt-dlp |
| Load Balancer | HAProxy |
| Container | Docker, Docker Compose |
| Testing | Vitest, Playwright |
| Code Quality | ESLint, Prettier, lint-staged, knip, jscpd |
| Method | Path | Description |
|---|---|---|
| GET | /health | Health check |
| POST | /api/info | Get video info |
| POST | /api/download | Start download |
| GET | /api/status/:id | Download status |
```bash
# Get video info
curl -X POST http://localhost:8080/api/info \
  -H "Content-Type: application/json" \
  -d '{"url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ"}'

# Start download
curl -X POST http://localhost:8080/api/download \
  -H "Content-Type: application/json" \
  -d '{"url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "format_id": "best"}'

# Check status
curl http://localhost:8080/api/status/dl-1234567890
```

```
          ┌──────────────────┐
          │     HAProxy      │
          │  (Port 80/8080)  │
          └────────┬─────────┘
                   │
    ┌──────────────┼──────────────┐
    │              │              │
┌───┴─────┐   ┌────┴────┐   ┌────┴────┐
│  App 1  │   │  App 2  │   │  App 3  │  ...
│  1 CPU  │   │  1 CPU  │   │  1 CPU  │
│  4GB    │   │  4GB    │   │  4GB    │
└─────────┘   └─────────┘   └─────────┘
```
Each app instance is limited to 1 CPU core and 4GB RAM.
| Variable | Default | Description |
|---|---|---|
| PORT | 8080 | Server port |
| DOWNLOAD_DIR | ./downloads | Download directory |
| YTDLP_PATH | yt-dlp | yt-dlp executable path |
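These defaults can be resolved in one place at startup. A minimal TypeScript sketch of env-with-fallback loading (illustrative only — the backend itself is Zig, and `loadConfig` is a hypothetical helper, not part of this codebase):

```typescript
// Read configuration from the environment, falling back to the
// documented defaults when a variable is unset.
interface AppConfig {
  port: number;
  downloadDir: string;
  ytdlpPath: string;
}

function loadConfig(
  env: Record<string, string | undefined> = process.env,
): AppConfig {
  return {
    port: Number(env.PORT ?? 8080),
    downloadDir: env.DOWNLOAD_DIR ?? "./downloads",
    ytdlpPath: env.YTDLP_PATH ?? "yt-dlp",
  };
}
```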
- Web UI: http://localhost
- Stats Page: http://localhost:8080/haproxy_stats
When adding logging to the application:
- Never log sensitive data: URLs, user inputs, or error messages containing user data
- Sanitize logs: Remove or mask any PII before logging
- Use structured logging: JSON format is preferred for production logs
- Log levels: Use appropriate levels (ERROR for failures, INFO for significant events, DEBUG for development)
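One way to apply the sanitization rule above is to reduce user-supplied URLs to their host before they ever reach a log line. A sketch, where `sanitizeUrl` is a hypothetical helper, not an existing utility in this codebase:

```typescript
// Strip the path and query from a user-supplied URL so that logs never
// contain the full video URL, only the host it pointed at.
function sanitizeUrl(raw: string): string {
  try {
    const u = new URL(raw);
    return `${u.protocol}//${u.hostname}/…`; // keep host only
  } catch {
    return "<invalid-url>"; // never echo unparseable input back into logs
  }
}
```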
```typescript
// BAD - logs potentially sensitive data
console.error(`Failed to fetch: ${url}`);

// GOOD - logs only safe metadata
console.error(`Failed to fetch video info`, { error: 'network_timeout', timestamp: Date.now() });
```

This project tracks technical debt through TODO/FIXME comments. When adding technical debt:
- Link to issues: All TODO/FIXME comments must reference an issue: `TODO(#123): description`
- Use descriptive language: Explain why this is debt and what the ideal solution would be
- Prioritize: Include severity hints like `TODO(high):` or `TODO(low):`
```typescript
// TODO(#45): Refactor to use a proper state management library
// Currently using localStorage directly - should be abstracted behind
// a service interface for better testability. Priority: medium.
```

This project uses environment variables for configuration. In production deployments:
- Never commit `.env` files - they are gitignored
- Use `.env.example` as a template for required variables
- Inject secrets via Docker using `-e` flags or `docker-compose.override.yml`
- For production, consider using Docker secrets or external secret managers
```yaml
# docker-compose.prod.yml
services:
  app-1:
    environment:
      - PORT=8080
      - DOWNLOAD_DIR=/app/downloads
      # For production, use: --env-file or Docker secrets
```

```bash
cd frontend
npm install
npm run dev    # Development server (port 5173)
npm run build  # Production build
npm run lint   # Run ESLint
npm run check  # Run all quality checks (lint, format, cpd, dead code)
```

```bash
cd backend
zig build                # Debug build
zig build -OReleaseSafe  # Release build
```

```bash
cd frontend
npm test                  # Run unit tests (Vitest)
npm run test:coverage     # Run with coverage
npm run test:integration  # Run Playwright integration tests
npm run test:all          # Run all tests
```

The project enforces quality standards:
- ESLint: TypeScript and Vue linting with complexity and naming conventions
- Prettier: Code formatting
- lint-staged: Auto-format on commit
- knip: Dead code and unused dependency detection
- jscpd: Copy-paste detection
Run all checks:

```bash
npm run check  # lint && format:check && cpd && dead
```

Automated workflows run on every push to master:
| Workflow | Purpose |
|---|---|
| Release | Run tests, build, publish Docker images |
| Security Scanning | OWASP ZAP baseline scan |
| Observability | Sentry monitoring and metrics |
| Documentation | Auto-generate API docs and changelog |
| Dependabot | Weekly dependency updates |
See .github/workflows/ for details.
The frontend uses pino for structured JSON logging:
- Development: Human-readable format with colors
- Production: JSON format for log aggregation
- Automatic redaction of sensitive fields (auth headers, cookies, passwords)
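As an illustration of what field-based redaction does (a simplified sketch, not pino's actual implementation — pino's `redact` option also handles nested key paths):

```typescript
// Replace the value of any key whose name looks sensitive before the
// record is serialized, so secrets never reach the log sink.
const SENSITIVE = /authorization|cookie|password|token/i;

function redact(record: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(record)) {
    out[key] = SENSITIVE.test(key) ? "[REDACTED]" : value;
  }
  return out;
}
```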
Sentry integration for error tracking:
- Configure via `VITE_SENTRY_DSN` environment variable
- Enable via `VITE_SENTRY_ENABLED=true`
- Captures exceptions, breadcrumbs, and session replays
OpenTelemetry for request tracing:
- Configure via `VITE_OTEL_ENDPOINT` environment variable
- X-Request-ID propagation across services
- Automatic span creation for API requests
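Header propagation can be sketched as a small wrapper that guarantees every outgoing request carries an id (`withRequestId` is a hypothetical helper, not the project's actual client code):

```typescript
import { randomUUID } from "node:crypto";

// Ensure an outgoing request's headers carry an X-Request-ID, generating a
// fresh UUID only when the caller has not already set one (so ids survive
// hops across services unchanged).
function withRequestId(
  headers: Record<string, string> = {},
): Record<string, string> {
  return {
    ...headers,
    "X-Request-ID": headers["X-Request-ID"] ?? randomUUID(),
  };
}
```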
PostHog for product analytics:
- Configure via `VITE_POSTHOG_KEY` environment variable
- Enable via `VITE_POSTHOG_ENABLED=true`
- Tracks user interactions and feature usage
- Feature flags for progressive rollouts
Built-in circuit breakers protect against cascading failures:
- `api`: External API calls (threshold: 5 failures, reset: 60s)
- `download`: yt-dlp subprocess calls
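A failure-count breaker with the documented defaults (threshold 5, reset 60s) can be sketched as follows; this is illustrative only, not the project's implementation:

```typescript
// Open the circuit after `threshold` consecutive failures; allow traffic
// again once `resetMs` has elapsed (a simplified half-open transition).
class CircuitBreaker {
  private failures = 0;
  private openedAt: number | null = null;

  constructor(
    private threshold = 5,
    private resetMs = 60_000,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  isOpen(): boolean {
    if (this.openedAt === null) return false;
    if (this.now() - this.openedAt >= this.resetMs) {
      // Reset window elapsed: close the circuit and start counting afresh.
      this.openedAt = null;
      this.failures = 0;
      return false;
    }
    return true;
  }

  recordFailure(): void {
    this.failures += 1;
    if (this.failures >= this.threshold) this.openedAt = this.now();
  }

  recordSuccess(): void {
    this.failures = 0;
    this.openedAt = null;
  }
}
```

Callers check `isOpen()` before invoking the external API or spawning yt-dlp, and record the outcome afterwards.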
Prometheus-compatible metrics available at runtime:
```typescript
import { metricsApi } from './utils/metrics';

metricsApi.getMetricsJSON();
metricsApi.getMetrics(); // Prometheus text format
```

Alerts are configured via:
- Sentry issue creation for critical errors
- GitHub Actions workflow for scheduled monitoring
- PagerDuty/OpsGenie integration (configure secrets)
See docs/runbooks/ for incident response procedures.
The application uses PostHog feature flags for safe rollouts:
- Configure flags in `frontend/src/config/featureFlags.ts`
- Monitor flag status in PostHog dashboard
- See docs/runbooks/progressive-rollout.md
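A typed registry of the kind `featureFlags.ts` might contain can be sketched as follows (the flag names, defaults, and `isEnabled` helper here are hypothetical examples, not the file's actual contents):

```typescript
// Local flag definitions with safe defaults; the remote value from PostHog,
// when present, takes precedence over the default.
interface FeatureFlag {
  key: string;
  defaultValue: boolean;
  description: string;
}

const featureFlags: Record<string, FeatureFlag> = {
  newDownloadUi: {
    key: "new-download-ui",
    defaultValue: false,
    description: "Progressive rollout of the redesigned download form",
  },
};

function isEnabled(name: string, remoteValue?: boolean): boolean {
  const flag = featureFlags[name];
  if (!flag) return false; // unknown flags are off by default
  return remoteValue ?? flag.defaultValue;
}
```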
- Trigger Release Workflow: Go to Actions > Release > Run workflow
- Select Version Type: patch, minor, or major
- Automated Steps:
  - Run tests
  - Build frontend
  - Generate changelog
  - Create Docker images
  - Publish GitHub Release
See docs/runbooks/rollback.md for rollback procedures.
| Method | Use Case | Time |
|---|---|---|
| Disable Feature Flag | Flag-related issues | < 1 min |
| Git Revert | Code issues | 5-10 min |
| Docker Image Rollback | Container issues | 10-15 min |
| HAProxy Config Rollback | Config issues | 2-5 min |
MIT