Auto-generated by EasyDeploy. Readable by humans and AI agents alike. This file is the canonical reference for this app: deploy flow, secrets, observability, and runtime setup.
| Resource | URL |
|---|---|
| Dev | https://my-app-dev.easy-deploy.135.181.177.246.nip.io |
| Prod | https://my-app.easy-deploy.135.181.177.246.nip.io |
| ArgoCD | https://argocd.easy-deploy.135.181.177.246.nip.io/applications/my-app-dev |
| Grafana dashboard | https://shanzindlr.grafana.net/d/easydeploy-my-app/my-app |
| Infisical secrets | https://infisical.easy-deploy.135.181.177.246.nip.io → project my-app |
| Portal | https://portal-dev.easy-deploy.135.181.177.246.nip.io |
| GitHub repo | https://github.com/easydeploytest/my-app |
This app runs on EasyDeploy — a k3s cluster on Hetzner (IP 135.181.177.246) managed by:
- ArgoCD — GitOps continuous delivery. Every push to `main` triggers a deploy to dev. Prod is triggered by a GitHub Release.
- Infisical — Secret management. All environment variables are stored here and injected into pods automatically. No secrets in git.
- Grafana Cloud — Observability: metrics, traces, and logs. Requires manual OTel SDK setup in your app (see below).
- Docker registry — Private image registry at `registry.easy-deploy.135.181.177.246.nip.io`.
If this repo still runs the EasyDeploy template app, follow these steps:
1. Edit `app.yaml` — set `name` (already `my-app`), `team`, and `port` (your app's HTTP listen port):

   ```yaml
   name: my-app
   team: easy-deploy
   port: 3000
   ```

2. Write your app in `src/` — delete template files, add your own. Any language is supported.

3. Update `Dockerfile` — build your app and expose port `3000`. A `/healthz` endpoint returning HTTP 200 is required. Examples:

   ```dockerfile
   # Node.js
   FROM node:20-alpine
   WORKDIR /app
   COPY package*.json ./
   RUN npm ci --production
   COPY src/ ./src/
   EXPOSE 3000
   CMD ["node", "src/index.js"]
   ```

   ```dockerfile
   # Bun + Elysia
   FROM oven/bun:1-alpine
   WORKDIR /app
   COPY package.json bun.lockb ./
   RUN bun install --frozen-lockfile
   COPY src/ ./src/
   EXPOSE 3000
   CMD ["bun", "src/index.ts"]
   ```

   ```dockerfile
   # Python (FastAPI / uvicorn)
   FROM python:3.12-slim
   WORKDIR /app
   COPY requirements.txt .
   RUN pip install -r requirements.txt
   COPY . .
   EXPOSE 3000
   CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "3000"]
   ```

   ```dockerfile
   # Go (multi-stage)
   FROM golang:1.22-alpine AS build
   WORKDIR /app
   COPY go.* ./
   RUN go mod download
   COPY . .
   RUN go build -o server .

   FROM alpine:3.19
   COPY --from=build /app/server /server
   EXPOSE 3000
   CMD ["/server"]
   ```

4. Push to `main` — CI builds the image, pushes it, and ArgoCD deploys to dev automatically.

   ```bash
   git add -A
   git commit -m "feat: replace template with my app"
   git push
   ```
Dev deploy:

push to main
→ GitHub Actions builds Docker image → pushes to registry.easy-deploy.135.181.177.246.nip.io
→ CI writes new image tag to apps/my-app/values-dev.yaml → commits back to EasyDeploy repo
→ ArgoCD detects change → syncs my-app-dev namespace
→ pod rolls out → dev URL is live
→ portal receives deploy notification
Prod deploy:
```bash
gh release create v1.0.0 --title "v1.0.0" --notes "First prod release"
```
→ CI re-tags image with release version
→ ArgoCD syncs my-app-prod namespace
→ prod URL is live: https://my-app.easy-deploy.135.181.177.246.nip.io
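After cutting a release, it can take a minute or two before the prod URL serves traffic. A minimal polling sketch (the URL comes from the table above; the timeout and interval values are arbitrary choices, not platform defaults):

```python
import time
import urllib.request
import urllib.error

def wait_until_live(url: str, timeout: float = 120.0, interval: float = 5.0) -> bool:
    """Poll `url` until it returns HTTP 200 or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not reachable yet; retry after a short pause
        time.sleep(interval)
    return False

# wait_until_live("https://my-app.easy-deploy.135.181.177.246.nip.io/healthz")
```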
All secrets are in Infisical — https://infisical.easy-deploy.135.181.177.246.nip.io → project my-app.
- Use the `dev` environment for dev pods, `prod` for production.
- Changes propagate to running pods within ~5 minutes — no redeploy needed.
- Read them in your app as normal env vars:
  - Node/Bun: `process.env.MY_SECRET`
  - Python: `os.environ["MY_SECRET"]`
  - Go: `os.Getenv("MY_SECRET")`
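Because a missing secret otherwise only surfaces when the code path that reads it runs, it can help to fail fast at startup. A minimal sketch (Python shown; `DATABASE_URL` is a hypothetical secret name, not something EasyDeploy sets):

```python
import os

def require_env(name: str) -> str:
    """Return the env var's value, or raise at startup if it is missing or empty."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required env var: {name}")
    return value

# Example (hypothetical secret configured in Infisical):
# db_url = require_env("DATABASE_URL")
```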
Platform-managed secrets (set automatically by EasyDeploy, do not set manually):
| Variable | Description |
|---|---|
| `OTEL_SERVICE_NAME` | Set to `my-app` by the Helm chart |
| `OTEL_EXPORTER_OTLP_PROTOCOL` | Set to `http/protobuf` by the Helm chart |
| `DEPLOYMENT_ENV` | `dev` or `prod` |
You must set in Infisical (ask your platform team for values):
| Variable | Description |
|---|---|
| `OTEL_EXPORTER_OTLP_ENDPOINT` | Grafana Cloud OTLP gateway URL |
| `OTEL_EXPORTER_OTLP_HEADERS` | `Authorization=Basic <base64(instanceId:token)>` |
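The `OTEL_EXPORTER_OTLP_HEADERS` value is the literal string `Authorization=Basic <…>`, where `<…>` is the base64 encoding of `instanceId:token`. A sketch of how to compute it (the instance ID and token below are placeholders, not real credentials):

```python
import base64

instance_id = "123456"       # placeholder Grafana Cloud instance ID
token = "glc_example_token"  # placeholder API token

encoded = base64.b64encode(f"{instance_id}:{token}".encode()).decode()
headers_value = f"Authorization=Basic {encoded}"
# Store headers_value as OTEL_EXPORTER_OTLP_HEADERS in Infisical
```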
IMPORTANT: Auto-instrumentation is NOT available.
The OTel Operator does not support all runtimes. You must add the SDK to your app.
Without it the Grafana dashboard (https://shanzindlr.grafana.net/d/easydeploy-my-app/my-app) will have no data.
The platform pre-sets OTEL_SERVICE_NAME, OTEL_EXPORTER_OTLP_ENDPOINT,
OTEL_EXPORTER_OTLP_HEADERS, and OTEL_EXPORTER_OTLP_PROTOCOL via Helm + Infisical.
Your SDK reads these automatically — no endpoint/header config needed in code.
Node.js:

```bash
npm install @opentelemetry/sdk-node \
  @opentelemetry/auto-instrumentations-node \
  @opentelemetry/exporter-trace-otlp-http \
  @opentelemetry/exporter-metrics-otlp-http \
  @opentelemetry/sdk-metrics
```

Create `src/instrumentation.js` (import this as the very first line of your entry point):

```js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-http');
const { PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter(),
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter(),
    exportIntervalMillis: 15_000,
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
process.on('SIGTERM', () => sdk.shutdown());
```

Entry point (`src/index.js`):

```js
require('./instrumentation'); // must be first line
const http = require('http');
// ... your app
```

Structured logs (stdout → Grafana Loki):
```js
const log = (level, msg, extra = {}) =>
  console.log(JSON.stringify({ level, message: msg, app: process.env.OTEL_SERVICE_NAME, ...extra }));

log('info', 'server started', { port: 3000 });
log('error', 'request failed', { error: err.message, path: req.url });
```

Bun + Elysia:

```bash
bun add @elysiajs/opentelemetry @opentelemetry/sdk-node \
  @opentelemetry/exporter-trace-otlp-http \
  @opentelemetry/exporter-metrics-otlp-http \
  @opentelemetry/sdk-metrics
```

Create `src/instrumentation.ts`:
```ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-http';
import { PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics';

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter(),
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter(),
    exportIntervalMillis: 15_000,
  }),
});

sdk.start();
process.on('SIGTERM', () => sdk.shutdown());
```

Wire into Elysia (`src/index.ts`):
```ts
import './instrumentation'; // must be first import
import { Elysia } from 'elysia';
import { opentelemetry } from '@elysiajs/opentelemetry';

new Elysia()
  .use(opentelemetry())
  .get('/healthz', () => ({ status: 'ok' }))
  .listen(3000);
```

Python (FastAPI):

```bash
pip install opentelemetry-sdk opentelemetry-exporter-otlp \
  opentelemetry-instrumentation-fastapi opentelemetry-instrumentation-httpx
```

Create `instrumentation.py`:
```python
from opentelemetry import trace, metrics
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor

def setup_telemetry(app=None):
    # Reads OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS automatically
    tp = TracerProvider()
    tp.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(tp)

    reader = PeriodicExportingMetricReader(OTLPMetricExporter(), export_interval_millis=15000)
    metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

    if app:
        FastAPIInstrumentor.instrument_app(app)
```

Use in `main.py`:
```python
from fastapi import FastAPI
from instrumentation import setup_telemetry

app = FastAPI()
setup_telemetry(app)

@app.get('/healthz')
def health():
    return {'status': 'ok'}
```

Structured logs:
```python
import json, logging, os

# Default root level is WARNING, which would swallow these logs; set INFO explicitly
logging.basicConfig(level=logging.INFO, format='%(message)s')
logger = logging.getLogger(__name__)

def log(level, message, **extra):
    logger.info(json.dumps({'level': level, 'message': message,
                            'app': os.environ.get('OTEL_SERVICE_NAME'), **extra}))
```

Go:

```bash
go get go.opentelemetry.io/otel \
  go.opentelemetry.io/otel/sdk/trace \
  go.opentelemetry.io/otel/sdk/metric \
  go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp \
  go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp \
  go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
```

Create `telemetry.go`:
```go
package main

import (
	"context"
	"time"

	"go.opentelemetry.io/otel"
	metrichttp "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp"
	tracehttp "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func setupTelemetry(ctx context.Context) func() {
	// Reads OTEL_EXPORTER_OTLP_ENDPOINT + OTEL_EXPORTER_OTLP_HEADERS automatically
	traceExp, _ := tracehttp.New(ctx)
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(traceExp))
	otel.SetTracerProvider(tp)

	metricExp, _ := metrichttp.New(ctx)
	mp := sdkmetric.NewMeterProvider(sdkmetric.WithReader(
		sdkmetric.NewPeriodicReader(metricExp, sdkmetric.WithInterval(15*time.Second)),
	))
	otel.SetMeterProvider(mp)

	return func() { tp.Shutdown(ctx); mp.Shutdown(ctx) }
}
```

Use in `main.go`:
```go
import (
	"context"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	shutdown := setupTelemetry(context.Background())
	defer shutdown()

	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"status":"ok"}`))
	})
	http.ListenAndServe(":3000", otelhttp.NewHandler(mux, "server"))
}
```

Structured logs (Go):
```go
import (
	"encoding/json"
	"log"
	"os"
)

func logJSON(level, message string, extra map[string]any) {
	if extra == nil {
		extra = map[string]any{} // guard against a nil map before assigning
	}
	extra["level"] = level
	extra["message"] = message
	extra["app"] = os.Getenv("OTEL_SERVICE_NAME")
	b, _ := json.Marshal(extra)
	log.Println(string(b))
}
```

Useful commands:

```bash
# ArgoCD CLI
argocd app get my-app-dev
argocd app get my-app-prod

# kubectl
kubectl get pods -n my-app-dev
kubectl logs -n my-app-dev -l app=my-app --tail=50

# ArgoCD — roll back to previous revision
argocd app rollback my-app-dev

# Force a sync
argocd app sync my-app-dev --force

# Pod resource usage
kubectl top pods -n my-app-dev
```

Scaling — edit `app.yaml`:

```yaml
scaling:
  min: 1          # minimum replicas
  max: 5          # maximum replicas (HPA)
  cpu_target: 70  # scale up when CPU > 70%
```

Push to `main` — ArgoCD applies the new HPA config.
Your app must expose GET /healthz returning HTTP 200.
The Helm chart configures liveness and readiness probes against this endpoint.
If it doesn't exist, pods will restart in a crash loop.
Minimal implementations:
- Node.js: `if (req.url === '/healthz') { res.end(JSON.stringify({status:'ok'})) }`
- Bun/Elysia: `.get('/healthz', () => ({ status: 'ok' }))`
- Python/FastAPI: `@app.get('/healthz')` followed by `def health(): return {'status': 'ok'}`
- Go: ``mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) { w.Write([]byte(`{"status":"ok"}`)) })``
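Before deploying, you can confirm the contract against a locally running app. A minimal probe sketch using only the Python standard library (assumes your app is listening on `localhost:3000`, the port from `app.yaml`):

```python
import urllib.request

def check_healthz(base_url: str = "http://localhost:3000") -> int:
    """Return the HTTP status of GET <base_url>/healthz; probes expect 200."""
    with urllib.request.urlopen(f"{base_url}/healthz", timeout=5) as resp:
        return resp.status

# check_healthz() should return 200 once your app is up
```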