# imager load test tool
This repository is configured for modern Go tooling with:
```
go 1.26.0
toolchain go1.26.0
```
If your local machine is on an older Go release, install Go 1.26.x and run:
```
go mod tidy
```

Imager is a Kubernetes-native distributed load-testing system with two processes:
- `orchestrator` (single instance): discovers target pods/services, schedules load, dispatches jobs, and publishes orchestrator metrics.
- `executor` (N replicas): registers with the orchestrator, picks up jobs, executes HTTP request batches, and publishes executor metrics.
This demo deploys:
- `sumservice` in namespace `imagerdemo`
- `imager-orchestrator` and `imager-executor` in namespace `imager`
- Build images:

```
docker build -t imager/orchestrator:local -f Dockerfile.orchestrator .
docker build -t imager/executor:local -f Dockerfile.executor .
docker build -t imager/sumservice:local -f Dockerfile.sumservice .
```

- Create a local cluster and install metrics-server:

```
kind delete cluster --name kind || true
kind create cluster --name kind
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl -n kube-system rollout status deploy/metrics-server --timeout=180s
```

- Load images and deploy the demo manifests:

```
kind load docker-image imager/orchestrator:local --name kind
kind load docker-image imager/executor:local --name kind
kind load docker-image imager/sumservice:local --name kind
kubectl apply -k deploy/demo
kubectl -n imagerdemo rollout status deploy/sumservice --timeout=180s
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s
kubectl -n imager rollout status deploy/imager-executor --timeout=180s
```

- Verify endpoint and metrics:

```
kubectl get --raw '/api/v1/namespaces/imagerdemo/services/http:sumservice:8080/proxy/sum?a=7&b=9'
kubectl get --raw '/api/v1/namespaces/imager/services/http:imager-orchestrator:8099/proxy/metrics' | rg 'imager_orchestrator_(registered_executors|jobs_dispatched_total|job_requests_total|target_pod_)'
kubectl get --raw '/api/v1/namespaces/imager/services/http:imager-executor-metrics:9100/proxy/metrics' | rg 'imager_(executor_jobs_picked_up_total|executor_job_requests_total|success|failed)'
```

For a complete demo runbook, see docs/DEMO_SUMSERVICE.md.
If you want to run the load generator without the sumservice demo, use the built-in stack in deploy/k8s:
- built-in target deployment/service: `imager-test-service`
- built-in request source: `deploy/k8s/configmap-requests.yaml`
- built-in load profile: step (`min-rps=5`, `max-rps=50`, `step-rps=5`)
- Ensure a local cluster exists and metrics-server is available:

```
kind get clusters
kubectl -n kube-system get deploy metrics-server
```

- Build and load orchestrator/executor images:

```
docker build -t imager/orchestrator:local -f Dockerfile.orchestrator .
docker build -t imager/executor:local -f Dockerfile.executor .
kind load docker-image imager/orchestrator:local --name kind
kind load docker-image imager/executor:local --name kind
```

- Deploy built-in manifests:

```
kubectl apply -k deploy/k8s
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s
kubectl -n imager rollout status deploy/imager-executor --timeout=180s
kubectl -n imager rollout status deploy/imager-test-service --timeout=180s
```

- Verify it is actively running:

```
kubectl -n imager get pods,svc
kubectl get --raw '/api/v1/namespaces/imager/services/http:imager-orchestrator:8099/proxy/metrics' | rg 'imager_orchestrator_(registered_executors|jobs_dispatched_total|job_requests_total)'
```

- Optional: switch orchestrator to service target mode:

```
kubectl apply -f deploy/k8s/orchestrator-service-mode.yaml
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s
```

- Optional: switch orchestrator to arbitrary URL mode:

```
kubectl apply -f deploy/k8s/orchestrator-url-mode.yaml
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s
```

In `url` mode:
- no Kubernetes pod discovery is required
- target pod CPU/memory metrics are not emitted
- executor latency and error metrics are still emitted (`imager_duration`, `imager_successDuration`, `imager_success`, `imager_failed`)
The built-in file request source uses StreamReader, which reads newline-delimited JSON records.
Each line is one RequestSpec object. Supported fields:
- `method` (required)
- `path` (required)
- `queryString` (optional)
- `headers` (optional map of string to array of strings)
- `body` (optional)
Example file:

```
{"method":"GET","path":"/"}
{"method":"GET","path":"/status"}
{"method":"GET","path":"/anything","headers":{"X-Imager-Run":["true"]}}
{"method":"POST","path":"/submit","headers":{"Content-Type":["application/json"]},"body":"{\"run\":\"demo\"}"}You can start from deploy/examples/requests.json.
To configure the deployment to use this file:
- Put the records into the cluster ConfigMap used by the orchestrator:

```
kubectl -n imager create configmap imager-request-source \
--from-file=requests.json=./deploy/examples/requests.json \
--dry-run=client -o yaml | kubectl apply -f -
```

- Ensure the orchestrator deployment points to file mode and the mounted path:
  - -request-source-type=file
  - -request-source-file=/config/requests.json
  - a volume mount at /config from ConfigMap imager-request-source
The default manifests in deploy/k8s/orchestrator-pod-mode.yaml and
deploy/k8s/orchestrator-service-mode.yaml already use this path. If you changed them,
re-apply your chosen manifest before restarting:

```
kubectl apply -f deploy/k8s/orchestrator-pod-mode.yaml
# or
kubectl apply -f deploy/k8s/orchestrator-service-mode.yaml
```

- Restart the orchestrator:

```
kubectl -n imager rollout restart deploy/imager-orchestrator
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s
```

If you use a different filename or mount path, update -request-source-file accordingly.
To use the built-in random-sum request source instead, set in the orchestrator args:
- -request-source-type=random-sum
- -random-sum-path=/sum
- -random-sum-min=1
- -random-sum-max=1000

If you switch fully to random-sum, remove -request-source-file=... and the /config ConfigMap volume mount.
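Conceptually, each call on the random-sum source yields a GET against the configured path with two random operands, in the spirit of the sumservice `/sum?a=...&b=...` endpoint above. A minimal sketch under that assumption (an illustration of the flags, not imager's actual generator):

```go
package requests

import (
	"fmt"
	"math/rand"

	"github.com/PeladoCollado/imager/types"
)

// randomSumSpec builds GET <path>?a=<x>&b=<y> with x and y drawn
// uniformly from [min, max], mirroring -random-sum-path/-min/-max.
func randomSumSpec(path string, min, max int, rng *rand.Rand) types.RequestSpec {
	a := min + rng.Intn(max-min+1)
	b := min + rng.Intn(max-min+1)
	return types.RequestSpec{
		Method:      "GET",
		Path:        path,
		QueryString: fmt.Sprintf("a=%d&b=%d", a, b),
	}
}
```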
The available load profiles are (a step-profile sketch follows this list):

- -load-calculator=step with -min-rps, -max-rps, -step-rps
- -load-calculator=exponential with -min-rps, -max-rps
- -load-calculator=logarithmic with -min-rps, -max-rps
- -load-calculator=adaptive-exponential with:
  - -min-rps, -max-rps
  - -adaptive-max-latency-ms=<p99-ms-threshold>
  - if -adaptive-max-latency-ms=0, it switches to timeout mode and ramps until >=50% of requests time out (503, 504, or no response headers for >=1 minute)
  - after a threshold breach it runs recovery rounds at <=1 rps, then performs a binary search in 10 rps increments to find the highest sustainable rate
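The step profile simply ramps by a fixed increment each round and holds at the cap. A minimal sketch of those semantics, assuming the Next() int shape that load calculators implement later in this document (this is not the orchestrator's actual implementation):

```go
package load

// stepCalc sketches step-profile semantics: start at min-rps,
// add step-rps each round, and hold at max-rps once reached.
type stepCalc struct {
	minRPS, maxRPS, stepRPS int
	current                 int
}

// Next returns the target RPS for the upcoming round.
func (c *stepCalc) Next() int {
	if c.current == 0 {
		c.current = c.minRPS // first round starts at the floor
		return c.current
	}
	c.current += c.stepRPS
	if c.current > c.maxRPS {
		c.current = c.maxRPS // clamp at the ceiling
	}
	return c.current
}
```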
Example step profile from 10 to 500 rps:
- -load-calculator=step
- -min-rps=10
- -max-rps=500
- -step-rps=10

Example adaptive profile with a 250 ms p99 threshold:
- -load-calculator=adaptive-exponential
- -min-rps=10
- -max-rps=500
- -adaptive-max-latency-ms=250

The orchestrator supports three target modes:

- -target-mode=pod:
  - requires -target-namespace and -target-deployment
  - the orchestrator picks one pod and reports target pod CPU/memory metrics
- -target-mode=service:
  - requires -target-namespace and -target-service
  - workers target the service endpoint; Kubernetes load-balances across backing pods
  - the orchestrator still resolves backing pods to collect/report their CPU/memory metrics
- -target-mode=url:
  - requires -target-url (an absolute URL, e.g. https://api.example.com)
  - no pod metrics are collected
  - executor request/error/latency metrics remain available
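For example, targeting the built-in test deployment in pod mode would use flags along these lines (the values are illustrative, matching the demo resources above):

- -target-mode=pod
- -target-namespace=imager
- -target-deployment=imager-test-service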
More local-cluster notes are in docs/LOCAL_KIND.md.
Imager exposes an importable orchestrator runtime in github.com/PeladoCollado/imager/orchestrator/app.
You can start the orchestrator from another project and inject custom factories without patching this repo.
- Implement custom components in your project:
  - a request source implementing types.RequestSource
  - a load calculator implementing manager.LoadCalculator
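Judging by the examples that follow, the two interfaces have roughly these shapes; treat this as a sketch and consult the types and manager packages for the authoritative definitions:

```go
package sketch

import "github.com/PeladoCollado/imager/types"

// RequestSource is the assumed shape of types.RequestSource,
// inferred from the DBRequestSource example below.
type RequestSource interface {
	Next() (types.RequestSpec, error) // next request to issue
	Reset() error                     // restart the sequence from the beginning
}

// LoadCalculator is the assumed shape of manager.LoadCalculator,
// inferred from the load calculator examples below.
type LoadCalculator interface {
	Next() int // target RPS for the next round
}
```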
Example: database-backed request source (reads one record per Next() call and loops on EOF):

```go
package requests

import (
	"database/sql"
	"errors"
	"sync"

	"github.com/PeladoCollado/imager/types"
)

// DBRequestSource serves request specs from a database table, one row
// per Next() call, wrapping back to the first row when it runs out.
type DBRequestSource struct {
	db     *sql.DB
	mu     sync.Mutex
	offset int
}

func NewDBRequestSource(db *sql.DB) *DBRequestSource {
	return &DBRequestSource{db: db}
}

// Next returns the spec at the current offset, restarting from the
// beginning when the table is exhausted.
func (s *DBRequestSource) Next() (types.RequestSpec, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	req, err := s.queryByOffset(s.offset)
	if errors.Is(err, sql.ErrNoRows) {
		s.offset = 0
		req, err = s.queryByOffset(s.offset)
	}
	if err != nil {
		return types.RequestSpec{}, err
	}
	s.offset++
	return req, nil
}

// queryByOffset fetches the request spec stored at the given row offset.
func (s *DBRequestSource) queryByOffset(offset int) (types.RequestSpec, error) {
	row := s.db.QueryRow(`
		SELECT method, path, query_string, body
		FROM request_specs
		ORDER BY id LIMIT 1 OFFSET ?`, offset)
	var req types.RequestSpec
	var queryString sql.NullString
	var body sql.NullString
	err := row.Scan(&req.Method, &req.Path, &queryString, &body)
	if err != nil {
		return types.RequestSpec{}, err
	}
	if queryString.Valid {
		req.QueryString = queryString.String
	}
	if body.Valid {
		req.Body = body.String
	}
	return req, nil
}

// Reset restarts iteration from the first row.
func (s *DBRequestSource) Reset() error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.offset = 0
	return nil
}
```

- Implement a custom load calculator:
Example: random spike load calculator (baseline load with occasional large spikes):

```go
package load

import (
	"math/rand"
	"time"

	"github.com/PeladoCollado/imager/orchestrator/manager"
)

// RandomSpikeLoadCalculator emits a steady baseline RPS with
// occasional random spikes, clamped to [minRPS, maxRPS].
type RandomSpikeLoadCalculator struct {
	minRPS      int
	maxRPS      int
	baselineRPS int
	maxSpikeRPS int
	spikeChance int // 0-100
	rng         *rand.Rand
}

func NewRandomSpikeLoadCalculator(minRPS, maxRPS, baselineRPS, maxSpikeRPS, spikeChance int) manager.LoadCalculator {
	return &RandomSpikeLoadCalculator{
		minRPS:      minRPS,
		maxRPS:      maxRPS,
		baselineRPS: baselineRPS,
		maxSpikeRPS: maxSpikeRPS,
		spikeChance: spikeChance,
		rng:         rand.New(rand.NewSource(time.Now().UnixNano())),
	}
}

// Next returns the baseline, with a spikeChance% probability of adding
// a random spike of up to maxSpikeRPS, clamped to the configured bounds.
func (c *RandomSpikeLoadCalculator) Next() int {
	rps := c.baselineRPS
	if c.rng.Intn(100) < c.spikeChance {
		rps += c.rng.Intn(c.maxSpikeRPS + 1)
	}
	if rps < c.minRPS {
		return c.minRPS
	}
	if rps > c.maxRPS {
		return c.maxRPS
	}
	return rps
}
```

- Start the orchestrator with custom factories from an external project:

```go
package main

import (
	"context"
	"errors"
	"log"
	"os"
	"os/signal"
	"syscall"

	imagerapp "github.com/PeladoCollado/imager/orchestrator/app"
	"github.com/PeladoCollado/imager/orchestrator/manager"
	"github.com/PeladoCollado/imager/types"

	// Hypothetical import paths for the two example packages above;
	// replace them with your project's module path.
	"example.com/yourproject/load"
	"example.com/yourproject/requests"
)

func main() {
	cfg, err := imagerapp.ParseConfig(os.Args[1:])
	if err != nil {
		log.Fatalf("parse config: %v", err)
	}
	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer cancel()
	opts := imagerapp.RunOptions{
		RequestSourceFactory: imagerapp.RequestSourceFactoryFunc(func(cfg imagerapp.Config) (types.RequestSource, error) {
			if cfg.RequestSourceType == "db" {
				// Initialize the DB connection/pool once in real code and
				// reuse it; openDB is your own helper.
				return requests.NewDBRequestSource(openDB()), nil
			}
			return imagerapp.NewBuiltInRequestSource(cfg)
		}),
		LoadCalculatorFactory: imagerapp.LoadCalculatorFactoryFunc(func(cfg imagerapp.Config) (manager.LoadCalculator, error) {
			if cfg.LoadCalculator == "random-spike" {
				return load.NewRandomSpikeLoadCalculator(cfg.MinRPS, cfg.MaxRPS, cfg.MinRPS, 400, 15), nil
			}
			return imagerapp.NewBuiltInLoadCalculator(cfg)
		}),
	}
	if err := imagerapp.Run(ctx, cfg, opts); err != nil && !errors.Is(err, context.Canceled) {
		log.Fatalf("run orchestrator: %v", err)
	}
}
```

With this setup:
- -request-source-type=db uses your DB-backed source
- -load-calculator=random-spike uses your spike calculator
- all other values continue using built-in behavior via NewBuiltInRequestSource and NewBuiltInLoadCalculator
- Rebuild and redeploy:

```
go test ./...
docker build -t imager/orchestrator:local -f Dockerfile.orchestrator .
kind load docker-image imager/orchestrator:local --name kind
kubectl -n imager rollout restart deploy/imager-orchestrator
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s
```