imager

imager load test tool

Go Version

This repository is configured for modern Go tooling with:

  • go 1.26.0
  • toolchain go1.26.0

If your local machine runs an older Go release, install Go 1.26.x and run:

go mod tidy

Overview

Imager is a Kubernetes-native distributed load-testing system with two processes:

  • orchestrator (single instance): discovers target pods/services, schedules load, dispatches jobs, and publishes orchestrator metrics.
  • executor (N replicas): registers with the orchestrator, picks up jobs, executes HTTP request batches, and publishes executor metrics.

1. Run The Demo (sumservice on kind)

This demo deploys:

  • sumservice in namespace imagerdemo
  • imager-orchestrator and imager-executor in namespace imager
  1. Build images:
docker build -t imager/orchestrator:local -f Dockerfile.orchestrator .
docker build -t imager/executor:local -f Dockerfile.executor .
docker build -t imager/sumservice:local -f Dockerfile.sumservice .
  2. Create a local cluster and install metrics-server:
kind delete cluster --name kind || true
kind create cluster --name kind
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl -n kube-system rollout status deploy/metrics-server --timeout=180s
  3. Load images and deploy the demo manifests:
kind load docker-image imager/orchestrator:local --name kind
kind load docker-image imager/executor:local --name kind
kind load docker-image imager/sumservice:local --name kind
kubectl apply -k deploy/demo
kubectl -n imagerdemo rollout status deploy/sumservice --timeout=180s
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s
kubectl -n imager rollout status deploy/imager-executor --timeout=180s
  4. Verify endpoint and metrics:
kubectl get --raw '/api/v1/namespaces/imagerdemo/services/http:sumservice:8080/proxy/sum?a=7&b=9'
kubectl get --raw '/api/v1/namespaces/imager/services/http:imager-orchestrator:8099/proxy/metrics' | rg 'imager_orchestrator_(registered_executors|jobs_dispatched_total|job_requests_total|target_pod_)'
kubectl get --raw '/api/v1/namespaces/imager/services/http:imager-executor-metrics:9100/proxy/metrics' | rg 'imager_(executor_jobs_picked_up_total|executor_job_requests_total|success|failed)'

For a complete demo runbook, see docs/DEMO_SUMSERVICE.md.

2. Run Imager On Its Own (Built-In + Config-Only Customization)

If you want to run the load generator without the sumservice demo, use the built-in stack in deploy/k8s:

  • built-in target deployment/service: imager-test-service
  • built-in request source: deploy/k8s/configmap-requests.yaml
  • built-in load profile: step (min-rps=5, max-rps=50, step-rps=5)
  1. Ensure a local cluster exists and metrics-server is available:
kind get clusters
kubectl -n kube-system get deploy metrics-server
  2. Build and load orchestrator/executor images:
docker build -t imager/orchestrator:local -f Dockerfile.orchestrator .
docker build -t imager/executor:local -f Dockerfile.executor .
kind load docker-image imager/orchestrator:local --name kind
kind load docker-image imager/executor:local --name kind
  3. Deploy built-in manifests:
kubectl apply -k deploy/k8s
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s
kubectl -n imager rollout status deploy/imager-executor --timeout=180s
kubectl -n imager rollout status deploy/imager-test-service --timeout=180s
  4. Verify it is actively running:
kubectl -n imager get pods,svc
kubectl get --raw '/api/v1/namespaces/imager/services/http:imager-orchestrator:8099/proxy/metrics' | rg 'imager_orchestrator_(registered_executors|jobs_dispatched_total|job_requests_total)'
  5. Optional: switch orchestrator to service target mode:
kubectl apply -f deploy/k8s/orchestrator-service-mode.yaml
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s
  6. Optional: switch orchestrator to arbitrary URL mode:
kubectl apply -f deploy/k8s/orchestrator-url-mode.yaml
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s

In url mode:

  • no Kubernetes pod discovery is required
  • target pod CPU/memory metrics are not emitted
  • executor latency and error metrics are still emitted (imager_duration, imager_successDuration, imager_success, imager_failed)

Config-only customization (no code changes)

StreamReader file source (JSON records, one per line)

The built-in file request source uses StreamReader, which reads newline-delimited JSON records. Each line is one RequestSpec object. Supported fields:

  • method (required)
  • path (required)
  • queryString (optional)
  • headers (optional map of string to array of strings)
  • body (optional)

Example file:

{"method":"GET","path":"/"}
{"method":"GET","path":"/status"}
{"method":"GET","path":"/anything","headers":{"X-Imager-Run":["true"]}}
{"method":"POST","path":"/submit","headers":{"Content-Type":["application/json"]},"body":"{\"run\":\"demo\"}"}

You can start from deploy/examples/requests.json.
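
A minimal Go sketch of how such a file can be parsed, assuming a RequestSpec struct whose JSON tags match the keys listed above (the authoritative type lives in the types package, so treat the struct here as illustrative):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// RequestSpec mirrors the JSON keys documented above; the real definition
// lives in github.com/PeladoCollado/imager/types.
type RequestSpec struct {
	Method      string              `json:"method"`
	Path        string              `json:"path"`
	QueryString string              `json:"queryString,omitempty"`
	Headers     map[string][]string `json:"headers,omitempty"`
	Body        string              `json:"body,omitempty"`
}

// parseRequests reads newline-delimited JSON, one RequestSpec per line,
// skipping blank lines, as StreamReader is described to do.
func parseRequests(input string) ([]RequestSpec, error) {
	var specs []RequestSpec
	sc := bufio.NewScanner(strings.NewReader(input))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var spec RequestSpec
		if err := json.Unmarshal([]byte(line), &spec); err != nil {
			return nil, err
		}
		specs = append(specs, spec)
	}
	return specs, sc.Err()
}

func main() {
	input := `{"method":"GET","path":"/"}
{"method":"POST","path":"/submit","headers":{"Content-Type":["application/json"]},"body":"{\"run\":\"demo\"}"}`
	specs, err := parseRequests(input)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(specs), specs[1].Method, specs[1].Headers["Content-Type"][0])
}
```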

To configure the deployment to use this file:

  1. Put the records into the cluster ConfigMap used by the orchestrator:
kubectl -n imager create configmap imager-request-source \
  --from-file=requests.json=./deploy/examples/requests.json \
  --dry-run=client -o yaml | kubectl apply -f -
  2. Ensure the orchestrator deployment uses file mode and the mounted path:
  • -request-source-type=file
  • -request-source-file=/config/requests.json
  • volume mount at /config from ConfigMap imager-request-source

The default manifests in deploy/k8s/orchestrator-pod-mode.yaml and deploy/k8s/orchestrator-service-mode.yaml already use this path. If you changed them, re-apply your chosen manifest before restarting:

kubectl apply -f deploy/k8s/orchestrator-pod-mode.yaml
# or
kubectl apply -f deploy/k8s/orchestrator-service-mode.yaml
  3. Restart orchestrator:
kubectl -n imager rollout restart deploy/imager-orchestrator
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s

If you use a different filename or mount path, update -request-source-file accordingly.

Random-sum request source

In orchestrator args, set:

- -request-source-type=random-sum
- -random-sum-path=/sum
- -random-sum-min=1
- -random-sum-max=1000

If you switch fully to random-sum, remove -request-source-file=... and the /config ConfigMap volume mount.

Built-in load calculators

  • -load-calculator=step with -min-rps, -max-rps, -step-rps
  • -load-calculator=exponential with -min-rps, -max-rps
  • -load-calculator=logarithmic with -min-rps, -max-rps
  • -load-calculator=adaptive-exponential with:
    • -min-rps, -max-rps
    • -adaptive-max-latency-ms=<p99-ms-threshold>
    • if -adaptive-max-latency-ms=0, it switches to timeout mode and ramps until >=50% of requests time out (503, 504, or no response headers for >=1 minute)
    • after a threshold breach it runs recovery rounds at <=1 rps, then performs binary search in 10 rps increments to find the highest sustainable rate

Example step profile from 10 to 500 rps:

- -load-calculator=step
- -min-rps=10
- -max-rps=500
- -step-rps=10
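
That step profile can be sketched as a calculator that starts at min-rps and adds step-rps each round until it holds at max-rps (an illustrative sketch, not the repository's implementation):

```go
package main

import "fmt"

// StepLoadCalculator ramps from minRPS to maxRPS in fixed stepRPS increments.
// Illustrative sketch of the built-in step profile, not the repo's code.
type StepLoadCalculator struct {
	minRPS, maxRPS, stepRPS int
	current                 int
}

func NewStepLoadCalculator(min, max, step int) *StepLoadCalculator {
	return &StepLoadCalculator{minRPS: min, maxRPS: max, stepRPS: step}
}

// Next returns the rate for the next round, ramping up by stepRPS and
// holding at maxRPS once reached.
func (c *StepLoadCalculator) Next() int {
	if c.current == 0 {
		c.current = c.minRPS
	} else if c.current+c.stepRPS <= c.maxRPS {
		c.current += c.stepRPS
	} else {
		c.current = c.maxRPS
	}
	return c.current
}

func main() {
	c := NewStepLoadCalculator(10, 30, 10)
	fmt.Println(c.Next(), c.Next(), c.Next(), c.Next()) // ramps 10 20 30 30
}
```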

Example adaptive profile with a 250 ms p99 threshold:

- -load-calculator=adaptive-exponential
- -min-rps=10
- -max-rps=500
- -adaptive-max-latency-ms=250

Target modes

  • -target-mode=pod:
    • requires -target-namespace and -target-deployment
    • orchestrator picks one pod and reports target pod CPU/memory metrics
  • -target-mode=service:
    • requires -target-namespace and -target-service
    • workers target the service endpoint; Kubernetes load-balances across backing pods
    • orchestrator still resolves backing pods to collect/report their CPU/memory metrics
  • -target-mode=url:
    • requires -target-url (absolute URL, e.g. https://api.example.com)
    • no pod metrics are collected
    • executor request/error/latency metrics remain available

More local-cluster notes are in docs/LOCAL_KIND.md.

3. Code-Level Customization

Imager exposes an importable orchestrator runtime in github.com/PeladoCollado/imager/orchestrator/app. You can start the orchestrator from another project and inject custom factories without patching this repo.

  1. Implement custom components in your project:
  • a request source implementing types.RequestSource
  • a load calculator implementing manager.LoadCalculator
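
The two interface shapes, as inferred from the examples that follow (the authoritative definitions live in the types and manager packages), look roughly like:

```go
package main

import "fmt"

// RequestSpec and the two interfaces below are inferred from the examples
// in this README; the authoritative definitions live in the
// github.com/PeladoCollado/imager types and manager packages.
type RequestSpec struct {
	Method, Path, QueryString, Body string
	Headers                         map[string][]string
}

// RequestSource yields one request per Next call; Reset rewinds the source.
type RequestSource interface {
	Next() (RequestSpec, error)
	Reset() error
}

// LoadCalculator returns the target requests-per-second for the next round.
type LoadCalculator interface {
	Next() int
}

// constSource is a trivial RequestSource that always returns the same spec.
type constSource struct{ spec RequestSpec }

func (s constSource) Next() (RequestSpec, error) { return s.spec, nil }
func (s constSource) Reset() error               { return nil }

func main() {
	var src RequestSource = constSource{spec: RequestSpec{Method: "GET", Path: "/"}}
	spec, _ := src.Next()
	fmt.Println(spec.Method, spec.Path) // GET /
}
```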

Example: database-backed request source (reads one record per Next() call and loops on EOF):

package requests

import (
	"database/sql"
	"errors"
	"sync"

	"github.com/PeladoCollado/imager/types"
)

type DBRequestSource struct {
	db     *sql.DB
	mu     sync.Mutex
	offset int
}

func NewDBRequestSource(db *sql.DB) *DBRequestSource {
	return &DBRequestSource{db: db}
}

func (s *DBRequestSource) Next() (types.RequestSpec, error) {
	s.mu.Lock()
	defer s.mu.Unlock()

	req, err := s.queryByOffset(s.offset)
	if errors.Is(err, sql.ErrNoRows) {
		s.offset = 0
		req, err = s.queryByOffset(s.offset)
	}
	if err != nil {
		return types.RequestSpec{}, err
	}
	s.offset++
	return req, nil
}

func (s *DBRequestSource) queryByOffset(offset int) (types.RequestSpec, error) {
	row := s.db.QueryRow(`
		SELECT method, path, query_string, body
		FROM request_specs
		ORDER BY id LIMIT 1 OFFSET ?`, offset)

	var req types.RequestSpec
	var queryString sql.NullString
	var body sql.NullString
	err := row.Scan(&req.Method, &req.Path, &queryString, &body)
	if err != nil {
		return types.RequestSpec{}, err
	}

	if queryString.Valid {
		req.QueryString = queryString.String
	}
	if body.Valid {
		req.Body = body.String
	}
	return req, nil
}

func (s *DBRequestSource) Reset() error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.offset = 0
	return nil
}
  2. Implement a custom load calculator:

Example: random spike load calculator (baseline load with occasional large spikes):

package manager

import (
	"math/rand"
	"time"
)

type RandomSpikeLoadCalculator struct {
	minRPS      int
	maxRPS      int
	baselineRPS int
	maxSpikeRPS int
	spikeChance int // 0-100
	rng         *rand.Rand
}

func NewRandomSpikeLoadCalculator(minRPS, maxRPS, baselineRPS, maxSpikeRPS, spikeChance int) LoadCalculator {
	return &RandomSpikeLoadCalculator{
		minRPS:      minRPS,
		maxRPS:      maxRPS,
		baselineRPS: baselineRPS,
		maxSpikeRPS: maxSpikeRPS,
		spikeChance: spikeChance,
		rng:         rand.New(rand.NewSource(time.Now().UnixNano())),
	}
}

func (c *RandomSpikeLoadCalculator) Next() int {
	rps := c.baselineRPS
	if c.rng.Intn(100) < c.spikeChance {
		rps += c.rng.Intn(c.maxSpikeRPS + 1)
	}
	if rps < c.minRPS {
		return c.minRPS
	}
	if rps > c.maxRPS {
		return c.maxRPS
	}
	return rps
}
  3. Start the orchestrator with custom factories from an external project:
package main

import (
	"context"
	"errors"
	"log"
	"os"
	"os/signal"
	"syscall"

	imagerapp "github.com/PeladoCollado/imager/orchestrator/app"
	"github.com/PeladoCollado/imager/orchestrator/manager"
	"github.com/PeladoCollado/imager/types"
)

func main() {
	cfg, err := imagerapp.ParseConfig(os.Args[1:])
	if err != nil {
		log.Fatalf("parse config: %v", err)
	}

	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer cancel()

	opts := imagerapp.RunOptions{
		RequestSourceFactory: imagerapp.RequestSourceFactoryFunc(func(cfg imagerapp.Config) (types.RequestSource, error) {
			if cfg.RequestSourceType == "db" {
				// initialize DB connection/pool once in real code and reuse it.
				return NewDBRequestSource(openDB()), nil
			}
			return imagerapp.NewBuiltInRequestSource(cfg)
		}),
		LoadCalculatorFactory: imagerapp.LoadCalculatorFactoryFunc(func(cfg imagerapp.Config) (manager.LoadCalculator, error) {
			if cfg.LoadCalculator == "random-spike" {
				return NewRandomSpikeLoadCalculator(cfg.MinRPS, cfg.MaxRPS, cfg.MinRPS, 400, 15), nil
			}
			return imagerapp.NewBuiltInLoadCalculator(cfg)
		}),
	}

	if err := imagerapp.Run(ctx, cfg, opts); err != nil && !errors.Is(err, context.Canceled) {
		log.Fatalf("run orchestrator: %v", err)
	}
}

With this setup:

  • -request-source-type=db uses your DB-backed source
  • -load-calculator=random-spike uses your spike calculator
  • all other values continue using built-in behavior via NewBuiltInRequestSource and NewBuiltInLoadCalculator
  4. Rebuild and redeploy:
go test ./...
docker build -t imager/orchestrator:local -f Dockerfile.orchestrator .
kind load docker-image imager/orchestrator:local --name kind
kubectl -n imager rollout restart deploy/imager-orchestrator
kubectl -n imager rollout status deploy/imager-orchestrator --timeout=180s
