SPACK is a container-first static asset runtime for SPA and frontend build outputs.
It is intentionally narrower than Nginx:
- one process serves one asset mount
- configuration can be loaded from dotenv, config files, environment variables, and CLI flags
- optimized for container images and runtime base image usage
- built-in asset optimization pipeline instead of generic web server features
Current scope:
- SPA/static asset serving
- `index.html` fallback for client-side routing
- built-in `robots.txt` fallback generation with static-file precedence
- MemDB-backed runtime asset catalog
- request-path normalization for mounted, encoded, and SPA-style routes
- `gzip`, `brotli`, and `zstd` variant generation
- scanned precompressed sidecar files such as `app.js.br`, `app.js.zst`, and `app.js.gz`
- on-demand image width/format variants via query or `Accept` negotiation
- frontend resource hints and immutable cache policy for fingerprinted assets
- in-memory hot asset cache for small files with optional startup warmup
- `sendfile` delivery for disk-backed assets and range requests
- conditional HTTP caching with `ETag`, `Last-Modified`, `Cache-Control`, `Expires`, and `304 Not Modified`
- event-driven variant lifecycle for cache warming, invalidation, and hit tracking
- lazy or warmup compression modes
- debug and metrics endpoints for container diagnostics
Out of scope:
- reverse proxy
- dynamic rewrite DSL
- TLS termination
- scripting plugins
- Nginx-style complex `location` semantics
The current runtime is composed of:
- **config + dix bootstrap**: loads configuration from dotenv, files, env, and CLI, then wires the runtime through `dix`
- **source**: reads files from the configured asset backend, currently the local filesystem
- **catalog**: stores scanned source assets, source sidecars, and generated variants in a MemDB-backed in-memory index
- **requestpath**: normalizes mounted paths, percent-encoded asset paths, and SPA route-like requests before resolution
- **resolver**: maps an HTTP request to the best asset or variant, including `Accept` and `Accept-Encoding` negotiation
- **pipeline**: generates compressed and image variants in lazy or warmup mode
- **assetcache**: keeps small hot responses in memory and supports warmup/invalidation
- **server**: handles HTTP, fallback, delivery, generated `robots.txt`, cache headers, and request metrics
- **event**: decouples variant lifecycle notifications between server, pipeline, and cache
- **task + scheduler**: runs internal source rescans, artifact cleanup, and cache warmup through `gocron`
- **runtime + observability**: boots HTTP/debug runtimes and exports Prometheus metrics, build/config info, and Grafana-ready signals
```mermaid
flowchart TB
subgraph Config["Configuration Sources"]
Defaults["Built-in defaults"]
Dotenv[".env / .env.local"]
Files["--config files"]
Env["Environment variables"]
Flags["CLI flags"]
end
Config --> Loader["configx loader"]
Loader --> CLI["Cobra + dix container"]
subgraph Runtime["Runtime and Data Plane"]
CLI --> Lifecycle["runtime lifecycle"]
Lifecycle --> Source["source"]
Source --> Catalog["catalog"]
Lifecycle --> Pipeline["pipeline"]
Lifecycle --> AsyncLimit["async concurrency limit"]
Lifecycle --> AssetCache["assetcache"]
Lifecycle --> Scheduler["gocron scheduler"]
Lifecycle --> HTTP["Fiber HTTP server"]
Lifecycle --> Debug["debug runtime"]
Pipeline --> AsyncLimit
Pipeline --> ArtifactStore["artifact store"]
ArtifactStore --> Catalog
Scheduler --> SourceRescan["source rescan"]
Scheduler --> ArtifactJanitor["artifact janitor"]
Scheduler --> CacheWarmer["cache warmer"]
SourceRescan --> Source
ArtifactJanitor --> ArtifactStore
CacheWarmer --> AssetCache
end
subgraph RequestFlow["Request Flow"]
Client["client"] --> HTTP
HTTP --> PathCleaner["requestpath cleaner"]
PathCleaner --> Resolver["resolver"]
Resolver --> Catalog
Resolver --> Fallback["entry fallback"]
Resolver --> Delivery["delivery"]
Delivery --> MemoryCache["memory cache hit/fill"]
Delivery --> Sendfile["sendfile / range"]
Delivery --> Headers["ETag / Last-Modified / Cache-Control / Expires"]
end
subgraph Events["Event Flow"]
HTTP -->|VariantServed| EventBus["event bus"]
Pipeline -->|VariantGenerated / VariantRemoved| EventBus
EventBus --> Pipeline
EventBus --> AssetCache
end
subgraph Observability["Observability"]
HTTP --> Metrics["observabilityx"]
Resolver --> Metrics
Pipeline --> Metrics
Scheduler --> Metrics
AsyncLimit --> Metrics
Debug --> Prometheus["/prometheus"]
Debug --> Statsviz["/debug/statsviz"]
Metrics --> Prometheus
Prometheus --> Dashboard["Grafana overview dashboard"]
end
```
Request flow at a high level:
- The runtime scans `SPACK_ASSETS_ROOT` into the catalog.
- An internal scheduler periodically rescans the source tree and removes stale generated artifacts.
- The pipeline optionally warms compressed/image variants.
- The memory cache can optionally preload small hot assets and generated variants.
- Precompressed source sidecars are indexed as variants without treating them as plain source assets.
- Each request path is normalized by `requestpath` before route resolution so mounted, encoded, and SPA-style paths follow the same matching rules.
- The resolver chooses the best asset or variant, including content-coding and image-format negotiation.
- Delivery uses the memory cache for eligible small files, otherwise Fiber `SendFile`.
- Cache and validator headers are applied from resolved metadata and response policy rules.
- HTML responses can emit resource hints, and fingerprinted static assets can receive immutable cache headers.
- Served/generated/removed variants are propagated through the event bus for decoupled cache and pipeline updates.
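The content-coding step of that resolution is ordinary `Accept-Encoding` preference matching. The sketch below illustrates the idea with a simplified negotiator (it ignores most quality-value handling and is not SPACK's actual resolver code; the server-side preference order `br`, `zstd`, `gzip` is assumed from the documented defaults):

```go
package main

import (
	"fmt"
	"strings"
)

// negotiateEncoding picks the first server-preferred coding that the client
// accepts. available lists codings in server preference order; identity is
// the fallback when nothing matches.
func negotiateEncoding(acceptEncoding string, available []string) string {
	accepted := map[string]bool{}
	for _, part := range strings.Split(acceptEncoding, ",") {
		// Strip quality values like "gzip;q=0.8" for this simplified sketch.
		name := strings.TrimSpace(strings.SplitN(part, ";", 2)[0])
		if name != "" {
			accepted[name] = true
		}
	}
	for _, enc := range available {
		if accepted[enc] {
			return enc
		}
	}
	return "identity"
}

func main() {
	variants := []string{"br", "zstd", "gzip"}
	fmt.Println(negotiateEncoding("gzip, br;q=0.9", variants)) // br: server preference wins
	fmt.Println(negotiateEncoding("gzip", variants))           // gzip
	fmt.Println(negotiateEncoding("", variants))               // identity
}
```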
Hot paths that are intentionally optimized:
- request-path cleaning for already-canonical asset paths
- resolver negotiation for direct assets, encoding variants, and image variants
- HTTP middleware short-circuiting when request logging or metrics are disabled
- response-header calculation for `Vary`, `Content-Length`, `Last-Modified`, resource hints, and cache-policy emission
A minimal container image looks like:

```dockerfile
FROM ghcr.io/daiyuang/spack:latest
COPY ./dist /app
ENV SPACK_ASSETS_ROOT=/app
ENV SPACK_ASSETS_PATH=/
ENV SPACK_ASSETS_FALLBACK_TARGET=index.html
ENV SPACK_LOGGER_LEVEL=info
ENV SPACK_COMPRESSION_ENABLE=true
ENV SPACK_COMPRESSION_MODE=lazy
ENV SPACK_IMAGE_ENABLE=true
```

Then run:

```shell
go run .
```

Or override configuration at startup:

```shell
go run . --config .\spack.yaml --http.port=8080 --assets.root=.\dist
```

Published images live in GitHub Container Registry:

```shell
docker pull ghcr.io/daiyuang/spack:latest
docker pull ghcr.io/daiyuang/spack:1.1.5
```

Image tags follow the release version and runtime base:

- `latest` and `<version>` point to the Alpine runtime image
- `alpine` and `alpine-<version>` point to the Alpine runtime image
- `debian` and `debian-<version>` point to the Debian Slim runtime image
Use SPACK directly as the runtime base image for frontend build outputs:
```dockerfile
FROM node:22-alpine AS build
WORKDIR /workspace
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile
COPY . .
RUN pnpm build

FROM ghcr.io/daiyuang/spack:latest
COPY --from=build /workspace/dist /app
ENV SPACK_ASSETS_ROOT=/app
ENV SPACK_ASSETS_PATH=/
ENV SPACK_ASSETS_ENTRY=index.html
ENV SPACK_ASSETS_FALLBACK_TARGET=index.html
ENV SPACK_HTTP_PORT=80
ENV SPACK_COMPRESSION_ENABLE=true
ENV SPACK_COMPRESSION_MODE=lazy
ENV SPACK_IMAGE_ENABLE=true
EXPOSE 80 8080
```

Use SPACK as a reusable runtime base in your own image family:
```dockerfile
FROM ghcr.io/daiyuang/spack:latest AS spack-runtime
ENV SPACK_ASSETS_ROOT=/srv/www
ENV SPACK_ASSETS_PATH=/
ENV SPACK_ASSETS_ENTRY=index.html
ENV SPACK_ASSETS_FALLBACK_TARGET=index.html
ENV SPACK_HTTP_PORT=80
ENV SPACK_DEBUG_ENABLE=true
ENV SPACK_DEBUG_LIVE_PORT=8080

FROM spack-runtime
COPY ./dist /srv/www
EXPOSE 80 8080
```

Use a custom config file instead of many environment variables:
```dockerfile
FROM ghcr.io/daiyuang/spack:latest
COPY ./dist /app
COPY ./deploy/spack.yaml /etc/spack/spack.yaml
ENV SPACK_ASSETS_ROOT=/app
CMD ["spack", "--config", "/etc/spack/spack.yaml"]
```

Use SPACK only as the runtime layer while keeping your own build pipeline:
```dockerfile
FROM ghcr.io/daiyuang/spack:latest
COPY ./packages/web/dist /opt/assets
ENV SPACK_ASSETS_ROOT=/opt/assets
ENV SPACK_ASSETS_PATH=/assets
ENV SPACK_ASSETS_ENTRY=index.html
ENV SPACK_ASSETS_FALLBACK_TARGET=index.html
ENV SPACK_ROBOTS_ENABLE=true
```

Container deployment notes:
- keep hashed build assets cacheable and let SPACK serve the `index.html` fallback separately
- expose the debug runtime port only inside trusted networks when `/prometheus` and profiling endpoints are enabled
- prefer baking assets into the image for immutable deploys instead of mounting mutable runtime volumes
- use `spack_build_info` and `spack_runtime_start_time_seconds` to correlate image versions and container restarts
Important endpoints:
- `/healthz`
- `/livez`
- `/readyz`
- `/catalog`
- `/robots.txt` when built-in robots generation is enabled
- `/prometheus` when the debug runtime is enabled
- `/debug/statsviz` on the debug runtime port
Response behavior:
- small eligible files can be served from the in-memory asset cache
- large files and range requests are delivered through Fiber `SendFile`
- static asset logs include `delivery=memory_cache_hit|memory_cache_fill|sendfile|sendfile_range`
- `GET /robots.txt` serves the scanned static file when present, otherwise SPACK can generate a simple fallback from config
- responses include `ETag`, `Last-Modified`, `Cache-Control`, and `Expires`
- conditional requests support `304 Not Modified`
- `HEAD` requests reuse the same header selection logic without sending a response body
- HTML responses can include `Link` resource hints derived from the served HTML
- fingerprinted static assets can use long-lived immutable cache headers while HTML stays revalidated
SPACK is designed to be operated as an application runtime, not just a static file drop. The default observability surface combines:
- Prometheus runtime metrics from SPACK modules
- default Go runtime metrics such as `go_goroutines`, `go_threads`, `go_memstats_*`, `go_gc_*`, and `go_info`
- default process metrics such as `process_cpu_seconds_total`, `process_resident_memory_bytes`, `process_open_fds`, and `process_start_time_seconds`
- static runtime metadata via `spack_build_info`, `spack_config_info`, and `spack_runtime_start_time_seconds`
- scheduler telemetry through `gocron`'s `SchedulerMonitor`
- `dix` lifecycle telemetry through the `spack_dix_*` metric family
Bundled Grafana dashboards:
- `deploy/grafana/spack-overview-dashboard.json` for a full single-dashboard service overview with an `instance` variable
- `deploy/grafana/spack-fleet-instance-dashboard.json` for a more fleet-oriented multi-instance view
They include:
- application request, resolver, pipeline, cache, async concurrency, and scheduler panels
- Go runtime and process overview panels for startup time, uptime, RSS, heap, goroutines, OS threads, GOMAXPROCS, open FDs, and GC behavior
- build and runtime config tables derived from `spack_build_info` and `spack_config_info`
Recommended operational flow:
- Import the bundled dashboard into Grafana.
- Point it at the SPACK Prometheus target.
- Use `spack_build_info` and `spack_runtime_start_time_seconds` to correlate deploys, restarts, and regressions.
- Use the benchmark/profile entrypoints below before and after performance changes.
See `.env.example` for a complete example.
Configuration sources are merged in this order:
- built-in defaults
- dotenv files: `.env`, `.env.local`
- config files passed by `--config`
- CLI flags
Later sources override earlier ones.
CLI flags use config-path names directly, for example:
- `--http.port=8080`
- `--assets.root=./dist`
- `--assets.backend=local`
- `--assets.fallback.target=index.html`
- `--robots.disallow=/admin`
- `--compression.mode=warmup`
- `--compression.encodings=br,zstd,gzip`
- `--logger.level=info`

You can pass `--config` multiple times. Later files override earlier ones.
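Because CLI flags use config-path names directly, a `--config` file presumably nests the same keys. A hypothetical `spack.yaml` sketch (the exact schema is an assumption inferred from the flag names above, not a documented format):

```yaml
# Hypothetical spack.yaml; keys mirror the CLI flag paths shown above.
http:
  port: 8080
assets:
  root: ./dist
  backend: local
  entry: index.html
  fallback:
    target: index.html
compression:
  mode: warmup
  encodings: br,zstd,gzip
robots:
  disallow: /admin
logger:
  level: info
```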
Required:
- `SPACK_ASSETS_ROOT`
There is intentionally no reverse-proxy configuration. SPACK stays focused on static asset runtime behavior: source scanning, catalog lookup, generated variants, HTTP caching, and observability.
HTTP:
- `SPACK_HTTP_PORT=80`
- `SPACK_HTTP_LOW_MEMORY=true`
- `SPACK_HTTP_PREFORK=false`
- `SPACK_HTTP_MEMORY_CACHE_ENABLE=true`
- `SPACK_HTTP_MEMORY_CACHE_WARMUP=true`
- `SPACK_HTTP_MEMORY_CACHE_MAX_ENTRIES=1024`
- `SPACK_HTTP_MEMORY_CACHE_MAX_BYTES=67108864`
- `SPACK_HTTP_MEMORY_CACHE_MAX_FILE_SIZE=65536`
- `SPACK_HTTP_MEMORY_CACHE_TTL=5m`
Assets:
- `SPACK_ASSETS_BACKEND=local`
- `SPACK_ASSETS_PATH=/`
- `SPACK_ASSETS_ENTRY=index.html`
- `SPACK_ASSETS_FALLBACK_ON=not_found|forbidden`
- `SPACK_ASSETS_FALLBACK_TARGET=index.html`
Async:
- `SPACK_ASYNC_WORKERS=<int>` (default `runtime.NumCPU()`)
- used as the shared async concurrency limit for batch work
- event bus async dispatch follows the same worker-count setting
Robots:
- `SPACK_ROBOTS_ENABLE=true`
- `SPACK_ROBOTS_OVERRIDE=false`
- `SPACK_ROBOTS_USER_AGENT=*`
- `SPACK_ROBOTS_ALLOW=/`
- `SPACK_ROBOTS_DISALLOW=`
- `SPACK_ROBOTS_SITEMAP=`
- `SPACK_ROBOTS_HOST=`
- when `SPACK_ROBOTS_OVERRIDE=false`, a scanned `robots.txt` asset is served as-is if present
- when no scanned `robots.txt` exists, SPACK generates a simple fallback response from the robots config
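The generated fallback is essentially a plain-text rendering of those config values. A rough sketch of what such generation amounts to (illustrative only; the function and its exact output shape are assumptions, not SPACK's implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// robotsFallback renders a minimal robots.txt from config-style values,
// skipping optional directives that are left empty.
func robotsFallback(userAgent, allow string, disallow []string, sitemap, host string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "User-agent: %s\n", userAgent)
	if allow != "" {
		fmt.Fprintf(&b, "Allow: %s\n", allow)
	}
	for _, d := range disallow {
		fmt.Fprintf(&b, "Disallow: %s\n", d)
	}
	if sitemap != "" {
		fmt.Fprintf(&b, "Sitemap: %s\n", sitemap)
	}
	if host != "" {
		fmt.Fprintf(&b, "Host: %s\n", host)
	}
	return b.String()
}

func main() {
	// Mirrors the defaults above: user-agent *, allow /, nothing else set.
	fmt.Print(robotsFallback("*", "/", nil, "", ""))
}
```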
Compression:
- `SPACK_COMPRESSION_ENABLE=true`
- `SPACK_COMPRESSION_MODE=lazy|warmup|off`
- `SPACK_COMPRESSION_CACHE_DIR=<path>`
- `SPACK_COMPRESSION_MIN_SIZE=1024`
- `SPACK_COMPRESSION_WORKERS=2`
- `SPACK_COMPRESSION_QUEUE_SIZE=128`
- `SPACK_COMPRESSION_ENCODINGS=br,zstd,gzip`
- `SPACK_COMPRESSION_CLEANUP_EVERY=5m`
- `SPACK_COMPRESSION_MAX_AGE=168h`
- `SPACK_COMPRESSION_IMAGE_MAX_AGE=336h`
- `SPACK_COMPRESSION_ENCODING_MAX_AGE=168h`
- `SPACK_COMPRESSION_MAX_CACHE_BYTES=1073741824`
- `SPACK_COMPRESSION_ENCODING_MAX_CACHE_BYTES=0`
- `SPACK_COMPRESSION_IMAGE_MAX_CACHE_BYTES=0`
- `SPACK_COMPRESSION_BROTLI_QUALITY=5`
- `SPACK_COMPRESSION_ZSTD_LEVEL=3`
- `SPACK_COMPRESSION_GZIP_LEVEL=5`
- scanned sidecars are recognized when their original asset exists, for example `app.js.br`, `app.js.zst`, and `app.js.gz`
- sidecar variants keep their source files in place and are not removed as generated artifacts
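Sidecar recognition boils down to stripping a known compression suffix and checking that the uncompressed original exists in the scanned tree. A simplified sketch of that rule (an assumed illustration, not SPACK's actual scanner code):

```go
package main

import (
	"fmt"
	"strings"
)

// sidecarEncodings maps precompressed file suffixes to content codings.
var sidecarEncodings = map[string]string{
	".br":  "br",
	".zst": "zstd",
	".gz":  "gzip",
}

// classifySidecar reports the original asset name and content coding for a
// scanned file, or ok=false when the file is a plain source asset.
// exists tells whether a given path is present in the scanned source tree.
func classifySidecar(name string, exists func(string) bool) (original, encoding string, ok bool) {
	for suffix, enc := range sidecarEncodings {
		if strings.HasSuffix(name, suffix) {
			orig := strings.TrimSuffix(name, suffix)
			// Only treat it as a variant when the uncompressed original exists.
			if exists(orig) {
				return orig, enc, true
			}
		}
	}
	return "", "", false
}

func main() {
	tree := map[string]bool{"app.js": true, "app.js.br": true, "archive.gz": true}
	exists := func(p string) bool { return tree[p] }

	orig, enc, ok := classifySidecar("app.js.br", exists)
	fmt.Println(orig, enc, ok) // app.js br true

	// archive.gz has no "archive" original, so it stays a plain asset.
	_, _, ok = classifySidecar("archive.gz", exists)
	fmt.Println(ok) // false
}
```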
Images:
- `SPACK_IMAGE_ENABLE=true`
- `SPACK_IMAGE_WIDTHS=640,1280,1920`
- `SPACK_IMAGE_FORMATS=`
- `SPACK_IMAGE_JPEG_QUALITY=78`
- request width variants with `?w=<width>`
- image processing uses the built-in Go image pipeline and supports `jpeg` and `png`
- request format variants with `?format=jpeg|png`
- format can also be negotiated from `Accept: image/jpeg,image/png`
- combine both as `?w=640&format=jpeg`
- when `SPACK_IMAGE_FORMATS` is set, warmup/default image planning can pre-generate those formats
- when `SPACK_IMAGE_FORMATS` is empty, request-time `Accept` negotiation can still ask for any supported output format
Frontend hints and cache:
- `SPACK_FRONTEND_RESOURCE_HINTS_ENABLE=true`
- `SPACK_FRONTEND_RESOURCE_HINTS_EARLY_HINTS=false`
- `SPACK_FRONTEND_RESOURCE_HINTS_MAX_LINKS=16`
- `SPACK_FRONTEND_RESOURCE_HINTS_MAX_HEADER_BYTES=4096`
- `SPACK_FRONTEND_IMMUTABLE_CACHE_ENABLE=true`
- `SPACK_FRONTEND_IMMUTABLE_CACHE_MAX_AGE=8760h`
- HTML responses can emit `Link` hints for module scripts, styles, fonts, prefetches, preconnects, and dns-prefetch entries found in the served HTML
- HTTP `103 Early Hints` can be enabled separately for clients and proxies that handle informational responses safely
- fingerprinted assets such as `app-deadbeef.js`, `app.DiwrgTda.css`, or `app-deadbeef.js.map` can receive long-lived immutable cache headers while `index.html` stays revalidated
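"Fingerprinted" here means the filename embeds a content hash, as in the examples above. The sketch below shows the kind of heuristic pattern such detection typically uses; it is an assumption for illustration, not SPACK's actual rule, and like any heuristic it can misfire on hash-like name segments:

```go
package main

import (
	"fmt"
	"regexp"
)

// fingerprinted matches names like app-deadbeef.js or app.DiwrgTda.css:
// a base name, a separator, a hash-looking token of 8+ alphanumerics,
// then a static-asset extension, optionally followed by .map.
var fingerprinted = regexp.MustCompile(
	`^.+[-.][0-9a-zA-Z]{8,}\.(js|css|woff2?|svg|png|jpe?g)(\.map)?$`)

func main() {
	names := []string{
		"app-deadbeef.js",
		"app.DiwrgTda.css",
		"app-deadbeef.js.map",
		"index.html",
		"main.js",
	}
	for _, name := range names {
		fmt.Println(name, fingerprinted.MatchString(name))
	}
}
```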
Debug and metrics:
- `SPACK_DEBUG_ENABLE=true`
- `SPACK_DEBUG_PPROF_PREFIX=/pprof`
- `SPACK_DEBUG_LIVE_PORT=8080`
- `SPACK_METRICS_PREFIX=/prometheus`
- request logs include `delivery=memory_cache_hit|memory_cache_fill|sendfile|sendfile_range` for static asset responses
- `/prometheus` includes HTTP request metrics
- `/prometheus` includes HTTP runtime gauges such as `spack_http_requests_in_flight`
- `/prometheus` includes asset delivery metrics labeled by delivery mode
- `/prometheus` includes health runtime metrics such as `spack_health_check_runs_total`, `spack_health_check_duration_seconds`, `spack_health_reports_total`, and `spack_health_report_duration_seconds`
- `/prometheus` includes asset cache hit/miss/fill/warmup/eviction counters
- `/prometheus` includes pipeline runtime metrics such as queue length, enqueue drop/dedupe, and cleanup activity
- `/prometheus` includes pipeline stage execution metrics such as `spack_pipeline_stage_runs_total`, `spack_pipeline_stage_duration_seconds`, `spack_pipeline_variants_generated_total`, and `spack_pipeline_variants_generated_bytes_total`
- `/prometheus` includes catalog gauges such as `spack_catalog_assets_current`, `spack_catalog_variants_current`, and `spack_catalog_source_bytes_current`
- `/prometheus` includes resolver metrics such as `spack_resolver_resolutions_total`, `spack_resolver_resolution_duration_seconds`, and `spack_resolver_generation_requests_total`
- `/prometheus` includes background task metrics such as `spack_task_runs_total`, `spack_task_run_duration_seconds`, `spack_source_rescan_*`, `spack_artifact_janitor_*`, and `spack_cache_warmer_*`
- `/prometheus` includes scheduler runtime metrics such as `spack_task_scheduler_running`, `spack_task_scheduler_events_total`, `spack_task_scheduler_job_events_total`, `spack_task_scheduler_job_execution_seconds`, `spack_task_scheduler_job_scheduling_delay_seconds`, `spack_task_scheduler_concurrency_limit_total`, `spack_task_scheduler_jobs_registered_current`, and `spack_task_scheduler_jobs_running_current`
- `/prometheus` includes async concurrency gauges such as `spack_async_capacity_current`
- `/prometheus` includes async concurrency execution metrics such as `spack_async_batch_runs_total`, `spack_async_batch_duration_seconds`, `spack_async_task_runs_total`, `spack_async_task_duration_seconds`, and `spack_async_task_submissions_total`
- `/prometheus` includes `dix` runtime lifecycle metrics with the `spack_dix_*` prefix
- `spack_dix_*` covers app build/start/stop, health checks, and state transitions
- representative metrics include `spack_dix_build_total`, `spack_dix_start_total`, `spack_dix_health_check_total`, and `spack_dix_state_transition_total`
- `/prometheus` includes static runtime metadata gauges such as `spack_build_info`, `spack_config_info`, and `spack_runtime_start_time_seconds`
- `spack_build_info` exposes low-cardinality build labels such as app version, Go version, and VCS revision
- `spack_config_info` exposes low-cardinality runtime mode labels such as asset backend, compression mode, memory-cache state, frontend hint/cache state, robots state, image state, and logger level
- `spack_runtime_start_time_seconds` exposes the current process start timestamp for restart and uptime correlation
Logger:
- `SPACK_LOGGER_LEVEL=debug`
- `SPACK_LOGGER_CONSOLE_ENABLED=true`
- `SPACK_LOGGER_FILE_ENABLED=false`
- `SPACK_LOGGER_FILE_PATH=<path>`
- `SPACK_LOGGER_FILE_MAX_SIZE=<int>`
- `SPACK_LOGGER_FILE_MAX_AGE=<int>`
- `SPACK_LOGGER_FILE_MAX_FILES=<int>`
Internal scheduled tasks:
- SPACK runs an internal source rescan every 5 minutes
- local filesystem sources also trigger debounced rescans from `fsnotify` change events
- the rescan reconciles mounted source files with the in-memory catalog
- removed or changed source assets cause stale generated variants and cache entries to be invalidated
- the internal scheduler is instrumented through `gocron`'s `SchedulerMonitor` interface and exports per-job registration, run, failure, execution-time, scheduling-delay, and concurrency-limit metrics
- this scheduler is internal runtime behavior and is not exposed as user configuration
Example startup commands:
```shell
# use environment variables / dotenv only
go run .

# load one config file and override a few values from CLI
go run . --config .\spack.yaml --http.port=8080 --assets.root=.\dist

# layer multiple config files
go run . --config .\spack.yaml --config .\spack.local.yaml
```

Run tests:

```shell
go test ./...
```

Run repeatable performance baselines:

```shell
task perf:bench
```

Capture CPU and memory profiles for a single subsystem:

```shell
task perf:profile:resolver
task perf:profile:cache
task perf:profile:pipeline
task perf:profile:http
go tool pprof .\tmp\perf\resolver.cpu.pprof
```

The current baseline focuses on four hot paths:
- `resolver.Resolve` for direct asset, encoding variant, and image variant selection
- `assetcache.GetOrLoad` for cache hit and miss behavior
- `pipeline.Service.Enqueue` for unique and deduplicated lazy-generation requests
- HTTP asset delivery through memory-cache-hit and sendfile paths
Recent optimization work has primarily targeted:
- request-path normalization
- resolver variant negotiation and per-request reuse
- HTTP middleware short-circuiting and response-header emission
Profile artifacts are written to `tmp/perf/` so later optimization passes can compare against the same entrypoints.
Releases are tag-driven. Pushing a `v*` tag starts the Release workflow, which uses GoReleaser to publish:
- GitHub Release archives and checksums
- `ghcr.io/daiyuang/spack` container images
- Alpine runtime tags: `latest`, `<version>`, `alpine`, and `alpine-<version>`
- Debian runtime tags: `debian` and `debian-<version>`
Before publishing, validate the release configuration locally:
```shell
task release:goreleaser:check
```

Create and push a patch release:

```shell
task release:bump:patch
```

Use `task release:bump:minor` or `task release:bump:major` when the next release should advance those version segments.
Use the SPA fixture:
```powershell
pnpm -C test build
$env:SPACK_ASSETS_ROOT = (Resolve-Path .\test\build\dist).Path
go run .
```

Or run the fixture with CLI flags only:

```shell
pnpm -C test build
go run . --assets.root=./test/build/dist --assets.path=/ --assets.entry=index.html
```

The current architecture leaves room for:
- alternate source backends beyond the local asset tree
- richer cache policy strategies beyond TTL and max-size eviction
- more pipeline stages built on the same artifact/catalog/runtime model