Search before asking
Motivation
There is currently no standardized way to run reproducible performance tests against Fluss. This makes it difficult to detect regressions, compare optimizations, and establish baselines — especially for aggregation merge engine workloads (RBM32/64, LISTAGG, SUM, etc.).
Solution
Introduce a unified benchmarking module (fluss-microbench) that provides a CLI-driven performance testing framework for Fluss clusters.
Key Features:
- YAML-driven scenario configuration covering write, lookup, prefix-lookup, scan, and mixed workloads
- Dual-process architecture: Server (MiniCluster) runs in a forked JVM, Client runs in the main process, enabling independent resource sampling
- Five-layer metric collection: OS process (OSHI), JVM (MXBean), NMT (JVM Native Memory Tracking), application-level (client + server metrics), and JFR (Java Flight Recorder)
- Built-in presets: log-append, kv-upsert-get, kv-agg-mixed, kv-agg-listagg, kv-agg-rbm32, log-filter-pushdown
- Baseline management with --diff-previous and --diff-baseline for regression detection
- Structured JSON-lines stdout output, semantic exit codes, and --quick mode for fast iteration
- Dataset pre-generation for reproducible benchmarks
- HTML/CSV/JSON report generation with environment snapshots
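To make the YAML-driven scenario configuration concrete, a scenario file could look roughly like the sketch below. All key names (`cluster`, `workload`, `table`, `metrics`, etc.) are hypothetical placeholders for illustration only; the actual schema would be fixed during design review.

```yaml
# Hypothetical scenario file; key names are illustrative, not final.
scenario: kv-agg-rbm32
cluster:
  tablet-servers: 1          # MiniCluster server runs in a forked JVM
workload:
  type: mixed                # write / lookup / prefix-lookup / scan / mixed
  write-ratio: 0.8
  lookup-ratio: 0.2
  duration: 60s
table:
  merge-engine: aggregation
  agg-function: rbm32
metrics:
  jfr: true                  # enable Java Flight Recorder sampling
  nmt: true                  # enable JVM Native Memory Tracking
```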
CLI Commands:
- run, generate, validate, diff, baseline, list, clean
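Using only the commands and flags named above, an illustrative regression-check workflow might look like the following. The exact syntax is a sketch of the proposed interface, not a final contract:

```shell
# Pre-generate a reproducible dataset for a built-in preset (illustrative syntax)
fluss-microbench generate --preset kv-agg-rbm32

# Fast iteration run, diffing against the previous result
fluss-microbench run --preset kv-agg-rbm32 --quick --diff-previous

# Full run compared against a stored baseline; the semantic exit code
# signals whether a regression was detected
fluss-microbench run --preset kv-agg-rbm32 --diff-baseline
```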
Anything else?
No response
Willingness to contribute