@affectively/bun-isolated-runner runs Bun test files in isolated subprocesses so mocks and global state do not leak between files.
The fair brag is straightforward: this package solves a real testing problem with a small, understandable tool. It gives you a CLI, a programmatic API, path filtering, telemetry, and sticky-pass caching without making the core idea harder than it is.
Bun is fast, but shared process state can make tests interfere with each other. That shows up as:
- flaky tests that fail only when run as part of the full suite
- leaked mocks between files
- singletons or globals affecting later tests
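A tiny illustration of that last failure mode. This is a hypothetical example (the `cache` singleton is invented for illustration), showing how module-level state written by one test file is still visible to the next file when both share a process:

```typescript
// Illustrative only: a module-level singleton, the kind of state that
// survives between test files when they share one process.
const cache = new Map<string, number>();

// "File A" seeds the cache as part of its setup.
function fileA(): number {
  cache.set("user:1", 42);
  return cache.size; // 1, as File A expects
}

// "File B" assumes a fresh cache, but in a shared process it inherits
// File A's entry. In a fresh subprocess, cache.size would start at 0.
function fileB(): number {
  return cache.size;
}

const a = fileA();
const b = fileB();
console.log(a, b); // in one shared process: 1 1 — File B sees File A's state
```

Running each file in its own subprocess resets that map (and every other global) to a clean slate.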
Run each file in a fresh subprocess:

```sh
npx bun-isolated
# or
bun-isolated src/**/*.test.ts
```

Install:

```sh
npm install -D @affectively/bun-isolated-runner
# or
bun add -D @affectively/bun-isolated-runner
```

CLI usage:

```sh
npx bun-isolated
npx bun-isolated "src/**/*.test.ts"
npx bun-isolated --parallel=4
npx bun-isolated --preload ./bun.preload.ts
npx bun-isolated --sticky-pass
```

package.json scripts:

```json
{
  "scripts": {
    "test": "bun-isolated --sticky-pass",
    "test:changed": "bun-isolated --changed --sticky-pass",
    "test:ci": "bun-isolated --parallel=4 --bail"
  }
}
```

Programmatic API:

```ts
import { runIsolated, findTestFiles } from '@affectively/bun-isolated-runner';

const files = await findTestFiles('src/**/*.test.ts');
const results = await runIsolated(files, {
  parallel: 4,
  timeout: 30000,
  env: { NODE_ENV: 'test' },
});
```

- each file gets a clean Bun process
- the CLI stays simple
- path filters, bail behavior, and failure limits are already there
- telemetry can help tune slow runs
- sticky-pass caching can speed repeated local runs
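The core mechanism is simple enough to sketch. This is an illustrative approximation, not the package's actual implementation; it uses `node:child_process` so it runs anywhere, where the real runner would spawn `bun test <file>` per file. Each simulated "test file" checks that no previous run's global leaked into its process:

```typescript
import { spawnSync } from "node:child_process";

// Illustrative sketch only: run each "test file" in a fresh process so
// in-process state (mocks, globals, singletons) cannot leak between files.
// Each file is simulated by an inline script that flags leakage: if a
// previous run's global survived, it would print LEAKED instead of CLEAN.
const script = `
  if (globalThis.__leaked) { console.log("LEAKED"); process.exit(1); }
  globalThis.__leaked = true;
  console.log("CLEAN");
`;

const outputs: string[] = [];
for (const file of ["a.test.ts", "b.test.ts"]) {
  // The real runner would spawn something like: bun test <file>
  const result = spawnSync(process.execPath, ["-e", script], { encoding: "utf8" });
  outputs.push(`${file}: ${result.stdout.trim()}`);
}
console.log(outputs.join("\n"));
```

Because every iteration gets a brand-new process, both runs print `CLEAN`; in a shared process the second would have seen the first run's global.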
For large test suites (hundreds to thousands of files), the laminar runner applies pipelined execution principles:
```ts
import { runLaminar } from '@affectively/bun-isolated-runner/laminar';

const results = await runLaminar(files, { timeout: 30000 }, {
  workers: 8,
  shardCount: 2,
});
```

- Wallington rotation: batch N+1 dispatches while batch N's results collect, so there is no idle time between batches
- Worthington whip: files split into parallel shards, each shard runs its own rotation pipeline, and results fold at the end
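The rotation idea can be sketched with plain promises. This is an illustrative approximation of the pipelining, not the package's internals: each batch is dispatched before the previous batch's results are awaited, so workers stay busy across batch boundaries.

```typescript
// Stand-in for spawning an isolated subprocess per file.
async function runFile(file: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 10));
  return `${file}: pass`;
}

// Pipelined batch dispatch: batch N+1 starts running before batch N's
// results are collected, so there is no idle gap between batches.
async function runPipelined(files: string[], batchSize: number): Promise<string[]> {
  const results: string[] = [];
  let inFlight: Promise<string[]> | null = null;

  for (let i = 0; i < files.length; i += batchSize) {
    const batch = files.slice(i, i + batchSize);
    const next = Promise.all(batch.map(runFile)); // dispatch batch N+1 immediately
    if (inFlight) results.push(...(await inFlight)); // collect batch N while N+1 runs
    inFlight = next;
  }
  if (inFlight) results.push(...(await inFlight));
  return results;
}

const out = await runPipelined(["a.test.ts", "b.test.ts", "c.test.ts", "d.test.ts"], 2);
console.log(out.length); // 4
```

The whip adds a layer on top: the file list splits into shards, each shard runs a pipeline like this one concurrently, and the per-shard result arrays fold together at the end.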
30 test files, 8 CPU cores, macOS arm64:
| Runner | Strategy | Workers | Files/sec | p50 (ms) | p95 (ms) | Total (s) | Speedup |
|---|---|---|---|---|---|---|---|
| Sequential | sequential | 1 | 3.62 | 200 | 385 | 8.3 | 1.00x |
| Pool (p=2) | pool | 2 | 7.46 | 214 | 303 | 4.0 | 2.06x |
| Pool (p=4) | pool | 4 | 10.37 | 303 | 458 | 2.9 | 2.87x |
| Pool (p=8) | pool | 8 | 11.66 | 460 | 855 | 2.6 | 3.22x |
| Laminar (w=8) | laminar | 8 | 9.63 | 899 | 1209 | 3.1 | 2.66x |
| Laminar (w=8) | laminar | 8 | 11.04 | 658 | 968 | 2.7 | 3.05x |
At 30 files the pool and laminar runners converge around 3x. The laminar architecture is designed for large suites (800+ files) where batch dispatch overlap and shard-level parallelism compound — the rotation amortizes subprocess spawn overhead across batches rather than paying it per-file.
Run the benchmark yourself:
```sh
bun run src/benchmark.ts --files 100 --cwd /path/to/project
```

This package does not need a dramatic pitch. The strongest fair brag is that it gives Bun users a practical isolation tool for test suites that are too stateful for the default shared-process model.