Skip to content

cmd/evm: add enginexrunner alongside staterunner and blockrunner#21027

Merged
taratorio merged 16 commits intomainfrom
worktree-cmd-enginextest
May 7, 2026
Merged

cmd/evm: add enginexrunner alongside staterunner and blockrunner#21027
taratorio merged 16 commits intomainfrom
worktree-cmd-enginextest

Conversation

@taratorio
Copy link
Copy Markdown
Member

cmd/evm: add enginextest command + EngineXTestRunner hardening

Headline result

Running the full EEST blockchain_tests_engine_x set (fixtures_develop v5.4.0):

metric value
Tests passed 63920 / 63920 (zero failures)
Wall time 5:09 (default --workers 8, TMPDIR=/dev/shm)
Peak RSS 4.69 GB
Test files scanned 2844 (after skipping pre_alloc/)
Test groups (unique fork × preAllocHash) 35733
RAMDISK=$(./tools/create-ramdisk)
TMPDIR=$RAMDISK ./build/bin/evm enginextest \
    --pre-alloc-dir <pre_alloc_dir> \
    <blockchain_tests_engine_x_root>

What this adds

A new evm enginextest subcommand inspired by #20315, but built around the existing EngineXTestRunner (full Erigon node + JSON-RPC engine API) rather than building a fresh execmodule-based harness. Each test fixture runs through the same engine API path Erigon serves to a real consensus client, with one tester cached per (fork, preAllocHash) group and reused across the tests in that group.

Highlights of the CLI:

  • --pre-alloc-dir (required), --run (regex), --workers (default 8 — knee of the wall-time curve), --verbosity.
  • Walks a single test path; skips files under any pre_alloc/ directory so users can point at the EEST tree root without contortions.
  • Strict on JSON unmarshal: an unparseable fixture file is now a hard error, not silently skipped.
  • --help includes the cross-platform tools/create-ramdisk recipe and a Linux /dev/shm shortcut.

Files (9 changed, +605 lines)

file change
cmd/evm/enginexrunner.go (new, +255) The subcommand: walks tests, regex-filters, groups by (fork, preAllocHash), schedules across workers, runs each group through EngineXTestRunner.Run then Evict
cmd/evm/main.go (+11) Adds shared RunFlag, WorkersFlag (default 0 → 8 inside the cmd), registers engineXTestCommand
execution/engineapi/engineapitester/engine_x_test_runner.go (+173) Lock-free critical path; Evict(fork, hash) shared with Close; ValidationError/ErrorCode fields with negative-test handling
execution/engineapi/engineapitester/engine_api_tester.go (+19) Per-tester cancel ctx; nodeConfig.MdbxDBSizeLimit = 1 GB so each tester's chaindata MDBX reserves 1 GB instead of 2 TB of virtual address space
execution/engineapi/engineapitester/engine_x_leak_test.go (new, +151, build tag leak) Loop test that creates → evicts a tester N times and prints RSS, VmSize, mmap-line-count, goroutine count, heap. Used to find and verify the leak fixes
cmd/rpcdaemon/cli/config.go (+9 / −) subscribeToStateChangesLoop's 3-second retry sleep is now common.Sleep(ctx, …); ctx done returns and falls through to the existing warn
node/eth/backend.go (+3) Mining-broadcast goroutine select gets case <-ctx.Done(): return as the first case, so it terminates when the eth backend's ctx is cancelled
execution/state/genesiswrite/genesis_write.go (+16) Linux genesisMapSize reduced from 2 TB to 16 GB (still 16× the comment's "1 GB plenty" baseline; matches the existing Windows cap intent)
db/migrations/migrations.go (+8) OpenMigrationsDB caps MapSize at 1 GB; the DB only stores migration names

The journey (skip if you don't care)

The changes here are the result of running the CLI against the full EEST set, watching the wall time, and chasing the next bottleneck repeatedly.

1 — Goroutine leak in EngineApiTester

Every InitialiseEngineApiTester was leaking 2 goroutines: one in node/eth.New.func13 (mining-broadcast loop) and one in cmd/rpcdaemon/cli.subscribeToStateChangesLoop. With ~36 k testers in a full run, that's ~70 k pinned goroutines plus the pinned closures, which OOM'd or thrashed the machine.

Diagnosed by tight EnsureTester → Evict loop sampling RSS / goroutines / mmap-line-count per iteration (the new engine_x_leak_test.go). RSS grew linearly, goroutines grew exactly +2 per iteration. Stack-dump comparison pointed at the two functions above.

Root causes:

  • node/eth.New.func13 only watched Hd.QuitPoWMining. When disableBlockDownload=true (the default in eth.New), Hd is constructed as a zero-value headerdownload.HeaderDownload{}, so QuitPoWMining is nil. SafeClose(nil) is a no-op and <-nil chan blocks forever. → fix: add case <-ctx.Done(): return as the first select case.
  • subscribeToStateChangesLoop watches ctx.Done at the top of its loop, but uses a non-ctx-aware time.Sleep(3 * time.Second) between retries. → fix: replace with common.Sleep(ctx, 3*time.Second).
  • The eth backend's ctx is the long-lived caller-supplied one. → fix: InitialiseEngineApiTester derives a per-tester cancel ctx and registers cancel as the first cleanup on Close.

2 — Lock-free init/close in EngineXTestRunner

The runner held a single sync.Mutex during the slow lifecycle ops (InitialiseEngineApiTester, tester.Close, dir.RemoveAll). With workers ≥ 4 most of the wall time was workers queueing on that lock — speedup capped at ~1.4× regardless of worker count.

The lock now covers only map mutation. Init runs unlocked (with a double-check that closes a duplicate tester if the same key gets created twice — rare in practice, free in the common case). Evict removes the entry under the lock then runs the slow close unlocked. Close snapshots all entries under the lock, clears the map, then drains them unlocked. Wall-time speedup at workers=8: 24:30 → 12:01 (2× over the locked version).

3 — Filesystem journal (the big one)

With the lock-free path live, wall time still plateaued at ~12 min from workers=4 onward. CPU usage capped at ~2.4 cores out of 16; system time was ~47% of total CPU.

I (incorrectly) hypothesised mmap_lock contention from the many MDBX env opens/closes. Sampling kernel wchan per thread proved that wrong: only ~0.05% of waits were in __vm_munmap / do_madvise. The dominant wait was futex_wait_queue, but high system time means that "busy time" is in syscalls, not waits.

The actual culprit turned out to be ext4's journal serialising metadata operations (mkdir, file create, unlink for the per-tester datadir lifecycle). Moving TMPDIR to a tmpfs (/dev/shm or the cross-platform tools/create-ramdisk) cut wall time roughly in half:

storage wall RSS sys time CPU%
ext4 (/tmp) 12:01 4.33 GB 834s 242%
tmpfs (/dev/shm) 5:03 4.59 GB 433s 449%

The --help text now documents this with both the tools/create-ramdisk recipe and the /dev/shm shortcut.

4 — MDBX virtual address space ceiling

At workers=32 the runner crashed with mdbx_env_open: cannot allocate memory even though physical RAM was nowhere near the limit. Each MDBX env defaults to 2 TB MapSize. A tester opens three: chaindata, the genesis temp DB, and the migrations DB. That's ~6 TB of virtual address space per concurrent tester; at 32 workers we exhausted the 128 TB user-space address bound on x86-64.

Caps applied (all opt-in for the per-tester setup; production node behaviour unchanged):

  • nodecfg.Config.MdbxDBSizeLimit = 1 GB for the engine-api tester (caps chaindata + consensus envs).
  • genesisMapSize = 16 GB on Linux in GenesisToBlock (the comment in that file already said "1 GB is plenty for any practical genesis"; this leaves 16× headroom).
  • OpenMigrationsDB MapSize capped at 1 GB; that DB stores migration names only.

Per-tester reservation goes from ~6 TB → ~18 GB; workers=32 now runs cleanly, and the path scales to 96 workers in subset benchmarks without crashing.

5 — Negative-test handling

The EEST fixtures include malformed-payload tests with validationError (e.g. "BlockException.INCORRECT_BLOCK_FORMAT") and errorCode (e.g. "-32602") hints. The spec allows the engine to either return INVALID or fail the JSON-RPC call. Erigon does the latter for several Cancun blob-transition and Prague EIP-7685 fixtures. processNewPayload now treats any failure (RPC error or non-Valid status) as success when validationError or errorCode is set — matches the spec, brings the failing-test count from 46 → 0.

Verification

  • make lint clean (run multiple times).
  • make test-all with ERIGON_EXECUTION_TESTS_TMPDIR=/mnt/erigon-ramdisk GOGC=80: 222 packages green (one flake on first run — TestInvalidReceiptHashHighMgas 403 stale token after 607s of contention; passed cleanly in 147s on the retry, no deterministic regression).
  • Standalone evm enginextest full set: 63920 / 0 as in the headline.
  • Leak loop test (-tags leak) shows goroutine count flat at 3 across 40 iterations.

Worker-count sweep (post-fix, full set, tmpfs)

workers wall RSS
4 12:16 4.09 GB
8 (default) 5:09 4.69 GB
16 5:13 4.89 GB
24 11:54¹ 4.68 GB

¹ workers ≥ 16 currently plateau at ~5 min wall on tmpfs; the next bottleneck is Go-level futex contention somewhere I haven't pinned down (see follow-up below). The 24-worker number above is from a pre-refactor disk run; updating once the next round of measurements lands.

Follow-ups (not in this PR)

  • Per-Ethereum BlockReadAheader — a separate refactor (already prototyped, holding for its own commit) replaces the package-level globalReadAheader in execution/exec/blocks_read_ahead.go with a per-backend instance. The shared WaitGroup race (panic sync: WaitGroup is reused before previous Wait has returned) shows up at workers ≥ 16 under high concurrency. Fix: pass *exec.BlockReadAheader through ExecModule, SendersCfg, ExecuteBlockCfg, stageloop.New{Default,Pipeline,InMemory}…. Correctness fix only — does not move wall time.
  • Identify the futex-bound bottleneck. Wall time plateaus at ~5 min on tmpfs from workers=4 onward; CPU caps at ~450%. A pprof CPU + goroutine profile mid-run would pinpoint the dominant Go-level lock or channel.

Copy link
Copy Markdown
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

Adds a new evm enginextest subcommand and hardens the underlying EngineX/engine-api tester infrastructure to make large-scale EEST blockchain_tests_engine_x runs faster and more reliable (reduced lock contention, fixed shutdown leaks, and reduced MDBX virtual address space reservations).

Changes:

  • Introduces evm enginextest CLI that walks engine-x fixtures, regex-filters, groups by (fork, preAllocHash), and executes groups in parallel workers with eviction.
  • Refactors EngineXTestRunner cache lifecycle (eviction + lock scope reductions) and improves negative-test handling in processNewPayload.
  • Reduces MDBX MapSize reservations for per-tester DBs (tester chaindata limit, genesis temp DB map size, migrations DB cap) and improves ctx-cancellation shutdown paths to avoid goroutine leaks; adds a tagged leak-loop test.

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 3 comments.

Show a summary per file
File Description
cmd/evm/enginexrunner.go New enginextest command: fixture discovery, grouping, worker scheduling, runner usage + eviction.
cmd/evm/main.go Adds shared --run / --workers flags and registers the new subcommand.
execution/engineapi/engineapitester/engine_x_test_runner.go Refactors tester caching/eviction and updates negative-test semantics.
execution/engineapi/engineapitester/engine_api_tester.go Adds per-tester cancellable ctx and caps MDBX MapSize for tester DBs.
execution/engineapi/engineapitester/engine_x_leak_test.go Adds -tags leak loop test to detect cleanup leaks via RSS/VM/goroutine sampling.
cmd/rpcdaemon/cli/config.go Makes retry backoff sleep ctx-aware in state-change subscription loop.
node/eth/backend.go Ensures mined-block listener goroutine exits on ctx cancellation.
execution/state/genesiswrite/genesis_write.go Lowers Linux/macOS genesis temp DB MapSize reservation (16GB) while keeping Windows at 1GB.
db/migrations/migrations.go Caps migrations DB MapSize to 1GB to reduce VA-space pressure.

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

Comment on lines 103 to +112
func (extr *EngineXTestRunner) Close() error {
extr.mu.Lock()
defer extr.mu.Unlock()
var entries []testerEntry
for _, perAlloc := range extr.testers {
for _, entry := range perAlloc {
entries = append(entries, entry)
}
}
extr.testers = nil
extr.mu.Unlock()
err = common.Sleep(ctx, 3*time.Second)
if err == nil {
continue
}
Comment on lines +119 to +126
statBytes, err := os.ReadFile("/proc/self/status")
require.NoError(t, err)
rssKb := procStatusKb(string(statBytes), "VmRSS:")
vmKb := procStatusKb(string(statBytes), "VmSize:")

mapsBytes, err := os.ReadFile("/proc/self/maps")
require.NoError(t, err)
mapsLines := bytes.Count(mapsBytes, []byte{'\n'})
Copy link
Copy Markdown
Member

@yperbasis yperbasis left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Major

  1. OpenMigrationsDB MapSize cap applies to all Erigon installations, not just the tester

db/migrations/migrations.go:91 — the unconditional MapSize(1 * datasize.GB) is hit by:

  • node/node.go:370 (every production node startup)
  • cmd/integration/commands/{root,stages}.go
  • the new tester

The PR description frames the cap as a "per-tester" concern but the code change is global. 1 GB is almost certainly fine forever for migration metadata, but the framing in the
description undersells the blast radius. Either:

  • mention this explicitly in the PR description so reviewers/operators know the cap is universal, or
  • gate it behind a flag/argument so production keeps the existing 2 TB and only the engine-x flow caps it.

I'd lean toward (a) — operationally the cap is safe — but it shouldn't be hidden in the "MDBX virtual address space ceiling" section.

  1. EngineXTestRunner.Close() permanently nils the map; subsequent ops panic

execution/engineapi/engineapitester/engine_x_test_runner.go:111 sets extr.testers = nil under the lock. After Close:

  • Evict(...) is fine (extr.testers[fork] on nil map returns the zero value; delete(nil, ...) is a no-op).
  • getOrCreateTester(...) is not fine — line 228 (extr.testers[fork] = perAlloc) panics with assignment to entry in nil map because the read on line 225 returns (nil, false) and the
    code then writes to extr.testers.

In the CLI's deferred-Close pattern this race can't be triggered because workers all finish before Close runs. But this is brittle and undocumented. Two options:

  • add a closed bool flag, check it in getOrCreateTester and return an error; or
  • replace extr.testers = nil with extr.testers = map[Fork]map[PreAllocHash]testerEntry{} (safe to write into; subsequent Closes also work).

If EngineXTestRunner becomes a library used outside cmd/evm, the panic is a real footgun.


Medium

  1. The Close() comment overpromises parallelism

// The map is snapshotted under the lock and then drained without
// it, so the slow tester.Close + dir.RemoveAll work runs in parallel rather
// than serialised behind extr.mu.

The drain loop at engine_x_test_runner.go:113-119 is a single goroutine processing entries sequentially. Each evict(entry) runs unlocked, but they don't run in parallel with each
other. Either:

  • spawn a goroutine per entry inside Close (with a WaitGroup — the unused wg field could finally do something) and actually parallelize, or
  • correct the comment to "runs without the lock so concurrent Evict / getOrCreate calls aren't blocked".

Given Close is a one-shot at end-of-run and the per-entry close is seconds at worst, parallelizing is probably not worth it. Just fix the comment.

  1. subscribeToStateChangesLoop retry-error gets overwritten

cmd/rpcdaemon/cli/config.go:255-258:

err = common.Sleep(ctx, 3*time.Second)
if err == nil {
continue
}
…falls through to log.Warn("[rpcdaemon subscribeToStateChanges]", "err", err).

Previously the log line printed the original retryable error (transport closed / EOS). Now, on a ctx cancellation during the sleep, it prints err=context canceled and the original
cause is lost. Consider:

sleepErr := common.Sleep(ctx, 3*time.Second)
if sleepErr == nil {
continue
}
// fall through and log the retryable err (sleepErr is just ctx done)

Minor, but it's a regression in shutdown observability that came along for the ride.

  1. expectFailure is very loose

engine_x_test_runner.go:340 + 369-381 — when validationError or errorCode is non-empty, any RPC error or non-Valid status counts as a pass, with no cross-check against the actual
error. A fixture marked errorCode = "-32602" (invalid params) would pass even if the engine returned -32603 (internal error) — including a panic-driven 500.

The comment on the type definition acknowledges this is intentional ("Strict code/message matching is intentionally skipped — EEST fixtures may be rejected at the JSON-RPC
parameter-validation step or by the payload validator depending on implementation, and both forms are spec-permitted"), so this is a documented trade-off. But it does meaningfully
weaken what these fixtures verify. Worth tightening to "JSON-RPC error OR INVALID status, but not internal-server-error" if EEST distinguishes those.


Recommendation

The major items (#1, #2) are not blockers — #1 is a documentation/scope concern and #2 is a latent panic that the CLI usage doesn't trigger. It'd be nice to:

  1. PR description: explicit note that OpenMigrationsDB cap is universal (or move to a flag).
  2. Either guard getOrCreateTester against post-Close use, or document the contract.
  3. Fix the misleading "in parallel" comment in Close().

Everything else is small and can be a follow-up.

@taratorio taratorio added this pull request to the merge queue May 7, 2026
Merged via the queue into main with commit 12113dd May 7, 2026
42 checks passed
@taratorio taratorio deleted the worktree-cmd-enginextest branch May 7, 2026 14:08
Sahil-4555 pushed a commit to Sahil-4555/erigon that referenced this pull request May 8, 2026
…21058)

continuation of erigontech#20315 and
erigontech#21027

## Summary

Improves the `evm blocktest` and `evm statetest` CLI runners — parallel
workers, JSON output, regex filtering, stdin batch mode — plus a few
correctness fixes (EIP-7702 fixture parsing, pre-Prague SetCode
rejection, fresh-DB per subtest, goroutine/datadir leak in `RunCLI`).

End-to-end benchmarks against `fixtures_develop.tar.gz` v5.4.0 on a
16-core host with `tmpfs` (`tools/create-ramdisk`,
`TMPDIR=/mnt/erigon-ramdisk/tmp`), 12 workers / `-parallel 12`:

### State tests

| Run | Set | Tests | Pass | Fail | Wall |
|---|---|---:|---:|---:|---:|
| `evm statetest` | all `state_tests/` | 63,556 | 63,519 | 37 |
**1m59s** |
| `evm statetest` | `static/state_tests/` minus `stTimeConsuming`
(matches `TestState`) | 25,294 | 25,285 | 9 | **47s** |
| `go test -run '^TestState$'` | as configured | 25,294 | 25,294 | 0 |
50s wall (46.7s reported) |

The 9/37 CLI failures are real Erigon validation gaps surfaced by the
CLI's strict `checkError` (EIP-4844 blob `TYPE_3_TX_*` checks, EIP-2930
pre-fork tx-type rejection). `TestState`'s wrapper is permissive — `if
err != nil && len(ExpectException) > 0 { return nil }` — so it ignores
whether the expected error actually fired.

### Blockchain tests

| Run | Tests | Pass | Fail | Wall |
|---|---:|---:|---:|---:|
| `evm blocktest --workers=12` — entire `blockchain_tests/` (no skips) |
**69,256** | 69,256 | 0 | **3m34s** |
| `evm blocktest --workers=12` — Go-test subset only | 17,671 | 17,671 |
0 | 1m04s |
| `go test -parallel 12` — 5 `TestExecutionSpecBlockchain*` packages |
17,671 | 17,671 | 0 | 1m02s |

CLI covers ~4× more blockchain-test subtests than the existing 5 Go test
packages combined. The bulk of the gap is
`blockchain_tests/static/state_tests/` (~40,855 subtests in
blockchain-test format), which `TestExecutionSpecBlockchain` skips with
the comment *"Tested in the state test format by TestState"* — but
`TestState` walks `state_tests/static/state_tests/` (state-test format),
a different directory with different end-to-end coverage. The remaining
~10,730 are 7 "very slow" files (BLS, blob-tx combinations,
intrinsic-gas tx, stack-overflow) that no Go test currently exercises.

On apples-to-apples (same 17,671 subset), CLI and `go test` are within
3% of each other — both MDBX-bound on per-subtest datadir lifecycle.

---

## Changes

### `cmd/evm/staterunner.go`, `cmd/evm/blockrunner.go`,
`cmd/evm/main.go`, `cmd/evm/reporter.go`

CLI runner upgrades shared by both commands:
- New flags: `--workers` (parallel pool), `--jsonout` (machine-readable
array of `{name, pass, stateRoot, fork, error, ...}`), `--run <regex>`
(filter by test key).
- Both commands now accept a directory (recursive walk via
`collectFiles`) or stdin batch mode (newline-separated filenames,
one-by-one).
- Worker pool uses an indexed channel + ordered result slice so JSON
output stays deterministic across runs regardless of completion order.
- `report` writes JSON via streaming `json.Encoder` to stdout (no
intermediate `MarshalIndent` allocation) and uses a buffered writer for
the human-readable path.
- `testResult` carries `Fork` and always includes the `error` field
(empty string when passing) so JSON output is shape-stable.
- `runStateTest` / `runBlockTest` propagate JSON-unmarshal errors
instead of silently skipping non-fixture files.

### `cmd/evm/staterunner.go` — fresh DB per subtest

Previously the runner created one `temporaltest.NewTestDB` for the whole
batch and reused the same write tx across subtests. State from a failing
test (or even a successful one with side effects) leaked into the next
subtest's pre-state. Now each subtest gets its own `os.MkdirTemp` +
datadir + `temporaltest.NewTestDB` + tx, all torn down before moving on.
With `--workers=N` this is also the only way to safely parallelize,
since each goroutine needs its own MDBX env. Infrastructure errors
during setup (`MkdirTemp`, `BeginTemporalRw`) mark that subtest failed
and continue with the next — they don't abort the whole batch.

### `execution/tests/testutil/state_test_util.go` — EIP-7702 fixture
parsing

EEST emits authorization lists with raw fields like `"chainId": "0x00"`
(leading-zero hex), which `hexutil.Big`'s strict parser rejects. New
`stAuthorization` mirror struct uses `math.HexOrDecimal256` and converts
to `types.Authorization` via `ToAuthorization()`.

The empty list `"authorizationList": []` is semantically meaningful — it
marks the tx as type-4 SetCode (changes intrinsic gas) even with zero
entries. A custom `UnmarshalJSON` peeks at the raw JSON to set
`IsSetCodeTx = true` whenever the key is present, so callers can
distinguish "no `authorizationList` key" (legacy/regular tx) from "empty
`authorizationList`" (SetCode tx with no auths).

`Run()` gains a `checkError` helper modeled on geth's: distinguishes
- err==nil + no expected → pass
- err==nil + expected → "expected error X, got no error"
- err!=nil + no expected → "unexpected error: X"
- err!=nil + expected → pass

When an error was expected, post-state root is only re-checked if
`post.Root` is explicitly set (non-zero hash).

`RunNoVerify` now adds a zero-balance touch on the coinbase even for
failing/reverted txs (matches geth's `state_test_util.go`) and
propagates the `ApplyMessage` error through to the caller (was
previously silenced by the trailing `nil` return).

### `execution/protocol/txn_executor.go` — SetCode pre-check

`verifyAuthorities` now distinguishes `auths == nil` (not a SetCode tx)
from `len(auths) == 0` (empty list, still type-4). For non-nil auths it
asserts:
- chain rules are at least Prague (otherwise `"SetCode transaction not
allowed before Prague fork"`),
- not a contract creation (existing check, unchanged),
- list is non-empty (`"SetCode transaction must have at least one
authorization"`).

This pairs with the parsing change above: fixtures using
`"authorizationList": []` to test the empty-list invalid case now drive
a real rejection error, instead of silently being treated as legacy txs.

### `execution/execmodule/execmoduletester/exec_module_tester.go` +
`execution/tests/testutil/block_test_util.go` — RunCLI leak fix

`BlockTest.RunCLI()` previously did `defer m.DB.Close()` only, but
`execmoduletester.New` spawns a background `errgroup` plus an Engine,
BlockSnapshots, and a temp datadir. Across 17k+ blocktest subtests with
12 workers the result was leaked goroutines (CPU at 100% across all
cores), 26k+ leftover `mock-sentry-*` directories under `TMPDIR`, and
the host lagging.

Fix:
- `ExecModuleTester.Close()` now skips the `require.Equal(emt.tb, ...)`
assertion when `tb == nil` (CLI mode panicked otherwise) and removes the
temp datadir at the end (the previous code relied on `tb.Cleanup`, which
doesn't fire in CLI mode).
- `BlockTest.RunCLI()` switches to `defer m.Close()`.

After the fix, the 69,256-test full sweep finishes in 3m34s with 0
leftover datadirs.

---------

Co-authored-by: spencer-tb <spencer.tb@ethereum.org>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants