
feat: add TestDeFiSimulation benchmark for Uniswap V2 workload#3187

Merged
chatton merged 3 commits into main from cian/add-defi-simulation-benchmark on Mar 23, 2026

Conversation


@chatton chatton commented Mar 23, 2026

Summary

  • Add TestDeFiSimulation benchmark measuring Uniswap V2 swap throughput
  • Refactor to use benchConfig with BENCH_* env vars (no hardcoded values)
  • Shares infrastructure with existing gasburner/ERC20 tests

NOTE: many of these results do not look great, but I believe this is largely due to a bottleneck at the spamoor level, which cannot inject transactions fast enough. The full breakdown is below.

Hopefully we can see more realistic results with the dedicated hardware.

Part of #2288

DeFi Simulation Results (locally on docker)

| Run | Block Time | Gas Limit | Scrape | Spammers | Throughput | Wallets | Count/Sp | MGas/s | TPS | Non-Empty % | Avg Gas/Blk | ProduceBlock avg | Overhead | ev-reth GGas/s | Steady State | Status |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 100ms | 100M | 25ms | 4 | 30 | 200 | 10K | 21.48 | 297 | 97.8 | 2.2M | 36ms | 3.1% | 0.128 | 1m0s | PASS |
| 2 | 100ms | 100M | 25ms | 6 | 120 | 400 | 10K | 12.11 | 228 | 94.7 | 1.5M | 30ms | 3.2% | 0.112 | 1m4s | PASS |
| 3 | 250ms | 300M | 50ms | 4 | 60 | 200 | 10K | 12.96 | 187 | 95.8 | 3.2M | 44ms | 2.4% | 0.164 | 1m9s | PASS |
| 4 | 100ms | 100M | 25ms | 8 | 30 | 300 | 5K | 26.04 | 561 | 70.4 | 2.6M | 42ms | 2.6% | 0.085 | 7s | PASS |
| 5 | 500ms | 300M | 50ms | 8 | 50 | 400 | 15K | 3.04 | 71 | 100.0 | 1.5M | 35ms | 2.4% | 0.108 | 3m35s | PASS |
| 6 | 100ms | 100M | 25ms | 4 | 50 | 200 | 15K | 16.78 | 231 | 98.9 | 1.8M | 47ms | 2.8% | 0.075 | 1m28s | PASS |
| 7 | 100ms | 100M | 25ms | 6 | 30 | 200 | 20K | 9.54 | 155 | 99.8 | 1.0M | 46ms | 2.6% | 0.052 | 2m42s | PASS |
| 8 | 2s | 375M | 100ms | 8 | 120 | 500 | 10K | 6.36 | 155 | 61.5 | 20.1M | 127ms | 1.3% | 0.174 | 1m16s | PASS |
| 9 | 1s | 1G | 100ms | 10 | 60 | 500 | 5K | 8.33 | 188 | 100.0 | 8.5M | 202ms | 0.7% | 0.068 | 48s | PASS |
| 10 | 100ms | 100M | 10ms | 4 | 30 | 200 | 20K | 5.80 | 95 | 99.4 | 587K | 28ms | 3.0% | 0.045 | 3m54s | PASS |
| 11 | 100ms | 100M | 25ms | 4 | 30 | 1000 | 10K | 8.48 | 184 | 96.4 | 1.1M | 49ms | 47.5% | 0.086 | 55s | PASS |
| 12 | 100ms | 100M | 25ms | 4 | 30 | 200 | 10K | 8.55 | 133 | 95.0 | 896K | 34ms | 2.9% | 0.050 | 1m26s | PASS |
| 13 | 100ms | 100M | 25ms | 4 | 300 (1s slot) | 200 | 10K | 9.86 | 145 | 98.3 | 1.0M | 31ms | 3.5% | 0.074 | 1m26s | PASS |
| 14 | 100ms | 100M | 25ms | 4 | 40 | 200 | 10K | 14.01 | 193 | ? | 1.4M | 29ms | 3.9% | 0.107 | 1m12s | PASS |

Key Findings

  1. Spamoor injection is the bottleneck, not ev-node or ev-reth. Gas/block never exceeds ~2-3M avg at 100ms blocks (out of 100M limit). The Uniswap scenario has heavy per-spammer warmup (contract deploys, liquidity provision) and limited steady-state send rate.

  2. Best sustained result: 21.48 MGas/s (run 1, 100ms/100M/4x30). Best peak: 26.04 MGas/s (run 4, 8 spammers) but only 7s window.

  3. ev-node overhead is consistently low (0.7-3.9%), confirming it is not a bottleneck.

  4. ProduceBlock avg (28-202ms) varies with block size, not block time config. More gas/block = longer ProduceBlock.

  5. These numbers are spamoor-limited, not system-limited. The gasburner test achieves 297-383 MGas/s on Docker Desktop and 645 MGas/s on dedicated hardware. The DeFi gap is due to spamoor Uniswap scenario send rate, not EVM execution or state root cost.

Summary by CodeRabbit

  • Tests

    • Added benchmark test for DeFi simulation scenarios to measure performance under load.
  • Chores

    • Updated CI workflow to execute new DeFi simulation benchmarks as part of the automated testing pipeline.

chatton added 2 commits March 23, 2026 08:30
Replace hardcoded spammer config with benchConfig fields so all
parameters are controllable via BENCH_* env vars. Add pair_count
and rebroadcast as configurable options for the uniswap-swaps scenario.

github-actions bot commented Mar 23, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

| Build | Format | Lint | Breaking | Updated (UTC) |
| --- | --- | --- | --- | --- |
| ✅ passed | ⏩ skipped | ✅ passed | ✅ passed | Mar 23, 2026, 10:41 AM |


claude bot commented Mar 23, 2026

Claude finished @chatton's task in 2m 9s


Code Review: feat: add TestDeFiSimulation benchmark for Uniswap V2 workload

Review Checklist

  • Code Quality & Architecture
  • Error Handling & Resilience
  • Security
  • Performance & Resource Efficiency
  • Testing & Reliability
  • Documentation & Developer Experience
  • CI Workflow (.github/workflows/benchmark.yml)

Summary

This PR adds a TestDeFiSimulation benchmark that exercises the EVM stack under a realistic Uniswap V2 workload — a solid addition for measuring how workload complexity impacts throughput. The implementation is structurally sound and consistent with the existing TestERC20Throughput pattern. A few issues are worth addressing before merge, along with some minor improvements.


Code Quality & Architecture

test/e2e/benchmark/spamoor_defi_test.go

The test closely mirrors TestERC20Throughput, which is the right instinct for apples-to-apples comparison. However, the two tests diverge in their waiting strategy in a way that creates an asymmetry worth noting:

  • TestERC20Throughput uses waitForSpamoorDone, which polls until all transactions are sent and returns explicit sent/failed counts that are subsequently asserted.
  • TestDeFiSimulation uses waitForMetricTarget (line 88), which only waits for the sent_total counter but never checks failed_total or asserts a non-zero sent count.

This means the DeFi test can silently succeed even if every transaction fails. There is no equivalent to TestERC20Throughput's s.Require().Zero(failed, "no transactions should have failed").

pair_count and rebroadcast bypassing benchConfig (lines 39–40, already noted by CodeRabbit):

"pair_count":  envInt("BENCH_PAIR_COUNT", 1),
"rebroadcast": envInt("BENCH_REBROADCAST", 0),

These parameters are read directly from the environment, bypassing the benchConfig struct that centralises all other tunable parameters. This means:

  1. cfg.log(t) at line 28 does not print them, so the benchmark log is incomplete.
  2. They cannot be injected via an external benchmark harness that populates benchConfig programmatically.

Both should be added as fields on benchConfig (with BENCH_PAIR_COUNT/BENCH_REBROADCAST defaults) and then populated by newBenchConfig.

max_pending hardcoded at 50000 (line 37):

Both DeFi and ERC20 tests hardcode max_pending: 50000. While this is fine for now, it may warrant a benchConfig field in a follow-up when the benchmark suite grows. Not a blocker.


Error Handling & Resilience

Drain timeout hardcoded at 30s (lines 91–95, also noted by CodeRabbit):

drainCtx, drainCancel := context.WithTimeout(ctx, 30*time.Second)

This is the same pattern used in TestERC20Throughput (line 66 of that file), so the choice is consistent. However, Uniswap V2 transactions are significantly heavier than ERC20 transfers — involving multiple contract calls, liquidity pool state changes, and event emissions — meaning mempool drain could legitimately take longer under high-pair-count configurations. The 30s budget may be insufficient when BENCH_PAIR_COUNT > 1.

A comment documenting the rationale (e.g., "sufficient for default pair_count=1; increase if using many pairs") would satisfy the CodeRabbit note without requiring a code change. A config-driven drain timeout would be the clean fix.

waitForMetricTarget on timeout does not fail the test (helpers.go, line 477):

case <-timer.C:
    t.Logf("metric %s did not reach target %.0f within %v", name, target, timeout)
    return

The function logs a warning and returns silently on timeout. In TestDeFiSimulation, the warmup wait (line 76) and sent-total wait (line 88) both rely on this. If the warmup never completes (e.g., Uniswap deploy fails) the test will still proceed to record a start block and collect metrics — producing a measurement window that is entirely warmup traffic. This is a pre-existing issue in the helper, but it is more consequential for the DeFi test where warmup is heavier. Consider adding a t.Fatal call on timeout, or at a minimum, assert pollSentTotal() >= cfg.WarmupTxs after the warmup wait returns.

time.Sleep(5 * time.Second) before requireSpammersRunning (line 64):

The 5-second sleep buys time for spammers to start before the health check. This is a best-effort delay: it is too short on slow CI runners and wastes time on fast ones. The existing requireSpammersRunning function already polls with EventuallyWithT over 5 seconds, so the unconditional sleep is redundant unless the spamoor API itself needs time to register the spammer record. A comment explaining why the sleep is necessary before the eventual check would clarify intent.


Security

No security concerns. The DeFi scenario uses the same e2e.TestPrivateKey funding path as the rest of the benchmark suite, which is appropriate for a local/CI test environment. No new credential handling is introduced.


Performance & Resource Efficiency

refill_amount is 2x higher than ERC20 (line 43):

"refill_amount":  "10000000000000000000", // 10 ETH
"refill_balance": "5000000000000000000",  // 5 ETH

vs. ERC20's 5 ETH/2 ETH. The inline comment explains why ("swaps need ETH for WETH wrapping and router approvals"), which is correct and appreciated.

cfg.log(t) called before setupEnv (line 28 vs. 32):

The log is printed before setupEnv spins up the environment — this is the right order (log config, then start infra) and consistent with good practice. No issue.

No spammer ID slice pre-allocation:

var spammerIDs []int  // line 53

With cfg.NumSpammers known at compile time, this could be make([]int, 0, cfg.NumSpammers). Minor; not a blocker.


Testing & Reliability

No assertion on sent count (contrast with ERC20 at lines 88–89):

As noted above, TestERC20Throughput asserts:

s.Require().Greater(sent, float64(0), "at least one transaction should have been sent")
s.Require().Zero(failed, "no transactions should have failed")

TestDeFiSimulation has neither. A failed Uniswap deploy would produce zero useful data while the test reports success.

requireSpammersRunning only checks status once after 5s (line 65):

requireSpammersRunning polls for up to 5 seconds with 100ms intervals. For Uniswap, the deploy + liquidity provision is the expensive part, and the status may be transiently 0 (deploying) before reaching a running state. The 5-second window should be sufficient for most cases, but could be fragile on overloaded CI runners. Not a blocking concern given the existing pattern in helpers.go.


Documentation & Developer Experience

Test-level comment is excellent (lines 13–22):

The godoc comment for TestDeFiSimulation is well-structured:

  • Explains the workload characteristics ("deep call chains, event emission, multi-contract storage")
  • States the comparison intent with TestERC20Throughput explicitly
  • Identifies primary and diagnostic metrics

This is noticeably better than the ERC20 test's comment and sets a good precedent.

Inline comments on warmup exclusion (lines 50–51, 62–68) are clear and helpful.

cfg.log(t) does not include pair_count or rebroadcast:

As noted in the architecture section, these DeFi-specific parameters are invisible in the test log. When debugging a run from CI logs alone, there is no way to know what pair_count was used.


CI Workflow (.github/workflows/benchmark.yml)

defi-benchmark job is not included in publish-benchmarks needs (line 128):

publish-benchmarks:
  needs: [evm-benchmark]

The defi-benchmark job produces no artifact and is not wired into publish-benchmarks — this is consistent with erc20-benchmark which is also excluded. The TODO comment on line 80 acknowledges this. However, if the intent is for defi-benchmark to eventually publish results (analogous to how erc20-benchmark is called out in the TODO), an artifact upload step should be added now to make the wiring easier later. Not a blocker.

No BENCH_JSON_OUTPUT env var on DeFi job (lines 119–123):

The spamoor-benchmark job uses BENCH_JSON_OUTPUT=spamoor_bench.json to capture output as an artifact. The DeFi job omits this. If result storage is desired in the future, there is no artifact to download. Consistent with ERC20 job, so not a regression, but worth noting.

All action SHAs are pinned — good security hygiene, consistent with the rest of the file.

defi-benchmark timeout is 30m vs. erc20-benchmark at 20m. Given the heavier warmup (contract deploys + liquidity provision), the longer timeout is justified.


Issues Summary

| Severity | Location | Issue |
| --- | --- | --- |
| Medium | spamoor_defi_test.go:88 | No assertion on failed_total; silent failure if all txs fail |
| Medium | spamoor_defi_test.go:39-40 | pair_count/rebroadcast bypass benchConfig; not logged |
| Low | spamoor_defi_test.go:91 | Drain timeout may be too short for pair_count > 1; undocumented |
| Low | spamoor_defi_test.go:64 | Unconditional 5s sleep before requireSpammersRunning; unexplained |
| Low | helpers.go:477 (pre-existing) | waitForMetricTarget silently returns on timeout; no test failure |


coderabbitai bot commented Mar 23, 2026

📝 Walkthrough

Walkthrough

A new E2E benchmark test TestDeFiSimulation (build tag evm) was added to spawn multiple Spamoor Uniswap V2 swap spammers, perform a warm-up phase via metrics polling, measure steady-state transactions across a block range, collect traces and block-level metrics, and record benchmark results.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| DeFi Benchmark Test (test/e2e/benchmark/spamoor_defi_test.go) | Added (*SpamoorSuite).TestDeFiSimulation() (evm build tag): spawns Spamoor Uniswap V2 swap spammers, warm-up via spamoor_transactions_sent_total polling, resets trace timing, waits for target transactions and pending drains, gathers block gas/tx metrics and traces, constructs and writes benchmark results. |
| CI Workflow (.github/workflows/benchmark.yml) | Added defi-benchmark GitHub Actions job that builds evm/da binaries and runs the E2E benchmark test with Go tags evm, targeting TestSpamoorSuite/TestDeFiSimulation. |

Sequence Diagram

sequenceDiagram
    participant Test as Test Harness
    participant Spammer as Spamoor Spammers
    participant Node as Blockchain Node
    participant Metrics as Metrics System
    participant Traces as Trace Collector
    participant Results as Results Writer

    Test->>Spammer: Spawn Uniswap V2 swap spammers
    activate Spammer
    Test->>Test: Sleep (warm-up delay)
    Test->>Spammer: Assert spammers running
    Spammer-->>Test: Running status
    loop Warm-up polling
        Test->>Metrics: Read spamoor_transactions_sent_total
        Metrics-->>Test: Counter value
    end
    Test->>Traces: Reset trace collection window
    Test->>Node: Fetch start block header
    Node-->>Test: Start block header
    loop Steady-state
        Test->>Metrics: Poll until target transaction count reached
        Metrics-->>Test: Count updates
    end
    Test->>Node: Wait for pending tx drain (bounded)
    Node-->>Test: Drain status
    Test->>Node: Fetch end block header
    Node-->>Test: End block header
    Test->>Node: Collect block gas/tx metrics for range
    Node-->>Test: Block metrics
    Test->>Traces: Gather execution traces
    Traces-->>Test: Trace data
    Test->>Results: Construct and record benchmark result
    Results-->>Test: Ack
    deactivate Spammer

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Suggested reviewers

  • alpe
  • julienrbrt

🐰
I spawned spammers with a hop and a cheer,
Counting swaps and blocks as they steer,
Warmed up the chain, then steady we played,
Traces collected, results neatly laid,
A carrot for metrics — benchmark is here!

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title clearly and concisely describes the main change: adding a TestDeFiSimulation benchmark for Uniswap V2 workload, which aligns with the core additions in the changeset. |
| Description check | ✅ Passed | The PR description provides a comprehensive overview with summary points, detailed results table, key findings, and context (#2288). It exceeds the minimal template requirements by explaining rationale and performance analysis. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



codecov bot commented Mar 23, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 61.14%. Comparing base (a44ae05) to head (eeeafe9).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3187      +/-   ##
==========================================
+ Coverage   61.12%   61.14%   +0.01%     
==========================================
  Files         117      117              
  Lines       12082    12082              
==========================================
+ Hits         7385     7387       +2     
+ Misses       3870     3869       -1     
+ Partials      827      826       -1     
| Flag | Coverage Δ |
| --- | --- |
| combined | 61.14% <ø> (+0.01%) ⬆️ |



@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
test/e2e/benchmark/spamoor_defi_test.go (2)

91-95: Drain timeout is hardcoded; consider documenting the rationale.

The 30-second timeout for draining pending transactions is reasonable, but a brief comment explaining why this value was chosen (or why it differs from cfg.WaitTimeout) would help future maintainers understand the trade-off between waiting longer for accuracy vs. keeping test duration bounded.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/benchmark/spamoor_defi_test.go` around lines 91 - 95, Add a short
comment explaining why the drain timeout is set to 30 seconds and how it relates
to test duration vs. accuracy (and note why it differs from cfg.WaitTimeout if
that is relevant); place this comment immediately above the creation of
drainCtx/drainCancel where waitForDrain is called (referencing drainCtx,
drainCancel, and waitForDrain) so future maintainers understand the trade-off
and rationale for this hardcoded value.

39-40: Consider adding pair_count and rebroadcast to benchConfig for consistency.

Other BENCH_* environment variables are accessed via the benchConfig struct (e.g., cfg.Throughput, cfg.MaxWallets), but these two call envInt() directly. Moving them to benchConfig would centralize configuration and allow cfg.log(t) to display all parameters.

♻️ Suggested approach

Add fields to benchConfig in config.go:

PairCount:   envInt("BENCH_PAIR_COUNT", 1),
Rebroadcast: envInt("BENCH_REBROADCAST", 0),

Then use cfg.PairCount and cfg.Rebroadcast here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/benchmark/spamoor_defi_test.go` around lines 39 - 40, Add PairCount
and Rebroadcast fields to the benchConfig struct (in config.go) and initialize
them using envInt("BENCH_PAIR_COUNT", 1) and envInt("BENCH_REBROADCAST", 0)
respectively; then replace the direct envInt(...) calls in
test/e2e/benchmark/spamoor_defi_test.go with cfg.PairCount and cfg.Rebroadcast
so all BENCH_* settings are centralized and will be included when calling
cfg.log(t).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: df5aba88-ffb3-47dd-9a60-f619f5a9a81c

📥 Commits

Reviewing files that changed from the base of the PR and between a44ae05 and e7ce827.

📒 Files selected for processing (1)
  • test/e2e/benchmark/spamoor_defi_test.go

- name: Set up Docker Buildx
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Install just
uses: extractions/setup-just@v3

Check warning (Code scanning / CodeQL), severity Medium: Unpinned tag for a non-immutable Action in workflow.

The 'Benchmarks' step uses 'extractions/setup-just' with ref 'v3', not a pinned commit hash.

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
.github/workflows/benchmark.yml (1)

103-123: New defi-benchmark job looks good and follows established patterns.

The job structure, action versions, and test invocation are consistent with the existing spamoor-benchmark and erc20-benchmark jobs.

One note: The static analysis tool flagged extractions/setup-just@v3 (line 116) as an unpinned action. This is a pre-existing pattern used by all other jobs in this workflow (lines 34, 66, 94). Consider pinning to a commit SHA for supply chain security in a follow-up PR that addresses all instances.


🔒 Optional: Pin the action to a specific commit SHA

You can find the commit SHA for the v3 tag and update all usages:

#!/bin/bash
# Get the commit SHA for extractions/setup-just v3 tag
curl -s https://api.github.com/repos/extractions/setup-just/git/refs/tags/v3 | jq -r '.object.sha'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/benchmark.yml around lines 103 - 123, Pin the unpinned
GitHub Action reference "extractions/setup-just@v3" to a specific commit SHA for
supply-chain security: replace every occurrence of extractions/setup-just@v3 in
the workflow with extractions/setup-just@<commit-sha> (use the commit SHA for
the v3 tag from the extractions/setup-just repo). Locate the action usages by
searching for the exact string "extractions/setup-just@v3" and update them all
consistently; fetch the correct SHA from the remote repo tags and use that SHA
in the workflow entries.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a6048a31-43c5-489f-b8b4-38e1be131444

📥 Commits

Reviewing files that changed from the base of the PR and between e7ce827 and eeeafe9.

📒 Files selected for processing (1)
  • .github/workflows/benchmark.yml

@chatton chatton added this pull request to the merge queue Mar 23, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 23, 2026
@chatton chatton added this pull request to the merge queue Mar 23, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 23, 2026
@chatton chatton added this pull request to the merge queue Mar 23, 2026
Merged via the queue into main with commit 005e06c Mar 23, 2026
39 checks passed
@chatton chatton deleted the cian/add-defi-simulation-benchmark branch March 23, 2026 13:22