A standalone Rust CLI that takes a CSV of transactions and shows how much volume bilateral + multilateral netting would compress at different settlement-window sizes. No server, no DB, no network — it runs locally against your dataset.
Built so external researchers can validate the Rescontre netting model against real payment traffic without standing up the clearinghouse.
Standard cargo. Three small dependencies (csv, serde, clap).
Cold build is ~5s.
```shell
git clone https://github.com/Rescontre/netsim.git
cd netsim
cargo build --release
```
CSV with a header row, four columns:

```csv
timestamp,sender,recipient,amount_usd
1714000000,0xabc...,0xdef...,0.50
1714000001,0xdef...,0x123...,0.30
```

- `timestamp` — unix seconds (u64). Used for sorting; time-based windowing isn't in v1.
- `sender` / `recipient` — opaque strings. Wallet addresses, agent IDs, anything that uniquely identifies a counterparty.
- `amount_usd` — f64 USD. Internally converted to integer microdollars (amount × 1_000_000); all comparisons are integer.

Rows that round to zero microdollars (dust) and self-transfers (sender == recipient) are dropped with a stderr note. Malformed rows print a warning and continue.
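The ingest rules above can be sketched roughly like this (illustrative names such as `Txn` and `ingest`, not netsim's actual internals):

```rust
#[derive(Debug)]
struct Txn {
    timestamp: u64,
    sender: String,
    recipient: String,
    amount_micro: i64, // USD × 1_000_000
}

fn ingest(timestamp: u64, sender: &str, recipient: &str, amount_usd: f64) -> Option<Txn> {
    // Round to the nearest microdollar; all later comparisons are integer.
    let amount_micro = (amount_usd * 1_000_000.0).round() as i64;
    if amount_micro == 0 {
        eprintln!("dropping dust row at {timestamp}");
        return None;
    }
    if sender == recipient {
        eprintln!("dropping self-transfer at {timestamp}");
        return None;
    }
    Some(Txn {
        timestamp,
        sender: sender.to_string(),
        recipient: recipient.to_string(),
        amount_micro,
    })
}

fn main() {
    assert_eq!(ingest(0, "a", "b", 0.50).unwrap().amount_micro, 500_000);
    assert!(ingest(1, "a", "a", 1.0).is_none()); // self-transfer
    assert!(ingest(2, "a", "b", 0.0000001).is_none()); // rounds to 0 microdollars
    println!("ok");
}
```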
```shell
./target/release/netsim --input txns.csv --windows 1,10,50,200,1000,all
```
Arguments:
- `--input` / `-i` — path to your CSV (required).
- `--windows` / `-w` — comma-separated window sizes. Each window is a number of consecutive transactions per batch. `all` or `0` means one batch over the entire dataset. Default: `1,10,50,200,1000,all`.
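As a rough sketch of how a `--windows` spec could be parsed (hypothetical `parse_windows` helper; the real CLI uses clap), with `None` standing for the full-dataset `all`/`0` batch:

```rust
fn parse_windows(spec: &str) -> Result<Vec<Option<usize>>, String> {
    spec.split(',')
        .map(|tok| match tok.trim() {
            // `all` and `0` both mean one batch over everything.
            "all" | "0" => Ok(None),
            t => t
                .parse::<usize>()
                .map(Some)
                .map_err(|_| format!("bad window size: {t:?}")),
        })
        .collect()
}

fn main() {
    assert_eq!(
        parse_windows("1,10,all").unwrap(),
        vec![Some(1), Some(10), None]
    );
    assert!(parse_windows("1,x").is_err());
    println!("ok");
}
```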
There's a bundled fixture if you want to verify the build before loading your own data:
```shell
./target/release/netsim --input sample_data/txns.csv --windows 1,3,5,all
```
Three output blocks plus an invariant check. From the bundled 9-txn
sample (5 parties, two overlapping cycles
agent_a → service_b → compute_c → agent_a and
compute_c → data_d → storage_e → compute_c):
```text
================================================================
NETSIM — Offline Netting Simulator
================================================================
Input:          sample_data/txns.csv
Transactions:   9
Unique parties: 5
Gross volume:   $1.94
Time span:      1714000000 → 1714000008 (0.0 hours)
Windows:        [1, 3, 5, all]
================================================================

window   txns   gross($)   bi_net($)   ml_net($)   cycles   ml_compress
-----------------------------------------------------------------------
1        9      1.94       1.94        1.94        0        0.0%
3        9      1.94       1.76        1.16        1        40.2%
5        9      1.94       1.76        1.16        1        40.2%
all      9      1.94       1.76        0.92        2        52.6%

--- Compression / Exposure tradeoff ---
window   compression   max_exposure   exposure/gross
----------------------------------------------------
1        0.0%          0.50           25.8%
3        40.2%         1.00           51.5%
5        40.2%         1.55           79.9%
all      52.6%         1.94           100.0%

--- Invariant checks (full-batch) ---
Conservation: OK
Monotonicity (ml_net <= gross): OK (0.92 <= 1.94)
```
- `window` — chunk size. `1` means each transaction is its own batch (no netting possible). `all` means one batch over everything.
- `gross($)` — total USD changing hands before any netting.
- `bi_net($)` — what's left after bilateral netting (offset A→B against B→A per pair). The sum of the surviving directional edges across all batches in this window.
- `ml_net($)` — what's left after multilateral netting: bilateral first, then greedy cycle cancellation on the resulting obligation graph until it's acyclic.
- `cycles` — total cycles cancelled across all batches.
- `ml_compress` — `1 − ml_net / gross`. The headline number.
- `max_exposure` — peak gross volume in any single chunk. This is your worst-case unsettled exposure if a counterparty defaults mid-window.
- `exposure/gross` — `max_exposure / gross`. Larger windows trade compression for exposure; this column quantifies that.
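As a sanity check on the headline column, `ml_compress` for the full-batch sample row works out like this:

```rust
// ml_compress = 1 − ml_net / gross, on the integer microdollar amounts.
fn main() {
    let gross: i64 = 1_940_000; // $1.94
    let ml_net: i64 = 920_000;  // $0.92 (full batch)
    let compress = 1.0 - ml_net as f64 / gross as f64;
    assert!((compress - 0.526).abs() < 0.001);
    println!("{:.1}%", compress * 100.0); // prints "52.6%"
}
```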
Bilateral. For every unordered pair {A, B}, we sum A→B and
B→A traffic and emit a single edge in the surviving direction with
the difference. Pairs that cancel exactly leave no edge.
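A minimal sketch of the bilateral step (illustrative `bilateral_net` helper with amounts in microdollars, not the actual netsim code):

```rust
use std::collections::BTreeMap;

type Edge = ((String, String), i64); // ((sender, recipient), microdollars)

fn bilateral_net(flows: &[Edge]) -> BTreeMap<(String, String), i64> {
    // Accumulate signed flow per unordered pair, keyed lexicographically:
    // positive means low->high, negative means high->low.
    let mut net: BTreeMap<(String, String), i64> = BTreeMap::new();
    for ((a, b), amt) in flows.iter() {
        if a < b {
            *net.entry((a.clone(), b.clone())).or_insert(0) += *amt;
        } else {
            *net.entry((b.clone(), a.clone())).or_insert(0) -= *amt;
        }
    }
    // Keep only the surviving direction; exact offsets leave no edge.
    net.into_iter()
        .filter(|(_, v)| *v != 0)
        .map(|((a, b), v)| if v > 0 { ((a, b), v) } else { ((b, a), -v) })
        .collect()
}

fn main() {
    let flows: Vec<Edge> = vec![
        (("a".into(), "b".into()), 1_000_000), // $1.00 a -> b
        (("b".into(), "a".into()), 300_000),   // $0.30 back
        (("a".into(), "c".into()), 500_000),   // exact offset pair:
        (("c".into(), "a".into()), 500_000),   // cancels to nothing
    ];
    let net = bilateral_net(&flows);
    assert_eq!(net.get(&("a".to_string(), "b".to_string())), Some(&700_000));
    assert_eq!(net.len(), 1);
    println!("ok");
}
```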
Multilateral (cycle cancellation). On the directed weighted graph from bilateral netting:
- Find any cycle via DFS.
- Find the minimum edge weight along the cycle.
- Subtract that weight from every edge in the cycle.
- Repeat until no cycles remain.
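The cancellation loop might look like this minimal sketch (hypothetical `find_cycle` / `cancel_cycles` helpers over a sorted adjacency map; weights in microdollars):

```rust
use std::collections::BTreeMap;

type Graph = BTreeMap<String, BTreeMap<String, i64>>; // sender -> recipient -> microdollars

// Find any directed cycle via DFS; BTreeMap gives sorted, deterministic order.
fn find_cycle(g: &Graph) -> Option<Vec<String>> {
    fn dfs(g: &Graph, node: &str, path: &mut Vec<String>) -> Option<Vec<String>> {
        if let Some(pos) = path.iter().position(|n| n == node) {
            return Some(path[pos..].to_vec()); // cycle: path[pos..] back to node
        }
        path.push(node.to_string());
        if let Some(nexts) = g.get(node) {
            for next in nexts.keys() {
                if let Some(c) = dfs(g, next, path) {
                    return Some(c);
                }
            }
        }
        path.pop();
        None
    }
    for start in g.keys() {
        if let Some(c) = dfs(g, start, &mut Vec::new()) {
            return Some(c);
        }
    }
    None
}

// Subtract each cycle's minimum edge weight from every edge on it,
// dropping zeroed edges, until the graph is acyclic.
fn cancel_cycles(g: &mut Graph) -> usize {
    let mut cancelled = 0;
    while let Some(cycle) = find_cycle(g) {
        let n = cycle.len();
        let min = (0..n).map(|i| g[&cycle[i]][&cycle[(i + 1) % n]]).min().unwrap();
        for i in 0..n {
            let (from, to) = (&cycle[i], &cycle[(i + 1) % n]);
            let edges = g.get_mut(from).unwrap();
            let w = edges.get_mut(to).unwrap();
            *w -= min;
            if *w == 0 {
                edges.remove(to);
            }
        }
        cancelled += 1;
    }
    cancelled
}

fn main() {
    let mut g: Graph = BTreeMap::new();
    g.entry("a".into()).or_default().insert("b".into(), 500_000);
    g.entry("b".into()).or_default().insert("c".into(), 300_000);
    g.entry("c".into()).or_default().insert("a".into(), 400_000);
    assert_eq!(cancel_cycles(&mut g), 1); // one cycle, min edge $0.30
    assert_eq!(g["a"]["b"], 200_000);
    assert_eq!(g["c"]["a"], 100_000);
    assert!(g["b"].get("c").is_none()); // min edge fully cancelled
    println!("ok");
}
```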
This is the same algorithm running in the production Rescontre clearinghouse.
Every party's net position — (total received) − (total sent) —
must be identical before and after netting. A netting algorithm
that changes any party's net position has stolen from them.
netsim runs this check on the full-batch result and prints
`Conservation: OK` (or `CONSERVATION FAIL: {party} before={x} after={y}`
and exits non-zero). It's the existential correctness check; it's the
first thing to look at if anything ever looks wrong.
It also prints Monotonicity (ml_net <= gross): OK — multilateral
net is bounded above by gross volume by construction; printing it
makes a mistake in the algorithm visible immediately.
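The conservation check amounts to comparing per-party net positions before and after netting; a sketch (hypothetical `net_positions` helper, amounts in microdollars):

```rust
use std::collections::BTreeMap;

// Net position per party: (total received) − (total sent).
fn net_positions(edges: &[((String, String), i64)]) -> BTreeMap<String, i64> {
    let mut pos = BTreeMap::new();
    for ((sender, recipient), amt) in edges.iter() {
        *pos.entry(sender.clone()).or_insert(0) -= *amt;
        *pos.entry(recipient.clone()).or_insert(0) += *amt;
    }
    pos
}

fn main() {
    let gross = vec![
        (("a".to_string(), "b".to_string()), 1_000_000i64),
        (("b".to_string(), "a".to_string()), 300_000),
    ];
    // Bilateral netting collapses the pair to one $0.70 edge a -> b.
    let netted = vec![(("a".to_string(), "b".to_string()), 700_000i64)];
    // Same net position for every party before and after: conservation holds.
    assert_eq!(net_positions(&gross), net_positions(&netted));
    println!("Conservation: OK");
}
```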
- Integer microdollars internally. $1.00 = `1_000_000`. Floats are used only for input parsing and display; no float comparison ever decides whether two amounts are equal. Same discipline the production ledger uses.
- Sorted DFS iteration. The cycle finder iterates outer keys and per-node neighbor lists in alphabetical order. Greedy cycle cancellation is order-sensitive (different cycle choices produce different residual flows, all conservation-equivalent), and Rust's `HashMap::new()` uses a per-instance random seed — two HashMaps with identical content in the same process can iterate differently. Sorting makes results reproducible across runs.
- No external state. netsim doesn't hit the network, doesn't open a database, and doesn't read environment variables. Whatever it computes is determined entirely by the CSV you feed it.
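A tiny illustration of the iteration-order point (standard library only):

```rust
use std::collections::{BTreeMap, HashMap};

fn main() {
    let mut hashed: HashMap<&str, i64> = HashMap::new();
    let mut sorted: BTreeMap<&str, i64> = BTreeMap::new();
    for (k, v) in [("c", 3), ("a", 1), ("b", 2)] {
        hashed.insert(k, v);
        sorted.insert(k, v);
    }
    // HashMap iteration order is unspecified; one deterministic way to
    // walk it is to collect and sort the keys first.
    let mut keys: Vec<_> = hashed.keys().copied().collect();
    keys.sort();
    assert_eq!(keys, vec!["a", "b", "c"]);
    // BTreeMap gives the same alphabetical order for free.
    assert_eq!(sorted.keys().copied().collect::<Vec<_>>(), vec!["a", "b", "c"]);
    println!("ok");
}
```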
- Export your transactions to CSV with the four columns above. `timestamp` can be any monotonic key — actual second precision isn't needed for tx-count windowing.
- Run with several window sizes that span your settlement cadence. For per-call settlement traffic, start with `--windows 1,10,100,1000,10000,all` to see how compression scales with batching.
- Look at the conservation line first. If it ever says FAIL, that's a real bug — please open an issue with the dataset that triggered it (or a minimized repro if the data is sensitive).
- Compare `ml_compress` against your current settlement strategy. The interesting number isn't the headline compression; it's the compression/exposure tradeoff — how much volume you save vs. how much default risk you take on by waiting.