While discussing database performance, an LLM told me that MongoDB with transactions runs 5-15x slower than without transactions. This seemed like a substantial claim worth investigating, so I decided to test it.
I vibe coded this benchmark to compare MongoDB's performance with and without transactions on a single machine. All replica set members run on localhost, which means essentially zero network latency between them. The tests revealed a ~2x slowdown for typical workloads, but this overhead increases dramatically with larger transaction sizes, reaching ~7x at 40,000 operations per transaction. It's important to note that production setups with replica set members on different physical machines would experience additional overhead from network latency.
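For context, the core thing being compared is whether each batch insert runs as a plain write or wrapped in a session and transaction. Below is a minimal sketch using the official Go driver's v1 API; the URI, database/collection names, and document shape are illustrative, not the benchmark's actual code:

```go
package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	// Connect to the local replica set (URI is illustrative).
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017/?replicaSet=rs0"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	coll := client.Database("bench").Collection("docs")
	docs := make([]interface{}, 1000)
	for i := range docs {
		docs[i] = bson.M{"n": i}
	}

	// Without transactions: a plain batched insert.
	if _, err := coll.InsertMany(ctx, docs); err != nil {
		log.Fatal(err)
	}

	// With transactions: the same insert wrapped in a session + transaction.
	sess, err := client.StartSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.EndSession(ctx)

	_, err = sess.WithTransaction(ctx, func(sc mongo.SessionContext) (interface{}, error) {
		return coll.InsertMany(sc, docs)
	})
	if err != nil {
		log.Fatal(err)
	}
}
```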
After running my tests, I found this EnterpriseDB report, which appears to be the source of the 5-15x figure. The report actually compares MongoDB transactions against PostgreSQL transactions, not MongoDB with versus without transactions. That explains the discrepancy.
So the TL;DR is that Mongo is indeed kinda slow, but the LLM misrepresented the number.
Testing on a machine with a total of 12 cores, running MongoDB 8.0:
| Mode | Throughput | Avg Latency | Duration |
|---|---|---|---|
| Without TX | 457,961 ops/s | 20.6 ms | 1.09s |
| With TX | 228,646 ops/s | 43.1 ms | 2.19s |
Slowdown: ~2.0x
| Mode | Throughput | Avg Latency/TX | Duration |
|---|---|---|---|
| Without TX | 239,573 ops/s | 416.4 ms | 41.74s |
| With TX | 92,579 ops/s | 1077.8 ms | 1m48.02s |
Slowdown: ~2.6x
| Mode | Throughput | Avg Latency/TX | Duration |
|---|---|---|---|
| Without TX | 197,708 ops/s | 1495.1 ms | 25.29s |
| With TX | 73,016 ops/s | 4061.7 ms | 1m8.48s |
Slowdown: ~2.7x
| Mode | Throughput | Avg Latency/TX | Duration |
|---|---|---|---|
| Without TX | 196,001 ops/s | 1992.5 ms | 25.51s |
| With TX | 27,562 ops/s | 14401.1 ms | 3m1.41s |
Slowdown: ~7.1x - Transaction overhead increases dramatically with larger transaction sizes (40K ops/tx vs 30K ops/tx).
Prerequisites: Go 1.23+ and a running Docker or Podman engine.
```bash
# Run both tests (default: fair comparison)
go test -v -run TestBatchInsert

# Run specific test
go test -v -run TestBatchInsert_NoTx
go test -v -run TestBatchInsert_Tx
```

The tests are configured via environment variables:

| Variable | Default | Description |
|---|---|---|
| PREPOPULATE_DOCS | 1000000 | Documents to prepopulate |
| INSERT_BATCHES | 500 | Number of batch operations |
| BATCH_SIZE | 1000 | Documents per batch |
| INSERT_WORKERS | 10 | Concurrent workers |
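The benchmark presumably reads these knobs from the environment with fallbacks to the defaults above; a hypothetical helper along these lines (the name `envInt` is mine, not necessarily the repo's) shows the pattern:

```go
package bench

import (
	"os"
	"strconv"
)

// envInt reads an integer knob such as BATCH_SIZE or INSERT_WORKERS from the
// environment, falling back to the given default when unset or invalid.
func envInt(name string, def int) int {
	if v := os.Getenv(name); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return def
}

// Example: batchSize := envInt("BATCH_SIZE", 1000)
```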
| Variable | Default | Description |
|---|---|---|
| MIXED_WORKLOAD | 0 | Enable mixed read+update+insert workload (set to 1); when enabled: 25% reads, 25% updates, 50% inserts |
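Under the mixed workload, each worker just has to pick its next operation in that 25/25/50 ratio; a hypothetical sketch of what that selection could look like:

```go
package bench

import "math/rand"

// pickOp chooses the next operation for a mixed-workload worker:
// 25% reads, 25% updates, 50% inserts.
func pickOp(r *rand.Rand) string {
	switch r.Intn(4) {
	case 0:
		return "read"
	case 1:
		return "update"
	default: // cases 2 and 3
		return "insert"
	}
}
```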
| Variable | Default | Description |
|---|---|---|
| SECONDARY_INDEXES | 0 | Number of secondary indexes (try 2-4) |
| BATCHES_PER_TX | 1 | Batches per transaction (try 10-100) |
| CROSS_COLLECTION | 0 | Enable cross-collection writes (set to 1) |
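BATCHES_PER_TX is the knob behind the large-transaction results above: when it is greater than 1, several insert batches are committed inside a single transaction. A hedged sketch of that grouping (function and parameter names are mine, not the benchmark's):

```go
package bench

import (
	"context"

	"go.mongodb.org/mongo-driver/mongo"
)

// insertBatchesInTx commits BATCHES_PER_TX batches of BATCH_SIZE documents
// inside one transaction. With BATCH_SIZE=1000 and BATCHES_PER_TX=40 that
// amounts to 40,000 operations per transaction.
func insertBatchesInTx(ctx context.Context, client *mongo.Client, coll *mongo.Collection, batches [][]interface{}) error {
	sess, err := client.StartSession()
	if err != nil {
		return err
	}
	defer sess.EndSession(ctx)

	_, err = sess.WithTransaction(ctx, func(sc mongo.SessionContext) (interface{}, error) {
		for _, batch := range batches {
			if _, err := coll.InsertMany(sc, batch); err != nil {
				return nil, err
			}
		}
		return nil, nil
	})
	return err
}
```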
| Variable | Default | Description |
|---|---|---|
| PREPOP_PROGRESS_SECONDS | 2 | Prepopulation progress interval (seconds) |
| TEST_PROGRESS_SECONDS | 1 | Insert progress interval (seconds) |
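These intervals only control how often progress is printed while the test runs; a hypothetical ticker-based reporter could look like this:

```go
package bench

import (
	"fmt"
	"sync/atomic"
	"time"
)

// reportProgress prints the running document count every `every` seconds
// (e.g. driven by TEST_PROGRESS_SECONDS) until done is closed.
func reportProgress(inserted *atomic.Int64, every time.Duration, done <-chan struct{}) {
	t := time.NewTicker(every)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			fmt.Printf("inserted %d docs\n", inserted.Load())
		case <-done:
			return
		}
	}
}
```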
Example runs:

```bash
go test -v -run TestBatchInsert

PREPOPULATE_DOCS=10000000 INSERT_BATCHES=10000 \
SECONDARY_INDEXES=2 BATCHES_PER_TX=10 \
go test -v -run TestBatchInsert

PREPOPULATE_DOCS=5000000 INSERT_BATCHES=5000 MIXED_WORKLOAD=1 SECONDARY_INDEXES=3 BATCHES_PER_TX=30 CROSS_COLLECTION=1 \
go test -v -run TestBatchInsert

PREPOPULATE_DOCS=5000000 INSERT_BATCHES=5000 MIXED_WORKLOAD=1 SECONDARY_INDEXES=3 BATCHES_PER_TX=40 CROSS_COLLECTION=1 \
go test -v -run TestBatchInsert
```
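Assuming the default BATCH_SIZE of 1000, the last two examples put 30,000 and 40,000 operations into each transaction, which is presumably where the 30K and 40K ops/tx runs in the results above (~2.7x and ~7.1x slowdown) come from.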