vx416/play_iouring
play_iouring

play_iouring is a small playground for comparing two Rust async ecosystems on I/O-heavy workloads.

At a high level:

  • monoio is a runtime designed around Linux io_uring and completion-based async I/O.
  • tokio is the most widely used general-purpose async runtime in Rust, built for broad portability and ecosystem support.

This repository benchmarks the two approaches in the same project so you can inspect behavior and performance differences under similar workload patterns.

The current benchmark scope is:

  • Network I/O benchmarks (TCP echo)
  • File I/O benchmarks (parallel read and write)

Why Vagrant (Ubuntu)

These benchmarks must be run on Linux because io_uring is a Linux kernel interface; monoio's io_uring driver is not available on other platforms.

The repository includes a Vagrantfile that provisions an Ubuntu VM and mounts this repo to:

  • /home/vagrant/workspace

Using Vagrant gives a stable Linux environment for fair monoio vs tokio comparisons.

Quick Start (Vagrant)

From the repository root on your host machine:

vagrant up
vagrant ssh
cd /home/vagrant/workspace
source "$HOME/.cargo/env"   # put the Rust toolchain on PATH
ulimit -n 65536             # raise the open-file limit for the high-concurrency benches

Build first:

cargo build --release

Run Benchmarks

Run all benchmark targets:

cargo bench

Run network benchmarks only:

cargo bench --bench tcp_echo_server_monoio
cargo bench --bench tcp_echo_server_tokio

Run file I/O benchmarks only:

cargo bench --bench file_io_monoio
cargo bench --bench file_io_tokio

Criterion will print timing summaries in the terminal and generate HTML reports under target/criterion.

Benchmark Logic

Network: TCP Echo

  • Starts a server.
  • Creates multiple clients.
  • Each client sends data and waits for echoed bytes.
  • Measures total end-to-end time for the benchmark scenario.

File I/O

  • Creates a temporary directory.
  • Uses N files, each of a fixed size (currently 64 KiB in the bench code).
  • Write benchmark: concurrently writes all files.
  • Read benchmark: pre-populates files, then concurrently reads all files.
  • Measures total batch read/write time.

Benchmark Results

Results collected on Ubuntu (Vagrant VM, ARM64, 2 vCPU, 4 GiB RAM).

File I/O (100 files, 64 KiB each)

Benchmark            monoio     tokio      Δ
Write (100 files)    ~1.32 ms   ~1.95 ms   monoio ~1.5x faster
Read (100 files)     ~693 µs    ~1.87 ms   monoio ~2.7x faster

TCP Echo

Benchmark      monoio     tokio      Δ
10 clients     ~4.12 ms   ~3.98 ms   ~equal
100 clients    ~26.6 ms   ~27.1 ms   ~equal
1000 clients   ~292 ms    ~330 ms    monoio slightly faster

Why monoio wins on File I/O

tokio does not have true async file I/O — tokio::fs internally dispatches file operations to a blocking thread pool (spawn_blocking), which means each read/write carries thread pool scheduling overhead and additional latency from worker thread context switches.

monoio uses io_uring for file operations natively. Reads and writes are submitted directly as entries in the io_uring submission queue and completed asynchronously by the kernel via the completion queue — no thread pool, no extra context switches.

Why Network I/O is roughly equal

For TCP sockets, both runtimes are competitive because readiness-based I/O already works well for the network path: tokio's epoll-based notification is well optimised for this workload. A gap only appears at higher concurrency (1000 clients), where monoio's batched syscall submission starts to amortise per-operation overhead.

Notes for Fair Comparison

  • Run benchmarks in the same environment (same VM, same CPU allocation).
  • Compare monoio and tokio with identical benchmark parameters.
  • Avoid running extra background workloads inside the VM during measurements.
