echelon

$O(1)$ amortized priority queue for Rust. Adaptive Ladder Queue implementation optimized for heavy-tailed distributions.

Ideal for discrete-event simulation, job scheduling, and network modeling where event timestamps follow Pareto, log-normal, or other heavy-tailed distributions that cause standard $O(\log n)$ structures to become bottlenecks.

Usage

use echelon::{TimestampQueue, Timestamp};

let mut q = TimestampQueue::new();
q.push(Timestamp::new(3.0), "third");
q.push(Timestamp::new(1.0), "first");
q.push(Timestamp::new(2.0), "second");

assert_eq!(q.pop().unwrap().1, "first");
assert_eq!(q.pop().unwrap().1, "second");
assert_eq!(q.pop().unwrap().1, "third");

Algorithm

Based on the Adaptive Ladder Queue (Furfaro & Sacco, 2018), which extends the original Ladder Queue (Tang, Goh & Thng, 2005) with runtime auto-tuning via Smart Spawn.

The queue organizes events into three tiers (Top, Ladder, Bottom) and achieves $O(1)$ amortized insert and delete-min as long as the mean priority increment $\mu$ is positive and finite, regardless of variance.

Benchmarks

Benchmarked using the HOLD model (Jones 1986) across $8$ distributions from Tang (2005) and Furfaro & Sacco (2018), queue sizes from $100$ to $10^6$, comparing against BinaryHeap and BTreeMap. Measured on an Apple M4 Pro (24 GB RAM, macOS 26.3).

You can run the benchmarks yourself using cargo bench.

Throughput ($5 \times 10^5$ hold operations, lower is better)

| Queue size $n$ | LadderQueue | BinaryHeap | Speedup |
|---|---|---|---|
| $10^2$ | 20 ms | 19 ms | $0.95\times$ |
| $10^3$ | 17 ms | 24 ms | $1.4\times$ |
| $10^4$ | 17 ms | 33 ms | $1.9\times$ |
| $10^5$ | 18 ms | 49 ms | $2.7\times$ |
| $10^6$ | 18 ms | 69 ms | $\mathbf{3.8\times}$ |

Per-operation latency

| Queue size $n$ | LadderQueue $P_{50}$ | BinaryHeap $P_{50}$ |
|---|---|---|
| $10^3$ | 41 ns | 42 ns |
| $10^4$ | 41 ns | 42 ns |
| $10^5$ | 41 ns | 83 ns |

LadderQueue's $P_{50}$ holds at $41\text{ns}$ regardless of queue size: the $O(1)$ amortization is visible at the individual-operation level. BinaryHeap's median doubles by $n = 10^5$ as the $O(\log n)$ sift-down deepens.

References

  • W.T. Tang, R.S.M. Goh & I.L.J. Thng. Ladder queue: An O(1) priority queue structure for large-scale discrete event simulation. ACM Trans. Model. Comput. Simul. 15(3), pp. 175-204, 2005. doi:10.1145/1103323.1103324
  • A. Furfaro & L. Sacco. Adaptive Ladder Queue: Achieving O(1) Amortized Access Time in Practice. In Proc. ACM SIGSIM-PADS '18, pp. 101-104, 2018. doi:10.1145/3200921.3200925
  • D.W. Jones. An empirical comparison of priority-queue and event-set implementations. Commun. ACM 29(4), pp. 300-311, 1986. doi:10.1145/5684.5686
  • R. Brown. Calendar queues: a fast O(1) priority queue implementation for the simulation event set problem. Commun. ACM 31(10), pp. 1220-1227, 1988. doi:10.1145/63039.63045
  • K. Chung, J. Sang & V. Rego. A performance comparison of event calendar algorithms: An empirical approach. Software: Practice and Experience 23(10), pp. 1107-1138, 1993. doi:10.1002/spe.4380231005

License

Licensed under either of Apache License, Version 2.0 or MIT License at your option.
