Lock-free SPSC channels for Nim with production-grade performance validation
nimsync v1.0.0 is production-ready for SPSC channels, with comprehensive benchmarking following industry standards (Tokio, Go, LMAX Disruptor, Redis). Performance: 615M ops/sec peak throughput, 31ns P99 latency, and stable behavior under burst loads. This is verified, tested, real code.
- High throughput: 615M ops/sec (raw), 512K ops/sec (async) - see all 7 benchmarks
- Production-validated: comprehensive benchmark suite (throughput, latency, burst, stress, sustained)
- Industry-standard testing: follows Tokio, Go, Rust Criterion, and LMAX Disruptor methodologies
- Lock-free ring buffer with atomic operations
- Zero GC pressure with ORC memory management
- Cache-line aligned (64 bytes) to prevent false sharing
- Power-of-2 sizing for efficient indexing
- Non-blocking `trySend`/`tryReceive`
- Async `send`/`recv` wrappers for Chronos
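The power-of-2 sizing exists so that ring-buffer indexing can use a bitmask instead of a modulo. The following is a minimal single-threaded sketch of that idea only; `RingBuf` and its fields are illustrative and are not nimsync's actual internals (the real implementation adds atomics and cache-line padding):

```nim
# Minimal single-threaded sketch of the power-of-2 ring buffer idea.
# RingBuf is illustrative only -- not nimsync's real internal type.
type RingBuf[T] = object
  data: seq[T]
  mask: int          # size - 1; valid because size is a power of 2
  head, tail: int    # monotonically increasing counters

proc initRingBuf[T](size: int): RingBuf[T] =
  assert size > 0 and (size and (size - 1)) == 0, "size must be a power of 2"
  RingBuf[T](data: newSeq[T](size), mask: size - 1)

proc tryPush[T](rb: var RingBuf[T], value: T): bool =
  if rb.tail - rb.head == rb.data.len:
    return false                        # full
  rb.data[rb.tail and rb.mask] = value  # index via bitmask, no modulo
  inc rb.tail
  true

proc tryPop[T](rb: var RingBuf[T], value: var T): bool =
  if rb.head == rb.tail:
    return false                        # empty
  value = rb.data[rb.head and rb.mask]
  inc rb.head
  true
```

Because `size` is a power of 2, `counter and (size - 1)` is equivalent to `counter mod size`, so wraparound costs a single AND instruction.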
- Nim 2.0.0+ (required)
- Chronos 4.0.0+
```bash
nimble install nimsync
```

Or install from source:

```bash
git clone https://github.com/codenimja/nimsync.git
cd nimsync
nimble install
```

Basic usage:

```nim
import nimsync

# Create SPSC channel with 16 slots
let chan = newChannel[int](16, ChannelMode.SPSC)

# Non-blocking operations
if chan.trySend(42):
  echo "Sent successfully"

var value: int
if chan.tryReceive(value):
  echo "Received: ", value
```

Async usage with Chronos:

```nim
import nimsync
import chronos

proc producer(ch: Channel[int]) {.async.} =
  for i in 1..10:
    await ch.send(i)

proc consumer(ch: Channel[int]) {.async.} =
  for i in 1..10:
    let value = await ch.recv()
    echo "Received: ", value

proc main() {.async.} =
  let ch = newChannel[int](16, ChannelMode.SPSC)
  await allFutures([producer(ch), consumer(ch)])

waitFor main()
```

```nim
proc newChannel[T](size: int, mode: ChannelMode): Channel[T]
```

Creates a channel with the specified size (rounded up to the next power of 2). Only `ChannelMode.SPSC` is implemented.
```nim
proc trySend[T](channel: Channel[T], value: T): bool
proc tryReceive[T](channel: Channel[T], value: var T): bool
```

Returns `true` on success, `false` if the channel is full/empty. Use these for maximum performance (sub-100ns operations).
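When a full or empty channel is expected to drain within a few cycles, the non-blocking calls can be combined with a bounded spin loop. This is a hedged sketch built on nimsync's documented `trySend`; `spinSend` and its retry budget of 1000 are illustrative choices, not part of the library's API:

```nim
import std/atomics  # for cpuRelax
import nimsync

proc spinSend[T](chan: Channel[T], value: T, maxSpins = 1000): bool =
  ## Illustrative helper: retry trySend with a bounded busy-wait.
  ## Returns false if the channel stayed full for the whole budget.
  for _ in 0 ..< maxSpins:
    if chan.trySend(value):
      return true
    cpuRelax()  # hint to the CPU that we are in a spin loop
  false
```

Bounding the spin keeps the caller from hanging if the consumer stalls; past the budget you can fall back to the async `send` or drop the item, depending on the workload.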
```nim
proc send[T](channel: Channel[T], value: T): Future[void] {.async.}
proc recv[T](channel: Channel[T]): Future[T] {.async.}
```

Async wrappers using Chronos. Note: these poll internally, starting at a 1ms interval (see Limitations).
```nim
proc isEmpty[T](channel: Channel[T]): bool
proc isFull[T](channel: Channel[T]): bool
```

nimsync includes 7 official benchmarks following industry best practices:
| Benchmark | Metric | Result | Industry Reference |
|---|---|---|---|
| Throughput | Peak ops/sec | 615M | Go channels benchmarking |
| Latency | p50/p99/p99.9 | 30ns/31ns/31ns | Tokio/Cassandra percentiles |
| Burst Load | Stability | 300M ops/sec, 21% variance | Redis burst testing |
| Buffer Sizing | Optimal size | 2048 slots, 559M ops/sec | LMAX Disruptor |
| Stress Test | Contention | 0% at 500K ops | JMeter/Gatling |
| Sustained | Long-duration | Stable over 10s | Cassandra/ScyllaDB |
| Async | Overhead | 512K ops/sec | Standard async benchmarking |
```bash
# Run complete benchmark suite (~18 seconds)
./tests/performance/run_all_benchmarks.sh

# Run individual benchmarks
nim c -d:danger --opt:speed --mm:orc tests/performance/benchmark_latency.nim
./tests/performance/benchmark_latency
```

Full Documentation: See tests/performance/README.md for detailed explanations of each benchmark.
### Third-Party Verification
Want to verify these claims yourself?
- **Reproduction Guide**: See [BENCHMARKS.md](BENCHMARKS.md) and [tests/performance/README.md](tests/performance/README.md)
- **CI Benchmarks**: Automatic benchmarks on every commit → [GitHub Actions](https://github.com/codenimja/nimsync/actions/workflows/benchmark.yml)
- **Expected Range**: 20M-600M ops/sec depending on CPU, benchmark type, and system load
## Limitations
1. **SPSC Only** - Single Producer Single Consumer only
- Each channel: ONE sender, ONE receiver
- MPSC/SPMC/MPMC will raise `ValueError`
2. **No close()** - Channels don't have close operation
- Use sentinel values for shutdown signaling
3. **Power-of-2 sizing** - Size rounded up
- `newChannel[int](10, SPSC)` creates 16-slot channel
4. **Async polling** - `send`/`recv` use exponential backoff polling
- Starts at 1ms, backs off to 100ms max
   - Use `trySend`/`tryReceive` on latency-critical paths to avoid polling delays
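Since channels have no `close()`, shutdown is signaled in-band. A hedged sketch of the sentinel-value pattern mentioned above, assuming nimsync's async API; the sentinel `-1` is an arbitrary choice and must be a value that never occurs as real data:

```nim
import nimsync
import chronos

const Stop = -1  # sentinel; pick a value that never appears as real data

proc producer(ch: Channel[int]) {.async.} =
  for i in 1..5:
    await ch.send(i)
  await ch.send(Stop)       # in-band shutdown signal

proc consumer(ch: Channel[int]) {.async.} =
  while true:
    let value = await ch.recv()
    if value == Stop:
      break                 # producer is done; exit cleanly
    echo "got ", value
```

For payload types with no spare value, a small wrapper object (e.g. an `Option[T]`-style variant) can carry the shutdown flag alongside the data.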
## Development
### Testing
```bash
nim c -r tests/unit/test_channel.nim   # Basic tests
nim c -r tests/unit/test_basic.nim     # Version check
nimble bench                           # Run all benchmarks
nimble fmt                             # Format code
nimble lint                            # Static analysis
nimble ci                              # Full CI checks
```

### Experimental Features

This repository contains experimental implementations of:
- TaskGroups (structured concurrency)
- Actors (with supervision)
- Streams (backpressure-aware)
- Work-stealing scheduler
- NUMA optimizations
These are NOT production-ready and not exported in the public API. They exist as research code for future releases. See internal modules in src/nimsync/ if interested.
- ✅ v1.0.0: Production SPSC channels (DONE!)
- v1.1.0: MPSC channels + TaskGroup fixes
- v1.2.0: Production-ready Streams
- v2.0.0: Full async runtime with actors
See GitHub Issues for experimental features and known limitations:

- Async wrappers use polling - exponential backoff (1ms-100ms); use `trySend`/`tryReceive` on latency-critical paths
- TaskGroup has bugs - nested async macros fail (not exported) - see issue template
- MPSC not implemented - multi-producer channels needed for actors - see issue template
- NUMA untested - cross-socket performance unknown - see issue template

These are documented limitations, not intentional behavior. Contributions to fix them are welcome!
Contributions welcome! Priority areas:
- Fix TaskGroup nested async bug (blocking v0.3.0) - Details
- Implement MPSC channels (enables actors) - Details
- Validate NUMA performance - Details
- Cross-platform support (macOS/Windows)
See issue templates for detailed specifications and acceptance criteria.
MIT License - see LICENSE for details.
Status: Production-ready SPSC channels with comprehensive validation - 615M ops/sec peak, 31ns P99 latency, verified by the 7-benchmark suite. Other features (TaskGroup, MPSC, actors) are experimental - see GitHub Issues for contributor opportunities.
We document performance honestly. We benchmark rigorously. We're transparent about limitations.
Open source async runtime built with Nim. Contributions welcome - see issues for high-impact areas.