santh-bufpool

Fast, lock-free buffer recycling for Rust with fixed size classes and thread-local caching.

Overview

santh-bufpool provides a typed buffer pool that eliminates allocator churn in hot paths. It pre-allocates buffers in four size classes—4 KiB, 64 KiB, 256 KiB, and 1 MiB—and services checkout requests via lock-free queues (crossbeam-queue). When a class is exhausted, the pool falls back to a fresh heap allocation rather than blocking.

Buffers are automatically zeroed before reuse, making the pool suitable for security-sensitive workloads. A small thread-local cache (up to 4 buffers per thread) further reduces cross-thread queue contention.

Features

  • Lock-free checkout/return — one ArrayQueue per size class, no mutexes.
  • Zero-allocation fast path — thread-local cache bypasses the global queue entirely.
  • Automatic zeroing — every buffer is wiped before the next checkout.
  • NUMA-aware placement — optional best-effort allocation on a specific NUMA node (requires the numa feature and kernelkit).
  • Immutable sharing — freeze a PoolBuffer into a Send + Sync FrozenBuffer.

Quick Start

```rust
use santh_bufpool::{BufferPool, PoolConfig};

let pool = BufferPool::new(PoolConfig {
    four_kib_count: 64,
    sixty_four_kib_count: 8,
    two_fifty_six_kib_count: 2,
    one_mib_count: 0,
    numa_node: None,
});

let mut buf = pool.checkout(1024).unwrap();
// Copy into a length-matched sub-slice; copy_from_slice panics if the
// destination and source lengths differ.
buf[..11].copy_from_slice(b"hello world");
assert_eq!(&buf[..11], b"hello world");
// buffer returns to the pool (or TLS cache) on drop
```

Size Classes

| Requested bytes    | Allocated capacity               |
|--------------------|----------------------------------|
| 1..=4096           | 4 KiB                            |
| 4097..=65536       | 64 KiB                           |
| 65537..=262144     | 256 KiB                          |
| 262145..=1048576   | 1 MiB                            |
| > 1048576          | exact requested size (fallback)  |
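The mapping above can be sketched as a simple range match. This is an illustrative standalone function (`size_class` is a hypothetical name, not necessarily the crate's API):

```rust
/// Map a requested byte count to the capacity of the size class that
/// would service it; oversized requests fall back to an exact-size
/// heap allocation, as described in the table above.
fn size_class(requested: usize) -> usize {
    const KIB: usize = 1024;
    match requested {
        0..=4096 => 4 * KIB,
        4097..=65_536 => 64 * KIB,
        65_537..=262_144 => 256 * KIB,
        262_145..=1_048_576 => 1024 * KIB,
        n => n, // > 1 MiB: exact requested size (fallback)
    }
}

fn main() {
    assert_eq!(size_class(1024), 4096);
    assert_eq!(size_class(70_000), 262_144);
    assert_eq!(size_class(2_000_000), 2_000_000);
}
```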

Thread-Local Caching

Each thread keeps a small private stash of recently returned buffers. When you check out a buffer, the pool first checks the thread-local cache before hitting the shared queue. This means repeated checkout/return cycles on the same thread are typically contention-free.

The cache is drained automatically when the thread exits, returning buffers to their original pools.
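The general pattern is a per-thread stash consulted before the shared queue. A minimal sketch using only the standard library (all names here are hypothetical; the crate's internals may differ, and zeroing is elided):

```rust
use std::cell::RefCell;

const TLS_CAP: usize = 4; // up to 4 cached buffers per thread

thread_local! {
    static STASH: RefCell<Vec<Vec<u8>>> = RefCell::new(Vec::new());
}

/// Check the thread-local stash first; only fall back to a fresh
/// allocation (standing in for the shared queue) on a miss.
fn checkout(cap: usize) -> Vec<u8> {
    STASH.with(|s| {
        let mut s = s.borrow_mut();
        if let Some(i) = s.iter().position(|b| b.capacity() >= cap) {
            return s.swap_remove(i); // fast path: no allocation, no contention
        }
        Vec::with_capacity(cap) // miss: shared queue / heap in the real pool
    })
}

/// Return a buffer to the thread-local stash if there is room.
fn give_back(mut buf: Vec<u8>) {
    buf.clear();
    STASH.with(|s| {
        let mut s = s.borrow_mut();
        if s.len() < TLS_CAP {
            s.push(buf); // cached for this thread
        } // else: the real pool would push to the global queue
    });
}

fn main() {
    let b = checkout(4096);
    let ptr = b.as_ptr() as usize;
    give_back(b);
    // Same-thread checkout reuses the stashed allocation.
    assert_eq!(checkout(4096).as_ptr() as usize, ptr);
}
```

This is why repeated checkout/return cycles on one thread avoid the shared queue entirely: the second checkout hits the stash instead of allocating.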

NUMA Placement

Enable the numa feature to request best-effort allocation on a specific NUMA node:

```toml
[dependencies]
santh-bufpool = { version = "0.1", features = ["numa"] }
```

```rust
use santh_bufpool::{BufferPool, PoolConfig};

let pool = BufferPool::new(PoolConfig {
    four_kib_count: 32,
    numa_node: Some(0),
    ..PoolConfig::default()
});
```

If the node-specific allocation fails, the pool transparently falls back to the default heap allocator.
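The fallback logic amounts to trying the node-specific path and silently degrading to the default allocator. A hedged sketch, where `alloc_on_node` is a hypothetical stand-in for the platform NUMA call (e.g. something like `numa_alloc_onnode` on Linux), here stubbed to always fail so the fallback path runs:

```rust
/// Stand-in for a best-effort node-local allocation. A real
/// implementation would call into the OS; this stub always fails.
fn alloc_on_node(_node: u32, _cap: usize) -> Option<Vec<u8>> {
    None
}

/// Try node-local placement first, then transparently fall back to
/// the default heap allocator, mirroring the behavior described above.
fn alloc_buffer(numa_node: Option<u32>, cap: usize) -> Vec<u8> {
    numa_node
        .and_then(|n| alloc_on_node(n, cap))
        .unwrap_or_else(|| Vec::with_capacity(cap))
}

fn main() {
    // Node-specific allocation failed, but the caller still gets a buffer.
    assert!(alloc_buffer(Some(0), 4096).capacity() >= 4096);
}
```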

Safety & Security

  • Buffers are zeroed on every return path (PoolBuffer::drop, FrozenBuffer::drop, and TLS cache drain).
  • The pool rejects requests larger than isize::MAX to avoid undefined behavior in slice construction.
  • PoolBuffer exposes DerefMut only while uniquely owned; freeze() converts it into an immutable FrozenBuffer that can cross thread boundaries safely.
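The zero-on-return invariant can be sketched as a wrapper that wipes its contents in `Drop` (names hypothetical; the crate's actual drop paths also cover the TLS cache drain):

```rust
/// Overwrite every byte of a buffer before it is handed back.
/// A hardened implementation would use volatile writes (or a crate
/// like `zeroize`) so the compiler cannot elide the wipe.
fn wipe(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        *b = 0;
    }
}

struct ZeroingBuf {
    data: Vec<u8>,
}

impl Drop for ZeroingBuf {
    fn drop(&mut self) {
        wipe(&mut self.data); // runs on every return path
    }
}

fn main() {
    let zb = ZeroingBuf { data: vec![0xAA; 16] };
    drop(zb); // contents are zeroed before the memory is released
}
```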

License

MIT
