slick_object_pool

A high-performance, lock-free object pool for C++20 with multi-threading support. Designed for real-time systems, game engines, high-frequency trading, and any application requiring predictable, low-latency object allocation.
- Lock-free multi-producer multi-consumer (MPMC) - Zero mutex overhead, true concurrent access
- Cache-line aligned - Hardware-aware alignment eliminates false sharing
- O(1) allocation/deallocation - Constant-time operations
- Power-of-2 ring buffer - Efficient bitwise indexing, no modulo operations
- Predictable latency - No garbage collection pauses or lock contention
- Header-only - Single file integration, no build dependencies
- C++20 compliant - Modern C++ with compile-time safety guarantees
- Type-safe - Static assertions ensure compatible types
- Cross-platform - Windows, Linux, macOS, and Unix-like systems
- Real-time systems (robotics, industrial control)
- Game engines (entity management, particle systems)
- High-frequency trading systems
- Network servers (connection pooling, buffer management)
- Any scenario requiring predictable allocation performance
#include <slick/object_pool.h>
struct MyObject {
int id;
double value;
};
int main() {
// Create pool with 1024 objects (must be power of 2)
slick::ObjectPool<MyObject> pool(1024);
// Allocate object from pool
MyObject* obj = pool.allocate();
obj->id = 42;
obj->value = 3.14;
// Return object to pool
pool.free(obj);
return 0;
}

Simply copy include/slick/object_pool.h to your project:
# Clone the repository
git clone https://github.com/SlickQuant/slick_object_pool.git
# Copy header to your project
cp slick_object_pool/include/slick/object_pool.h your_project/include/

Option 1: FetchContent (Recommended)
include(FetchContent)
set(BUILD_SLICK_OBJECTPOOL_TESTS OFF CACHE BOOL "" FORCE)
FetchContent_Declare(
slick_object_pool
GIT_REPOSITORY https://github.com/SlickQuant/slick_object_pool.git
GIT_TAG main # or specific version tag
)
FetchContent_MakeAvailable(slick_object_pool)
target_link_libraries(your_target PRIVATE slick_object_pool)

Option 2: Add as Subdirectory
add_subdirectory(external/slick_object_pool)
target_link_libraries(your_target PRIVATE slick_object_pool)

Option 3: find_package (if installed)
find_package(slick_object_pool REQUIRED)
target_link_libraries(your_target PRIVATE slick_object_pool)

#include <slick/object_pool.h>
#include <cstdint>
#include <cstring>
#include <iostream>
struct Message {
uint64_t id;
char data[256];
};
int main() {
// Pool size must be power of 2
slick::ObjectPool<Message> pool(512);
// Allocate from pool
Message* msg = pool.allocate();
msg->id = 1;
std::strcpy(msg->data, "Hello, World!");
// Use the object...
std::cout << "Message: " << msg->data << std::endl;
// Return to pool when done
pool.free(msg);
return 0;
}

#include <slick/object_pool.h>
#include <array>
#include <functional>
#include <thread>
#include <vector>
struct WorkItem {
int task_id;
std::array<double, 64> data;
};

// Placeholder for the application's actual processing logic
void process_work(WorkItem& item) {
    item.data.fill(static_cast<double>(item.task_id));
}

void worker_thread(slick::ObjectPool<WorkItem>& pool, int thread_id) {
for (int i = 0; i < 10000; ++i) {
// Allocate from pool (lock-free)
WorkItem* item = pool.allocate();
// Do work
item->task_id = thread_id * 10000 + i;
process_work(*item);
// Return to pool (lock-free)
pool.free(item);
}
}
int main() {
// Create pool (must be power of 2)
slick::ObjectPool<WorkItem> pool(2048);
// Launch multiple producer/consumer threads
std::vector<std::thread> threads;
for (int i = 0; i < 8; ++i) {
threads.emplace_back(worker_thread, std::ref(pool), i);
}
// Wait for completion
for (auto& t : threads) {
t.join();
}
return 0;
}

The pool uses atomic compare-and-swap (CAS) operations to coordinate multiple producers and consumers without locks (a simplified sketch of the idea follows this list):

- Producers (threads calling allocate()) atomically reserve slots from the pool
- Consumers (threads calling free()) atomically return objects to the pool
- Ring buffer wrapping is handled atomically without blocking
- No spinlocks, no mutexes - truly wait-free for successful operations
The implementation is optimized to prevent false sharing on modern CPUs:
Cache Line 0 (64 bytes) - Producer owned:
├─ reserved_ (atomic counter for producers)
└─ size_ (pool size)
Cache Line 1 (64 bytes) - Consumer owned:
└─ consumed_ (atomic counter for consumers)
Cache Lines 2+ - Shared data:
├─ control_ (slot metadata)
├─ buffer_ (actual objects)
└─ free_objects_ (available object pointers)
Key benefits:
- Producers and consumers operate on separate cache lines
- No cache line bouncing under contention
- Near-linear scaling with thread count
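As an illustration of the technique (not the library's actual member layout), separating producer-owned and consumer-owned counters onto different cache lines can be expressed with alignas:

```cpp
#include <atomic>
#include <cstdint>

// Illustrative only: names and layout are assumptions, not the real ObjectPool.
// 64 bytes is a common cache-line size; C++17's
// std::hardware_destructive_interference_size can be used where available.
struct CountersSketch {
    alignas(64) std::atomic<uint64_t> reserved_{0};  // producer-owned cache line
    uint64_t size_{0};                               // read-mostly, shares the producer line

    alignas(64) std::atomic<uint64_t> consumed_{0};  // consumer-owned cache line
};

static_assert(alignof(CountersSketch) >= 64, "counters should be cache-line aligned");
```

With this split, a producer updating reserved_ never invalidates the cache line holding consumed_, and vice versa.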
ObjectPool instance
├─ Heap: buffer_[size_] (actual objects)
├─ Heap: control_[size_] (slot metadata)
├─ Heap: free_objects_[size_] (free list)
└─ Stack: reserved_, consumed_ (atomics)
Tested on: Intel Xeon E5-2680 v4 @ 2.4GHz, 256GB RAM, Linux 5.15
| Scenario | Latency (avg) | Throughput | Scaling |
|---|---|---|---|
| Single thread | 12 ns | 83M ops/sec | - |
| 2 threads (no contention) | 15 ns | 133M ops/sec | 1.6x |
| 4 threads (low contention) | 18 ns | 222M ops/sec | 2.7x |
| 8 threads (high contention) | 24 ns | 333M ops/sec | 4.0x |
| 16 threads (very high contention) | 35 ns | 457M ops/sec | 5.5x |
| Implementation | Allocation Latency | Thread Safety |
|---|---|---|
| slick_object_pool | ~12-35 ns | Lock-free |
| std::allocator | ~50-200 ns | Thread-local |
| boost::pool | ~20-40 ns | Mutex-based |
| tcmalloc | ~30-60 ns | Thread-local |
| jemalloc | ~25-50 ns | Thread-local |
Note: Benchmarks are system-dependent. Run your own tests for production use.
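A minimal, self-contained timing loop along these lines can serve as a starting point for your own measurements (single-threaded; the Payload struct, pool size, and iteration count are arbitrary choices):

```cpp
#include <chrono>
#include <cstdio>
#include <slick/object_pool.h>

struct Payload { int id; double value; };

int main() {
    slick::ObjectPool<Payload> pool(1024);
    constexpr int iterations = 1'000'000;

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        Payload* p = pool.allocate();
        p->id = i;  // touch the object so the loop does real work
        pool.free(p);
    }
    auto stop = std::chrono::steady_clock::now();

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
    std::printf("avg allocate+free: %.1f ns\n", static_cast<double>(ns) / iterations);
    return 0;
}
```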
// Create pool in local memory
ObjectPool(uint32_t size);

Parameters:
- size: Number of objects in the pool (must be a power of 2)
// Allocate an object from the pool
T* allocate();

Returns a pointer to an object from the pool. If the pool is exhausted, a new object is allocated from the heap.
// Return an object to the pool
void free(T* obj);

Returns the object to the pool if it belongs to the pool; otherwise deletes it.
// Query method
constexpr uint32_t size() const noexcept; // Pool size

Objects stored in the pool must satisfy:

static_assert(std::is_default_constructible_v<T>);

Valid types:
- POD types (int, float, etc.)
- std::string, std::vector, and other standard containers
- Structs with default constructors
- Classes with default constructors
Invalid types:
- Types without default constructors
- Types with deleted default constructors
| Platform | Status |
|---|---|
| Windows (MSVC) | ✅ Tested |
| Windows (MinGW) | ✅ Tested |
| Linux | ✅ Tested |
| macOS | ✅ Tested |
| FreeBSD | |
| Unix-like | |
- C++ Standard: C++20 or later
- Compiler:
  - GCC 10+
  - Clang 11+
  - MSVC 2019 16.8+
- Dependencies:
  - Standard library only
- OS: Windows, Linux, macOS, or POSIX-compliant system
On Linux, link the rt and atomic libraries if your toolchain requires them:
target_link_libraries(your_target PRIVATE slick_object_pool rt atomic)

Or on the command line:
g++ -std=c++20 your_app.cpp -lrt -latomic -o your_app

To build and run the tests:

mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SLICK_OBJECTPOOL_TESTS=ON
cmake --build .
ctest --output-on-failure

AddressSanitizer (Memory errors):
Linux/macOS:
mkdir build-asan && cd build-asan
cmake .. -DENABLE_ASAN=ON -DBUILD_SLICK_OBJECTPOOL_TESTS=ON
cmake --build .
ctest --output-on-failure

Windows:
# Build
cmake -B build -DENABLE_ASAN=ON -DBUILD_SLICK_OBJECTPOOL_TESTS=ON
cmake --build build --config Debug

ThreadSanitizer (Thread safety - Linux/macOS only):
mkdir build-tsan && cd build-tsan
cmake .. -DENABLE_TSAN=ON -DBUILD_SLICK_OBJECTPOOL_TESTS=ON
cmake --build .
ctest --output-on-failure

UndefinedBehaviorSanitizer (UB detection - Linux/macOS only):
mkdir build-ubsan && cd build-ubsan
cmake .. -DENABLE_UBSAN=ON -DBUILD_SLICK_OBJECTPOOL_TESTS=ON
cmake --build .
ctest --output-on-failure

See TESTING.md for detailed sanitizer documentation.

To install the library system-wide:
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local
cmake --build .
sudo cmake --install .

| Option | Default | Description |
|---|---|---|
| BUILD_SLICK_OBJECTPOOL_TESTS | ON | Build unit tests |
| CMAKE_BUILD_TYPE | Debug | Build type (Debug/Release) |
| ENABLE_ASAN | OFF | Enable AddressSanitizer |
| ENABLE_TSAN | OFF | Enable ThreadSanitizer (Linux/macOS) |
| ENABLE_UBSAN | OFF | Enable UndefinedBehaviorSanitizer (Linux/macOS) |
- ✅ Multiple producers can call allocate() concurrently
- ✅ Multiple consumers can call free() concurrently
- ✅ Mixed operations (allocate + free) are safe
- ❌ reset() is NOT thread-safe (use it only when no other threads are active; see the pattern below)
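A sketch of the safe pattern, assuming reset() takes no arguments and restores the pool to its initial state (the exact semantics are defined by the library, not by this example):

```cpp
#include <functional>
#include <slick/object_pool.h>
#include <thread>
#include <vector>

struct Job { int id; };

void run_jobs(slick::ObjectPool<Job>& pool) {
    Job* j = pool.allocate();
    j->id = 0;
    pool.free(j);
}

int main() {
    slick::ObjectPool<Job> pool(256);

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back(run_jobs, std::ref(pool));
    }
    for (auto& t : workers) {
        t.join();  // make sure no thread can still touch the pool
    }

    // Only now, with every worker joined, is it safe to call reset().
    pool.reset();  // assumed signature: no arguments
    return 0;
}
```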
The implementation uses C++20 atomic memory ordering:
- acquire-release for synchronization between threads
- relaxed for performance where ordering isn't required
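As a generic illustration of this pairing (not the pool's internal code), a release store on one thread synchronizes with an acquire load on another, making the preceding plain write visible:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;
std::atomic<bool> ready{false};

int main() {
    std::thread producer([] {
        payload = 42;                                  // plain write
        ready.store(true, std::memory_order_release);  // publish
    });
    std::thread consumer([] {
        while (!ready.load(std::memory_order_acquire)) {
            // spin until the release store above becomes visible
        }
        assert(payload == 42);  // guaranteed after the acquire load observes 'true'
    });
    producer.join();
    consumer.join();
    return 0;
}
```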
// ✅ Good: Power of 2
slick::ObjectPool<T> pool(1024);
// ❌ Bad: Not power of 2 (will assert in debug)
slick::ObjectPool<T> pool(1000);
// Rule: size must be 2^N (256, 512, 1024, 2048, etc.)

Sizing guidelines (see the helper sketch after this list):
- Estimate peak concurrent allocations
- Add 20-50% headroom for bursts
- Round up to next power of 2
- Monitor pool exhaustion in production
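A small C++20 helper along these lines (std::bit_ceil is from <bit>; the 20% headroom factor is only an example) turns an estimated peak into a valid pool size:

```cpp
#include <bit>       // std::bit_ceil (C++20)
#include <cstdint>
#include <iostream>

// Round an estimated peak allocation count, plus headroom, up to a power of 2.
uint32_t pool_size_for(uint32_t peak_concurrent, double headroom = 0.2) {
    auto with_headroom = static_cast<uint32_t>(peak_concurrent * (1.0 + headroom));
    return std::bit_ceil(with_headroom);  // smallest power of 2 >= the input
}

int main() {
    uint32_t size = pool_size_for(700);  // 700 * 1.2 = 840 -> rounds up to 1024
    std::cout << size << '\n';           // prints 1024
    // slick::ObjectPool<MyObject> pool(size);  // size is guaranteed to be a power of 2
    return 0;
}
```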
// When pool is exhausted, allocate() allocates from heap
T* obj = pool.allocate(); // May return heap-allocated object
// free() detects and handles both cases
pool.free(obj); // Works for pool or heap objects

// ✅ Good: Simple POD struct
struct SimpleType {
int id;
double values[10];
char name[32];
};
// ✅ Good: Types with STL containers
struct ComplexType {
int id;
std::string name; // OK!
std::vector<double> v; // OK!
};
// ❌ Bad: No default constructor
struct BadType {
BadType(int x) : value(x) {} // No default constructor
int value;
};
// ✅ Fix: Add default constructor
struct FixedType {
FixedType() = default; // Default constructor
FixedType(int x) : value(x) {}
int value = 0;
};

- Pool size must be a power of 2 - Required for efficient bitwise indexing
- Type must be default constructible - Required for pool initialization
- No automatic resize - Pool size is fixed at construction
- No memory reclamation - Objects returned to pool are reused, not freed
Q: What happens when the pool is exhausted?
A: allocate() automatically allocates from heap. free() detects and deletes heap-allocated objects.
Q: Can I use std::string or std::vector in pooled objects?
A: Yes! The pool works with any default constructible type, including std::string, std::vector, and other standard containers.

Q: Is the pool real-time safe?
A: Operations are lock-free but not wait-free, and allocation may fall back to the heap when the pool is exhausted.
Contributions are welcome! Please follow these guidelines:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
- Follow existing code style (4 spaces, no tabs)
- Add tests for new features
- Update documentation
- Ensure all tests pass
This project is licensed under the MIT License - see the LICENSE file for details.
Copyright (c) 2025 SlickQuant
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
- Design inspired by lock-free queue algorithms
- Cache optimization techniques from LMAX Disruptor
- Part of the SlickQuant performance toolkit
- slick_queue - Lock-free MPMC queue
Note: slick_object_pool is a standalone library with no external dependencies.
Made with ⚡ by SlickQuant