This is the test-bench repository where the fastest multithreaded data-caching algorithms are compared. If you have your own thread-safe cache variant, please send me a link to its C/C++ implementation (specnet.messenger@gmail.com) so that I can add it to the tests.
Step-by-step instructions on how to run a test from source code or from the pre-built binaries, a description of the benchmark, and descriptions of the caching algorithms are here: WiKi
To use the OnCache*.h family of templates, you need to implement two methods used to look up an item by key:
- Hash function: inline uint64_t get_hash(YourKey). For example, to hash a string, I cast the pointer returned by std::string::c_str() to an array of three 64-bit unsigned integers and sum them; these calculations are enough to distribute the elements into the buckets of the hash table:
inline uint64_t get_hash(const ElemNSizeKey *lhv) {
    uint64_t re = 0;
    if (lhv->key.keyArray) {
        // Sum the key's 64-bit words.
        for (int i = 0; i < lhv->key.keyLongSize; ++i) {
            re += lhv->key.keyArray[i];
        }
    }
    // Keep the result within the signed 64-bit range (INT64_MAX).
    return (re < 9223372036854775807ll) ? re : (re >> 1);
}
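For illustration, here is a minimal sketch of the same idea applied directly to std::string keys (hypothetical signature, not part of OnCache): the first 24 bytes of the string are read as three 64-bit words and summed. memcpy is used instead of a raw pointer cast to avoid strict-aliasing issues, and shorter strings are zero-padded.

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Hypothetical get_hash for std::string keys, following the scheme above:
// read the first 24 bytes of the string as three 64-bit words and sum them.
inline uint64_t get_hash(const std::string &key) {
    uint64_t words[3] = {0, 0, 0};
    std::memcpy(words, key.data(),
                key.size() < sizeof(words) ? key.size() : sizeof(words));
    uint64_t re = words[0] + words[1] + words[2];
    // Keep the result within the signed 64-bit range, as in the template above.
    return (re < 9223372036854775807ull) ? re : (re >> 1);
}
```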
- Comparison function: inline int compare(YourKey lhv, YourKey rhv). For example, for std::string, first compare the leading letters cast to uint64_t[3], and only if they are equal call std::string::compare(). This way we compare 8 bytes per step instead of byte by byte:
inline int compare(const ElemNSizeKey *lhv,
                   const ElemNSizeKey *rhv) {
    if (lhv->key.keyArray) {
        // Fast path: compare 8 bytes per step.
        for (int i = lhv->key.keyLongSize_1; i >= 0; --i) {
            if (lhv->key.keyArray[i] > rhv->key.keyArray[i]) return 1;
            if (lhv->key.keyArray[i] < rhv->key.keyArray[i]) return -1;
        }
    } else if (rhv->key.keyArray) {
        return -1;  // a null key sorts before a non-null key
    }
    return lhv->data.compare(rhv->data);
}
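The same fast path can be sketched for plain std::string keys (hypothetical signature, not the OnCache template): compare the leading 8 bytes as one 64-bit word, and only on a tie fall back to the full std::string::compare(). The resulting order is total and consistent, though not lexicographic, which is sufficient for hash-table bucket lookups.

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Hypothetical compare for std::string keys: 8 bytes per step on the
// fast path, full comparison only when the leading words are equal.
inline int compare(const std::string &lhv, const std::string &rhv) {
    uint64_t lw = 0, rw = 0;
    std::memcpy(&lw, lhv.data(), lhv.size() < 8 ? lhv.size() : 8);
    std::memcpy(&rw, rhv.data(), rhv.size() < 8 ? rhv.size() : 8);
    if (lw > rw) return 1;
    if (lw < rw) return -1;
    int r = lhv.compare(rhv);
    return (r > 0) - (r < 0);  // normalize to -1 / 0 / 1
}
```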
And most importantly, your Key must behave like a pointer (it must allow assignment of nullptr):
void clear() {
    key = nullptr;
    // ...
}
To make the key self-destruct when a cached item is evicted, make it part of the data object or hold it in its own shared_ptr:
std::shared_ptr<YourKey> key;
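A minimal sketch of this lifetime pattern (hypothetical item and field names, with std::string standing in for YourKey): the cached item owns its key through a shared_ptr, so resetting the pointer on eviction destroys the key automatically once no other holder references it.

```cpp
#include <memory>
#include <string>

// Hypothetical cached item: the key self-destructs on clear() once the
// last shared_ptr reference to it is released.
struct CachedItem {
    std::shared_ptr<std::string> key;  // YourKey = std::string here
    int data = 0;

    void clear() {
        key = nullptr;  // releases this item's reference to the key
        data = 0;
    }
};
```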
You can see usage examples in the tester classes:
- Single-threaded, fastest:
- Multithreaded, thread-safe:
- How to call the methods and work in parallel threads:
Graphs of the multithreaded test results are shown at the end of the WiKi page.
The graphs are normalized to the running time of the first data block: the processing time for the 10x block is divided by 10, the processing time for the 100x block is divided by 100, and so on. If a graph stays horizontal, the algorithm runs in O(n) time.
Copyright (c) Dmitriy Bondarenko. Feel free to contact me: specnet.messenger@gmail.com