
Comparative testing of concurrent multithreaded cache algorithms (LRU, MRU, MQ, ..)


DimaBond174/cache_multi_thread


Concurrent caching algorithms.

This repository is a test bench where the fastest algorithms for multithreaded data caching are compared. If you have your own thread-safe cache, please send me a link to its C/C++ implementation (specnet.messenger@gmail.com) so that I can add it to the tests.

Current test results.

Search by string key:

TestCase2.1thread

Search by number-array key:

TestCase1.1thread

Step-by-step instructions for running a test (from source code or the ready-made compiled programs), a description of the bench, and descriptions of the caching algorithms are here: WiKi


Quick start: using the OnCache*.h caches

To use the OnCache*.h family of templates, you need to implement two functions used to look up an item by key:

  1. Hash function: inline uint64_t get_hash(YourKey). For example, to hash a string I cast the pointer returned by std::string::c_str() to an array of three 64-bit unsigned integers and sum them; these calculations are enough to distribute the elements into the hash table's buckets:
  inline uint64_t get_hash(const ElemNSizeKey *lhv) {
    uint64_t re = 0;
    if (lhv->key.keyArray) {
      for (int i = 0; i < lhv->key.keyLongSize; ++i) {
        re += lhv->key.keyArray[i];
      }
    }
    return (re < 9223372036854775807ll) ? re : (re >> 1);
  }
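The string case described above can be sketched with memcpy instead of a raw pointer cast, which also handles strings shorter than 24 bytes safely. This is my own hedged sketch, not the repository's exact code; the zero-padding strategy is an assumption:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Hypothetical string hash in the spirit described above: treat the first
// 24 bytes of the string as three 64-bit integers and sum them. Shorter
// strings are zero-padded via the local buffer.
inline uint64_t get_hash(const std::string &s) {
    uint64_t words[3] = {0, 0, 0};
    std::memcpy(words, s.c_str(),
                s.size() < sizeof(words) ? s.size() : sizeof(words));
    uint64_t re = words[0] + words[1] + words[2];
    // Keep the result in the positive range of a signed 64-bit value,
    // mirroring the guard in the original code.
    return (re < 9223372036854775807ull) ? re : (re >> 1);
}
```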
  2. Comparison function: inline int compare(YourKey lhv, YourKey rhv). For example, for std::string, first compare the leading characters cast to uint64_t[3], and only if they are equal fall back to std::string::compare(). This way each step compares 8 bytes at a time instead of byte by byte:
  inline int compare(const ElemNSizeKey *lhv,
                     const ElemNSizeKey *rhv) {
    if (lhv->key.keyArray) {
      for (int i = lhv->key.keyLongSize_1; i >= 0; --i) {
        if (lhv->key.keyArray[i] > rhv->key.keyArray[i]) return 1;
        if (lhv->key.keyArray[i] < rhv->key.keyArray[i]) return -1;
      }
    } else if (rhv->key.keyArray) {
      return -1;
    }
    return lhv->data.compare(rhv->data);
  }
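A hedged sketch of the same idea for plain std::string keys, comparing one 8-byte word before falling back to std::string::compare(). The function name and the single-word shortcut are my assumptions, not the repository's code:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Compare the first 8-byte word of each string; only if the words are
// equal do a full character-by-character comparison. Note the word order
// need not match lexicographic order (e.g. on little-endian machines):
// any consistent ordering is enough to locate an equal key.
inline int compare(const std::string &lhv, const std::string &rhv) {
    uint64_t l = 0, r = 0;  // zero-padded for strings shorter than 8 bytes
    std::memcpy(&l, lhv.c_str(), lhv.size() < 8 ? lhv.size() : 8);
    std::memcpy(&r, rhv.c_str(), rhv.size() < 8 ? rhv.size() : 8);
    if (l != r) {
        return l < r ? -1 : 1;
    }
    return lhv.compare(rhv);  // first 8 bytes equal: full comparison
}
```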

Most importantly, your Key must behave like a pointer (it can be assigned nullptr):

    void clear() {
      key = nullptr;
      // ...
    }

To make the key self-destruct when a cached item is evicted, make it part of the data object or store it as an independent shared_ptr:

std::shared_ptr<YourKey> key;
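Putting the two points above together, a cache element whose key "works as a pointer" might look like this. The Elem name and its fields are illustrative, not the library's actual types:

```cpp
#include <memory>
#include <string>

// Minimal sketch: the key lives in a shared_ptr, so clear() releases it
// and the key self-destructs once the last cached reference is gone.
struct Elem {
    std::shared_ptr<std::string> key;
    std::string data;

    void clear() {
        key = nullptr;  // the "empty slot" state the cache can test for
        data.clear();
    }
};
```

After clear(), the slot tests false via its key, which is the pointer-like behavior the templates rely on.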

You can see usage examples in the tester classes:

Fastest single-threaded:

TestOnCacheSMRU

TestOnCacheSMRU2

Thread-safe multithreaded:

TestOnCacheMMRU

TestOnCacheMMRU2

TestOnCacheMLRU

TestOnCacheMLRU2

Examples of calling the methods and working from parallel threads:

TestCase1

TestCase2

Graphs of the multi-threaded test results are shown at the end of the WiKi page.

The graphs are normalized to the processing time of the first data block: the time for the 10x data block is divided by 10, the time for the 100x block by 100, and so on. If a graph stays horizontal, the algorithm runs in O(n) time.
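The normalization can be expressed as a one-line helper. This is a sketch with an illustrative name; the bench itself may compute it differently:

```cpp
// Divide the measured time for an N-times-larger data block by N,
// so an O(n) algorithm plots as a horizontal line.
inline double normalized_time(double raw_time, double block_multiplier) {
    return raw_time / block_multiplier;
}
```

For example, with made-up timings, a 10x block measured at 10.4 units normalizes to 1.04, close to the base block's 1.0, suggesting near-linear scaling.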


Copyright (c) Dmitriy Bondarenko. Feel free to contact me: specnet.messenger@gmail.com
