Higher level cache above the standard RocksDB cache #935
Currently RocksDB caches data as raw bytes. This means that deserializing the same record twice yields two separate transaction objects, which may then be manipulated in different threads.
We want to create a cache layer above the DB implementation that holds X K transactions and only writes them to the actual DB when evicting. This should allow us to avoid most reads from the DB, reducing I/O overhead.
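The write-back idea above could be sketched as follows. This is a minimal, dependency-free illustration using a JDK `LinkedHashMap` in LRU mode; the class name `TxCache` and the `persistFn` callback (standing in for the RocksDB write) are hypothetical, not from the actual codebase, and a production version would likely use Guava's cache as discussed below:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch of the proposed write-back layer: transactions live in an in-memory
// LRU map and are only persisted to the underlying DB when evicted.
public class TxCache<K, V> {
    private final int capacity;
    private final BiConsumer<K, V> persistFn; // invoked on eviction, e.g. a RocksDB put
    private final Map<K, V> map;

    public TxCache(int capacity, BiConsumer<K, V> persistFn) {
        this.capacity = capacity;
        this.persistFn = persistFn;
        // accessOrder = true makes the map iterate least-recently-used first,
        // so removeEldestEntry sees the LRU entry as the eviction candidate.
        this.map = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > TxCache.this.capacity) {
                    // Write-back: persist only when the entry falls out of the cache.
                    persistFn.accept(eldest.getKey(), eldest.getValue());
                    return true;
                }
                return false;
            }
        };
    }

    public synchronized void put(K key, V value) { map.put(key, value); }
    public synchronized V get(K key) { return map.get(key); }
}
```

A batched variant (evicting a fraction of the pool at once, as discussed under open questions) would collect several LRU entries and persist them in a single DB write.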
Open Questions (optional)
We need to decide how many transactions we want to store and calculate how much memory that would occupy on the node. Without any calculations, I'd like to be able to store enough transactions to support 1000 TPS, but that would likely require a ton of memory. We can start with 50-100 and see what that gets us. The eviction policy should evict a fraction of the pool at a time, e.g., 1%, 3%, 10%, ... whatever makes sense for the given size.
I'm open to configurability unless we can squeeze a sufficient number of TXs into a very small memory footprint (which I reckon we can't). In that case I'd recommend adding a minimum value for the configuration parameter, e.g., at least 100-200 MB worth of transactions, which should help even low-resource nodes somewhat.
I'm a bit reserved about making the cache size dynamic. We'd have to monitor/count the rate of incoming TXs and react to it, meaning that if a large jump in TPS happened, we'd have to evict a lot of transactions very quickly before the cache mechanism adjusts. But I'm happy for someone to prove me wrong with an approach that would work here.
To address these problems, I propose that we create a new cache to replace the RocksDB block cache. It would either be based on Guava's cache or be a synchronized map of weak references like https://github.com/ehcache/ehcache3/blob/606c5dcba355f5ed1abb002d455ef05b5899f48e/core/src/main/java/org/ehcache/core/collections/ConcurrentWeakIdentityHashMap.java but with concurrent purging.
Every time we store to the DB, we simply also write to the cache.
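The weak-reference alternative mentioned above could look roughly like this. It is a sketch in the spirit of Ehcache's `ConcurrentWeakIdentityHashMap`, with purging folded into normal operations instead of a dedicated thread; the class and method names here are illustrative, not from any existing codebase:

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Concurrent map of weakly referenced values: entries vanish once the GC
// clears them, and cleared references are purged opportunistically on each
// put/get ("concurrent purging") rather than by a background thread.
public class WeakValueCache<K, V> {
    private final ConcurrentMap<K, KeyedRef<K, V>> map = new ConcurrentHashMap<>();
    private final ReferenceQueue<V> queue = new ReferenceQueue<>();

    // WeakReference subclass that remembers its key, so a cleared reference
    // pulled off the queue can be removed from the map.
    private static final class KeyedRef<K, V> extends WeakReference<V> {
        final K key;
        KeyedRef(K key, V value, ReferenceQueue<V> q) {
            super(value, q);
            this.key = key;
        }
    }

    public void put(K key, V value) {
        purge();
        map.put(key, new KeyedRef<>(key, value, queue));
    }

    public V get(K key) {
        purge();
        KeyedRef<K, V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }

    // Drain references already cleared by the GC and drop their map entries.
    // remove(key, ref) only removes if the mapping still points at this exact
    // reference, so a newer entry for the same key is never clobbered.
    @SuppressWarnings("unchecked")
    private void purge() {
        KeyedRef<K, V> ref;
        while ((ref = (KeyedRef<K, V>) queue.poll()) != null) {
            map.remove(ref.key, ref);
        }
    }
}
```

With the write-through rule above, a DB store would call `put` on this cache at the same time, so subsequent reads hit memory as long as the value stays reachable.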
For whoever is interested, this is the cache Hans created:
He said that the change failed the tests, and fixing it was a problem, which is why we didn't continue.