One of Redis’s best features is the ability to bring the work to the server. The inability to scale a single node’s in-memory work across multiple threads is often dismissed as a minor caveat, but I think there is room for discussion. So, what would it look like if we doubled down on threading?
The Lua context runs on the main thread to guarantee atomicity and isolation. But we can thread scripts without surrendering either guarantee. Such a system would retain all existing functionality while providing a means of efficiently moving more computation to the data itself:
All reads/writes still happen on the main thread.
A child Lua thread is spawned with a new, distinct command that takes the same arguments as EVAL (ex: SPAWN).
Child thread scripts ensure mutual exclusion with each other through mutexes.
Scripts running on the main thread suspend child thread I/O completely; EVAL works identically to how it does now.
Child threads can invoke EVAL to perform I/O on the main thread with the strongest guarantees.
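The model above can be sketched as follows. This is an illustrative toy in Python, not Redis internals: a single owner thread applies every read/write in order (standing in for the main thread), and spawned worker "scripts" do their heavy computation off-thread, crossing back only for I/O. The names (`SPAWN`-style `spawned_script`, `eval_on_main`, etc.) are hypothetical.

```python
import queue
import threading

db = {}                    # only the main thread ever touches this
requests = queue.Queue()   # (operation, reply_queue) pairs

def main_loop():
    """Main thread: applies one submitted operation at a time, in order,
    so atomicity/isolation hold without locking the data structures."""
    while True:
        fn, reply = requests.get()
        if fn is None:          # shutdown sentinel
            break
        reply.put(fn(db))

def eval_on_main(fn):
    """What a child thread's EVAL would amount to: block until the
    main thread has run fn against the database, then return its result."""
    reply = queue.Queue()
    requests.put((fn, reply))
    return reply.get()

def spawned_script(n):
    """A SPAWN-style script: compute off the main thread, write via EVAL."""
    total = sum(i * i for i in range(n))          # heavy, data-free work
    eval_on_main(lambda d: d.__setitem__(f"sum:{n}", total))

main = threading.Thread(target=main_loop)
main.start()
workers = [threading.Thread(target=spawned_script, args=(n,)) for n in (10, 100)]
for w in workers:
    w.start()
for w in workers:
    w.join()
snapshot = eval_on_main(lambda d: sorted(d.items()))
requests.put((None, None))
main.join()
print(snapshot)  # [('sum:10', 285), ('sum:100', 328350)]
```

The design point this illustrates: worker threads never share the database, so no per-structure locking is needed; serialization happens only at the request queue.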
This is the functional equivalent of running a bunch of Lua clients on the same hardware outside of Redis. If you run your own hardware, it's not hard to roll your own. So why is this useful? ElastiCache and other platforms run their own binaries and provide hardware with high core counts that go unexploited.
Applications requiring high throughput on scripts that often have no overlap with the data being concurrently manipulated by the other threads stand to benefit greatly. My application falls into this category, so I’m convinced there is a problem here that could be solved. What else do we gain by scaling vertically in this way? Do we lose anything? I perceive this as a purely progressive enhancement.
I think the proposed change will be much more complicated than you expect, and it won't be as performant as Redis Cluster. Fundamentally, you would need a lock around every read or write to the main database: for example, a Lua script can't be allowed to access a key while the backend dictionary is resizing. So you would mostly just be bottlenecked on database access. I also think adding this locking would be a detriment to the maintainability of Redis.
As another solution to your problem, have you considered cluster mode with hash tags as an alternative to your suggestion? It seems like it would meet your requirement of improving throughput on mutually exclusive data, and it would let you shard out instead of scaling up. You could also consider a module to do the same thing.
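For reference, hash tags work by hashing only the substring between the first `{` and the following `}` (when non-empty), so related keys land in the same slot and remain usable together in multi-key commands and scripts. The sketch below reimplements the slot computation from the Redis Cluster specification (CRC16-XMODEM mod 16384) to show the effect:

```python
def crc16(data: bytes) -> int:
    """CRC16-XMODEM (poly 0x1021, init 0), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16384 cluster slots, honoring hash tags."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:   # only a non-empty {tag} counts
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# All three keys hash as "user:1", so they share a slot on one shard:
print(key_slot("{user:1}:profile"), key_slot("{user:1}:sessions"), key_slot("user:1"))
```

So a script that only ever touches one tag's keys runs entirely on one shard, and unrelated tags scale out across shards, which is the sharded analogue of the "no overlap between threads" workload described above.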