After switching to Spring Boot 2.1.x, which uses Lettuce instead of Jedis by default, we are seeing a difference in pipelining behaviour. In particular, we noticed a significant increase in the latency of our Redis calls that use pipelining.
For example:
redisTemplate.executePipelined(
    (RedisCallback<Object>) connection -> {
        // 1. use connection.{command} operation
        // 2. use connection.{command} operation
        // 3. use connection.{command} operation
        return null;
    });
The above use of pipelining with the Jedis implementation buffers all three commands and flushes them to the transport at once. In the Lettuce implementation, however, auto-flushing is enabled by default, so each command is flushed to the transport individually and asynchronously.
This behaviour difference affects the latency of Redis operations done via pipelining.
A possible solution for this could be:
In the LettuceConnection class, when getAsyncConnection is called and a RedisAsyncCommands is created, check whether pipelining is enabled and, if so, disable auto-flushing for that instance of RedisAsyncCommands.
Once the closePipeline method is called, call the flush operation and then await the futures as is currently done.
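The proposed flow can be sketched with a simplified, self-contained model. In Lettuce itself, disabling auto-flush and flushing later corresponds to StatefulConnection#setAutoFlushCommands(false) and #flushCommands(); the class below is an illustrative stand-in, not a Spring Data Redis or Lettuce type.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Simplified model of pipelining with auto-flush disabled: commands are
// queued rather than written, and everything is flushed in one batch when
// the pipeline is closed.
class ManualFlushPipeline {

    // Commands invoked while pipelining are queued instead of written out.
    private final Deque<Runnable> writeQueue = new ArrayDeque<>();
    private final List<CompletableFuture<String>> futures = new ArrayList<>();

    // Invoking a command returns a future immediately; nothing reaches the
    // "transport" until closePipeline() flushes the queue.
    public CompletableFuture<String> dispatch(Supplier<String> command) {
        CompletableFuture<String> future = new CompletableFuture<>();
        writeQueue.add(() -> future.complete(command.get()));
        futures.add(future);
        return future;
    }

    // closePipeline(): write all buffered commands at once, then await the
    // futures, as the current implementation already does after flushing.
    public List<String> closePipeline() {
        while (!writeQueue.isEmpty()) {
            writeQueue.poll().run(); // the single batched "flush"
        }
        List<String> results = new ArrayList<>();
        futures.forEach(f -> results.add(f.join()));
        return results;
    }

    public static void main(String[] args) {
        ManualFlushPipeline pipeline = new ManualFlushPipeline();
        CompletableFuture<String> f1 = pipeline.dispatch(() -> "PONG");
        pipeline.dispatch(() -> "OK");
        System.out.println(f1.isDone());              // false: not flushed yet
        System.out.println(pipeline.closePipeline()); // [PONG, OK]
    }
}
```

The key property is visible in main: no future completes until closePipeline runs, which is exactly the batching behaviour Jedis pipelining provides.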
An example of this can be found in the Lettuce documentation: https://github.com/lettuce-io/lettuce-core/wiki/Pipelining-and-command-flushing, under the Async Pipelining code example.
Thanks for the report. Can you share the order of magnitude we're talking about here? Having some numbers would be helpful.
Jedis buffers up to 8192 bytes in its command buffer before it flushes a command in pipelining mode.
Please note that manual flushing with Lettuce can interfere with timeouts. With global timeouts enabled, the command timeout starts ticking the moment a command is invoked on the command facade, not the moment it is written to the transport.
It would probably make sense to make the flushing behavior configurable, defaulting to the current strategy.
Looking at our Datadog metrics for Redis latency: with Jedis we see latencies in the double-digit microsecond range, whereas with the Lettuce setup we are in the ~3-5 millisecond range, so roughly two to three orders of magnitude.
I agree with making it a configurable behaviour; that seems like a more reasonable approach.
I prepared a prototype for a PipelineFlushPolicy. Initially, with three options:
Flush after each command (default behavior)
Flush on execute (suitable for smaller batches)
Buffered flushing (flush after n commands)
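The three options above could be modelled roughly as follows; the interface and method names here are assumptions drawn from this comment, not necessarily the final Spring Data Redis API.

```java
import java.util.function.IntPredicate;

// Sketch of a pipeline flush policy with the three strategies described
// above. A pipelining connection would consult shouldFlushAfter(n) after
// queueing its n-th command, and flushOnClose() when the pipeline closes.
interface PipelineFlushPolicy {

    boolean shouldFlushAfter(int queuedCommands); // true => write queued commands now
    boolean flushOnClose();                       // true => final flush when pipeline closes

    // 1. Flush after each command (the current default behavior).
    static PipelineFlushPolicy flushEachCommand() {
        return of(n -> true, false);
    }

    // 2. Flush only when the pipeline is executed/closed (small batches).
    static PipelineFlushPolicy flushOnExecute() {
        return of(n -> false, true);
    }

    // 3. Buffered: flush every bufferSize commands, plus once on close
    //    to cover any remainder.
    static PipelineFlushPolicy buffered(int bufferSize) {
        return of(n -> n % bufferSize == 0, true);
    }

    static PipelineFlushPolicy of(IntPredicate afterCommand, boolean onClose) {
        return new PipelineFlushPolicy() {
            public boolean shouldFlushAfter(int queuedCommands) {
                return afterCommand.test(queuedCommands);
            }
            public boolean flushOnClose() {
                return onClose;
            }
        };
    }
}
```

With buffered(3), for example, commands 3, 6, 9, ... trigger a flush, and any remaining commands are written when the pipeline is closed.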
This type of arrangement works well with a single server. Pipelining with multiple nodes (Redis Cluster, Master/Replica) can encounter race conditions, as connections are allocated lazily and asynchronously: closing the pipeline while connection creation is still in progress can leave commands unflushed. Right now we don't have a way to address this shortcoming, as it requires a fix in Lettuce.
spring-projects-issues commented Jul 5, 2019
Umair Kayani opened DATAREDIS-1011 and commented
An example can be found in the Lettuce documentation: https://github.com/lettuce-io/lettuce-core/wiki/Pipelining-and-command-flushing, under the Async Pipelining code example.
At the very least, this functionality should be possible, and it can be made configurable if need be.
Affects: 2.1.9 (Lovelace SR9)
Referenced from: pull request #511