
Request: exact timeouts in BLPOP etc #874

Open
v0s opened this Issue Jan 10, 2013 · 2 comments

2 participants

@v0s
v0s commented Jan 10, 2013

Hello. This is a feature request / documentation bug report.

The documentation page for BLPOP says:
The timeout argument is interpreted as an integer value specifying the maximum number of seconds to block.

In reality, BLPOP blocks for a time somewhere between timeout and (timeout + 1) seconds, only giving up on blocking when the system UNIX timestamp changes (which has one-second precision).
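For illustration, here is a minimal sketch (invented struct and function names, not the actual Redis source) of why a deadline tracked in whole seconds produces a wait between timeout and timeout + 1:

```c
#include <time.h>

typedef struct {
    time_t deadline;          /* absolute deadline in whole UNIX seconds */
} blocking_state;

void arm_timeout(blocking_state *bs, int timeout_sec) {
    /* time(NULL) truncates to the current second S, so the deadline is
     * S + timeout_sec no matter where inside second S the client
     * actually started blocking. */
    bs->deadline = time(NULL) + timeout_sec;
}

int timed_out(const blocking_state *bs, time_t cached_unixtime) {
    /* The server only checks a second-resolution cached clock, and the
     * client is released once that clock has moved past the deadline,
     * i.e. somewhere inside second S + timeout_sec + 1. Depending on
     * the sub-second offset at which blocking started, the real wait is
     * therefore anywhere between timeout_sec and timeout_sec + 1. */
    return cached_unixtime > bs->deadline;
}
```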

So the feature request is: make BLPOP and the other blocking commands honor the exact timeout specified.
As far as I can see, it can be done easily with minimal changes (a rough sketch follows the list below):

  • clientsCron() already gets called REDIS_HZ times per second
  • Use a floating-point type instead of time_t for blockingState::timeout
  • Use floating-point for redisServer::unixtime as well
  • Set redisServer::unixtime with clock_gettime() or something similar instead of time(NULL)

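A rough sketch of the idea (invented names, not a patch against the Redis source; it uses integer milliseconds rather than a floating-point type, but the effect is the same):

```c
#include <time.h>

/* Millisecond-resolution "now", analogous to a higher-precision
 * server.unixtime. CLOCK_REALTIME is used to mirror the UNIX
 * timestamp semantics of time(NULL). */
long long mstime_now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

typedef struct {
    long long deadline_ms;    /* absolute deadline in milliseconds */
} blocking_state;

void arm_timeout(blocking_state *bs, int timeout_sec) {
    bs->deadline_ms = mstime_now() + (long long)timeout_sec * 1000;
}

/* Checked from a cron that already runs REDIS_HZ times per second, so
 * the worst-case overshoot drops from about one second to about
 * 1/REDIS_HZ of a second. */
int timed_out(const blocking_state *bs) {
    return mstime_now() > bs->deadline_ms;
}
```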
The current behavior is really annoying if you think of a situation where many clients (1000+) constantly and simultaneously block on different keys with short timeouts (say, 2 seconds). This situation is very common in environments where users send each other messages.
With the current implementation, Redis gives up on blocking for all of them at the moment the system time ticks over to the next second, which causes a load spike every second and prevents an even distribution of server load.

I would appreciate it if this gets implemented; it could save tons of pain.

@v0s
v0s commented Jan 10, 2013

Also, BLPOP could accept a float for the timeout, like what was added for EXPIRE in 2.6.x, but that's beyond my feature request :-)

@jgehrcke

+1

The current implementation does not provide the precision required for fine-grained timeout control. If this isn't going to be changed, the docs need to be clarified.

@jmalloc referenced this issue in IcecaveStudios/chastity on Nov 14, 2014
Closed

Redis based blocking driver? #2
