Hi, I traced a minor bug. If I set a large timeout value, e.g. 3000 s, in the "reserve-with-timeout" command, the queue returns to the client immediately with "TIMED_OUT", the same effect as "reserve-with-timeout 0". I took some time to trace it, and it seems there is an overflow during a type conversion. Here is a simple patch that fixes it (pending_timeout is an 'int', which overflows when it is multiplied by SECOND):
Previously, a malicious user could craft a job payload and inject beanstalk commands without the client application knowing. (An extra-careful client library could check the size of the job body before sending the put command, but most libraries do not do this, nor should they have to.) Reported by Graham Barr.
A DELETE record can be written at any time, including times when we are unable to shuffle reserved bytes around. So, to make sure that we don't lose any reserved bytes, the current binlog must always have reserved space for some number of complete DELETE records. This lets us fill up the current binlog exactly, with no lost reserved bytes. The only time we are able to shuffle things around to maintain this property is when reserving bytes. We used to try to accomplish this by checking the reserved size of the current_binlog after each reservation, but that isn't sufficient: the next binlog can become current and run out of space without any reservations taking place, which would force us to lose bytes. Now we check that the reservation sizes of the current_binlog *and all future binlogs* are suitable to become the current binlog and fill up exactly. Closes gh-38.