OUT_OF_MEMORY error with -d AND -b #40

nfo opened this Issue · 12 comments

3 participants


On my MacBook, beanstalkd 1.4.4 raises an OUT_OF_MEMORY error after ~15k jobs pushed, if I detach beanstalkd (-d) AND activate the binlog (-b).
If I remove either of these parameters, I see no error even after 1 million jobs pushed.

require 'rubygems'
require 'beanstalk-client'

body = {
  :fdsf => 'gkljfjkldsgjkldsjkgldskljfgjklsdf sdfkljfdslk',
  :jklsf => 'gkljfjkldsgjkldsjkgldskljfgjklsdf 878734',
  :gdfhjk => 'gkljfjkldsgjkldsjkgldskljfgjklsdf (fdskljgfhsdkl)è!ç'
}

conn =['']) # server address elided in the original report

1_000.times do |i|
  puts i * 2000
  2_000.times do
    conn.yput body, 1, 0, 8_035_200 # pri, delay, ttr

puts 'ok'
kr commented

This is because beanstalk does a chdir("/") when it detaches, and the relative path given on the command line is being misinterpreted when it comes time to open a second binlog segment.

To get consistent behavior, you should provide an absolute path to the -b option.

I'll have beanstalkd require an absolute path in the future, when you combine -b and -d.
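The effect kr describes can be sketched in a few lines of Ruby (this is an illustration of the path-resolution behavior, not beanstalkd's actual code): after the `chdir("/")` that daemonization performs, a relative binlog path resolves against `/` instead of the launch directory, while an absolute path is unaffected.

```ruby
# Illustration of the bug: a relative path means different things
# before and after the chdir("/") done when detaching.

before = File.expand_path("binlog")  # resolved against the launch directory

Dir.chdir("/")                       # what a daemon does when it detaches
after = File.expand_path("binlog")   # now resolves to "/binlog"

# An absolute path, as recommended above, is stable across the chdir.
absolute = File.expand_path("/var/spool/beanstalkd")

puts before
puts after
puts absolute
```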

kr commented

Require absolute path when -b is used with -d.

Closed by a4b5484.


Still an issue. CentOS 5.2 (Final).
Launched with:
$ beanstalkd -d -b /var/spool/beanstalkd -f 1000 -u beanstalkd

Despite the absolute path being used and the newest beanstalkd version (1.4.6), it crashes with an OUT_OF_MEMORY error after about 10-20 thousand jobs.


Found the conditions under which it's a problem:
1. Start beanstalkd with flags above
2. Use it
3. do killall -9 beanstalkd
4. Start it again
5. On first use, it immediately crashes

That's fake persistence.


The only hackaround is to start beanstalkd again, do a plain killall, and restart it again.


Also, it seems to exhibit this behavior only when the queue is empty before crash.


Running in non-detached mode does the same. Debug info:
beanstalkd: binlog.c:589 in maintain_invariants_iter: newest binlog has invalid 182 reserved
beanstalkd: prot.c:841 in enqueue_incoming_job: server error: OUT_OF_MEMORY


Will post a new issue, though

kr commented

I've reproduced this, and I'm working on a fix.

kr commented

Just for clarity, the new issue is at and it's fixed as well.

kr commented
This issue was closed.