

binlog consumes unreasonable disk space with very old jobs #43

jbergstroem opened this Issue · 10 comments

6 participants


If you get a buried job, the binlog will start to stack up. This is very unfortunate in high-traffic environments, since it will fill both disk and memory very quickly.

There's a good blog post about this here:

kr commented

Yeah, I need to fix this, but it should not be a separate binlog, partly because buried jobs are not the only way to trigger this problem. I have a much more general solution, which I will describe and implement shortly.


kr, has there been any work on this?

@kr kr was assigned

Yeah, I've run into this as well. I had 8 GB of binlogs, and beanstalkd would keep them all mapped into RAM after a restart. It also makes restarting slow.

@kr kr added a commit that closed this issue
@kr compact the WAL; fixes #43 9d2f1f5
@kr kr closed this in 9d2f1f5

What is the beanstalkd version which contains this fix?

kr commented

It's not released yet. I'm working on it when I can, but I've been busier lately.
It'll be out as soon as it's ready.


It's now been a year since the fix - has it been released and tested?

are there any other workarounds? we don't have stuck or buried jobs, but we have 400G of binlogs.

can the old binlog files just be removed from the filesystem safely while beanstalk is running instead? (so a find that purges them when they hit 30 days old, for example)
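For illustration, the purge being asked about would look something like the sketch below. Note that the thread never confirms it is safe to delete binlog files out from under a running beanstalkd, so this only demonstrates the mechanics of the question, not a recommendation; the directory layout and `binlog.*` naming are assumptions, and the demo runs in a scratch directory rather than a real binlog directory.

```shell
# Demo in a scratch directory; for real use you would point BINLOG_DIR at
# beanstalkd's binlog directory -- but only if deleting live binlog files
# is confirmed safe, which this thread does not establish.
BINLOG_DIR="$(mktemp -d)"
touch "$BINLOG_DIR/binlog.1"                  # a fresh file
touch -d '40 days ago' "$BINLOG_DIR/binlog.0" # simulate a month-old file
# Purge binlog files older than 30 days:
find "$BINLOG_DIR" -name 'binlog.*' -mtime +30 -delete
ls "$BINLOG_DIR"   # only binlog.1 should remain
```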


You could run the latest from git...


it -appears- that this is fixed in 1.5 (released 3 months ago). I don't see any easy way to tell what issues or commits are in a particular release, though, so I'm speculating at this point.

kr commented

Yes, this fix is released in version 1.5, as documented in the release notes
under "Highlights":

Also, each release is tagged in git. You can see a complete list of changes
between releases with your own git tools or github's compare view:
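The tag-comparison workflow described above can be sketched with plain git. The snippet builds a throwaway repository with two tags so it runs anywhere; in an actual beanstalkd clone you would use the real release tags (e.g. the tag for 1.5) instead, and the commit message below merely echoes the fix mentioned earlier in this thread.

```shell
# Throwaway demo repo standing in for a beanstalkd clone.
repo="$(mktemp -d)"; cd "$repo"
git init -q
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m 'old release'
git tag v1.4.6
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m 'compact the WAL; fixes #43'
git tag v1.5

git tag --list                   # every release tag
git log --oneline v1.4.6..v1.5   # commits that went into the newer release
```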
