If a job gets buried, the binlog starts to stack up. This is very unfortunate in high-traffic environments, since it fills both disk and memory very quickly.
There's a good blog post about this here: http://blog.sendapatch.se/2010/may/how-do-you-handle-job-failures-really.html
Yeah, I need to fix this, but it should not be a separate binlog, partly because buried jobs are not the only way to trigger this problem. I have a much more general solution, which I will describe and implement shortly.
kr, has there been any work on this?
Yeah, I've run into this as well. I had 8 GB of binlogs, and beanstalkd would keep them all mapped into RAM after a restart. It also made restarting slow.
compact the WAL; fixes #43
What is the beanstalkd version which contains this fix?
It's not released yet. I'm working on it when I can, but I've been busier lately.
It'll be out as soon as it's ready.
now 1 year since the fix - is it released and tested?
are there any other workarounds? we don't have stuck or buried jobs, but we have 400G of binlogs.
can the old binlog files just be removed from the filesystem safely while beanstalk is running instead? (so a find that purges them when they hit 30 days old, for example)
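to make the question concrete, something like the following is what I mean (the directory path and the `binlog.*` naming pattern are assumptions; whether this is safe against a running beanstalkd is exactly what I'm asking):

```shell
# Demonstrated against a scratch directory, NOT a live beanstalkd dir.
dir=$(mktemp -d)
touch -d '40 days ago' "$dir/binlog.1"  # stale file, older than 30 days
touch "$dir/binlog.2"                   # fresh file, should be kept
find "$dir" -name 'binlog.*' -mtime +30 -print  # prints only the stale binlog.1 path
rm -rf "$dir"
```

swapping `-print` for `-delete` would do the actual purge, once we know it's safe.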
You could run the latest from git...
it -appears- that this is fixed in 1.5 (released 3 months ago). I don't see any easy way to tell what issues or commits are in a particular release, though, so I'm speculating at this point.
Yes, this fix is released in version 1.5, as documented in the release notes.
Also, each release is tagged in git. You can see a complete list of changes
between releases with your own git tools or github's compare view:
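For example, from a clone of the repo (tag names here assume the usual vX.Y scheme):

```shell
# List the release tags, then show every commit between two releases.
git tag
git log --oneline v1.4..v1.5
```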