Broken pipe error and heavy CPU use with misbehaving clients #22

pkieltyka opened this Issue · 5 comments

3 participants


| See below... This happens when clients send reserve
| commands constantly on an empty queue.

I'm running the master beanstalkd on Snow Leopard and it's been working well, except I've come across the error:

./beanstalkd: prot.c:672 in check_err: writev(): Broken pipe
./beanstalkd: prot.c:672 in check_err: writev(): Broken pipe

Without the clients doing anything else to the queue, this causes beanstalkd to run at 97% CPU usage. I'm not sure what caused this specifically, but it occurs when I have one client adding items to the queue and another removing them (about 10 items per second).
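As it turns out below, the spin comes from a client re-sending "reserve" in a tight loop on an empty tube. The beanstalkd protocol's reserve-with-timeout command lets the server block on the client's behalf instead. A minimal sketch of the relevant protocol framing (the helper names are mine, not part of any client library):

```python
def build_reserve(timeout=None):
    """Build the reserve command bytes.

    "reserve\r\n" blocks until a job is ready; "reserve-with-timeout
    <seconds>\r\n" makes the server reply TIMED_OUT after <seconds>
    on an empty tube, so the client never needs a retry busy-loop.
    """
    if timeout is None:
        return b"reserve\r\n"
    return f"reserve-with-timeout {timeout}\r\n".encode()


def parse_reply(line):
    """Return the status word of a reply line, e.g. RESERVED or TIMED_OUT."""
    return line.decode().strip().split()[0]
```

A well-behaved consumer would send `build_reserve(5)` over the socket and, on a `TIMED_OUT` reply, simply loop back to another blocking reserve rather than hammering the connection.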




Another thing I noticed is that beanstalkd's memory usage increases slowly and it never releases its allocated memory. Perhaps I'm doing something wrong or it's my environment, but I'm interested to know the expected behaviour. By the way, I am using a persistent queue via the binlog.

kr commented

Something is wrong if it takes 97% cpu with so little activity. I will try to reproduce it.


I looked into this some more, and it results from my client code doing crazy things: it was sending "reserve" commands in an infinite loop without any jobs on the queue. I've since fixed my code and it's been working great.

Perhaps, though, you could ignore subsequent reserve commands from a particular connection... or something along those lines.
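The suggestion above could be sketched as a per-connection state check on the server side: a connection already parked waiting for a job is not re-enqueued when it sends another reserve. This is a hypothetical illustration, not how beanstalkd's prot.c actually structures its wait list:

```python
# Hypothetical per-connection state tracker for deduplicating reserves.
IDLE = "idle"
WAITING = "waiting"


class Conn:
    def __init__(self):
        self.state = IDLE


class WaitQueue:
    def __init__(self):
        self.waiting = []  # connections blocked on reserve

    def on_reserve(self, conn):
        """Park the connection; ignore a reserve from one already parked."""
        if conn.state == WAITING:
            return False  # duplicate reserve: drop it, don't re-enqueue
        conn.state = WAITING
        self.waiting.append(conn)
        return True
```

With this shape, a misbehaving client spamming reserve costs one list entry and one state check per extra command, instead of growing the wait queue or waking the event loop repeatedly.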


kr commented

Okay, I'm not going to hold up a bugfix release for this, but I will still try to fix it.


I'm having a similar problem. I have a worker sending the "reserve" command and then a "watch channel\r\n" every minute. After the 14th, beanstalkd starts to eat all my CPU. Steps to reproduce:

1. send the "reserve" command
2. send more than 208 bytes of data
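The two steps above, expressed as the byte stream one would write to the connection (the helper name and the exact junk length are illustrative; 208 is the threshold reported in this comment, not a documented limit):

```python
RESERVE = b"reserve\r\n"


def repro_payload(junk_len=256):
    """Build the reported repro stream: a reserve command followed by
    more than 208 bytes of arbitrary data on the same connection."""
    assert junk_len > 208, "the report requires more than 208 bytes"
    return RESERVE + b"x" * junk_len
```

Writing this payload to a beanstalkd socket in one shot should, per the report, trigger the CPU spin.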

@kr kr was assigned
@kr kr closed this in 5a74547