segfaults/OOM when hitting a process with too many signals #356

Closed
nocode opened this Issue Jun 8, 2010 · 1 comment

@nocode

nocode commented Jun 8, 2010

The following code behaves erratically when hitting a process with
a lot of signals. I've seen it eat several gigs of memory,
and I've also seen it segfault once or twice. Sometimes it runs fine; I couldn't
get any behavior consistently, but insane memory growth seems to be the most common.

# Parent/child pipe pair used only to signal readiness
r, w = IO.pipe
pid = fork do
  sr, sw = IO.pipe
  trap(:TERM) { exit(0) }
  # Each HUP writes a byte into the child's own pipe (ignoring EAGAIN/EPIPE)
  trap(:HUP) { sw.write_nonblock('.') rescue nil }
  w.syswrite('.')                  # tell the parent the traps are installed
  loop { sr.readpartial(16384) }   # drain whatever the HUP handler wrote
end

p [:child_ready, r.sysread(1)]     # block until the child is ready
100000000.times { Process.kill(:HUP, pid) }   # flood the child with HUPs
Process.kill(:TERM, pid)
p Process.waitpid2(pid)
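
My guess at the failure shape (just a hypothesis; I haven't read the Rubinius
signal code): if the VM appends one pending record per delivery and drains that
queue on the main thread, a sender that outpaces the drain grows the queue
without bound. A minimal sketch of that shape, purely illustrative:

# Hypothetical model of a queue-per-signal design; not Rubinius code.
pending = []                       # one entry appended per delivery
trap(:HUP) { pending << :HUP }     # fast producer: the kill(2) loop above

drainer = Thread.new do
  loop do
    pending.shift                  # slow consumer: drains one at a time
    sleep 0.001
  end
end

# If HUPs arrive faster than the drain rate, `pending` grows without
# bound -- consistent with the "several gigs of memory" symptom.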
@evanphx

evanphx (Member) commented Jun 16, 2010

Rework how signals are delivered. Closed by e4c92f8
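
For readers landing here later: I haven't traced e4c92f8 itself, but the usual
cure for this class of bug is to coalesce deliveries: record at most one pending
flag per signal and run the user handler once per batch, so the arrival rate no
longer drives memory use. A hedged sketch of that idea in Ruby (illustrative
only, not the actual change):

# Sketch of signal coalescing: N rapid deliveries of the same signal
# collapse into a single flag instead of N queue entries.
pending = {}

trap(:HUP)  { pending[:HUP]  = true }
trap(:TERM) { pending[:TERM] = true }

loop do
  if pending.delete(:HUP)
    # user-level HUP handler runs once per batch, not once per delivery
  end
  break if pending.delete(:TERM)
  sleep 0.01                       # bounded work regardless of signal rate
end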

This issue was closed.
