Pool not working after a long tight loop #3
I suppose it's related to `@w << '.'` not being able to write any more bytes. I suppose that in the checkout part `@w` should be drained.
I'm just pushing a char from a coroutine to a socket, which is read by another coroutine (I use the socket to notify and to have timeouts). It should just continue to write and read, no matter how many times this happens. Maybe there is a buffered IO that gets filled. That would explain why it always fails at the 65536th iteration.
@asterite, does Crystal have any limitation for the
I wrote some tests. The first one shows there is no limit to IO.pipe: I can write as many bytes as I want.

```crystal
r, w = IO.pipe

spawn do
  i = 0
  loop do
    buffer = Slice(UInt8).new(1)
    r.read(buffer)
    puts "received #{i}" if i % 65536 == 0
    i += 1
  end
end

spawn do
  loop { w << "." }
end

sleep
```

The second one outlines the problem you describe: the pool stops around 65535. What's interesting is that it doesn't raise any Timeout::Error, so there must be a blocking call somewhere.

```crystal
require "./src/pool"

class Obj; end

pool = Pool.new(capacity: 5, timeout: 1.0) { Obj.new }

spawn do
  i = 0
  loop do
    puts "iteration #{i * 5}"
    obj1 = pool.checkout
    obj2 = pool.checkout
    obj3 = pool.checkout
    obj4 = pool.checkout
    obj5 = pool.checkout
    pool.checkin(obj1)
    pool.checkin(obj2)
    pool.checkin(obj3)
    pool.checkin(obj4)
    pool.checkin(obj5)
    i += 1
  end
end

sleep
```
The problem seems to happen at the libc level, where a buffer is getting filled. What I don't understand is why it doesn't happen in my IO.pipe test: I can write as many bytes as needed and never block 😞
From the pipe(7) man page:

> In Linux versions before 2.6.11, the capacity of a pipe was the same as the system page size (e.g., 4096 bytes on i386). Since Linux 2.6.11, the pipe capacity is 65536 bytes.
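The same behavior can be reproduced outside Crystal. A minimal Python sketch of what pipe(7) describes, writing one dot at a time just like the pool does:

```python
import os

# Sketch: fill a pipe's kernel buffer without ever reading from it.
# In non-blocking mode the write fails once the buffer is full; on
# Linux >= 2.6.11 the capacity is 65536 bytes, which matches the
# iteration count where the pool stalls.
r, w = os.pipe()
os.set_blocking(w, False)  # raise BlockingIOError instead of blocking

written = 0
try:
    while True:
        written += os.write(w, b".")
except BlockingIOError:
    pass

print(f"pipe filled after {written} bytes")  # 65536 on Linux
```

A blocking writer hits the same limit; it just suspends silently instead of failing, which would explain why no Timeout::Error is ever raised.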
I think it's starting to make sense. I'm always writing a byte (on `start_one` and on `checkin`), but I'm never reading a byte when the pool has pending objects!
Hello @ysbaddaden, it's good to know you're close to identifying the root cause. I'm following this issue minute by minute; I don't want to have to restart the server every hour anymore :).
I just pushed a branch that disables the lazy instance start feature and always reads from the pipe first. That fixes the problem. Getting lazy instances to work together with the pipe fix is quite difficult, though 😭
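The fix pattern can be sketched independently of the library: treat the pipe as a counting semaphore, so a byte is read on every checkout and written on every checkin, keeping the two in balance and never letting the kernel buffer accumulate. A minimal Python sketch (hypothetical `Pool` with eager instances, as in the fix branch; this is not the library's actual Crystal code):

```python
import os
from collections import deque

class Pool:
    """Pipe-as-semaphore pool sketch: one byte in the pipe per free object."""

    def __init__(self, capacity, factory):
        self.r, self.w = os.pipe()
        self.objects = deque(factory() for _ in range(capacity))
        os.write(self.w, b"." * capacity)  # one token per available object

    def checkout(self):
        os.read(self.r, 1)             # consume a token FIRST (blocks if empty)
        return self.objects.popleft()

    def checkin(self, obj):
        self.objects.append(obj)
        os.write(self.w, b".")         # release a token

pool = Pool(5, object)
for i in range(70_000):                # well past the old 65536 stall point
    objs = [pool.checkout() for _ in range(5)]
    for o in objs:
        pool.checkin(o)
print("done")
```

Because the pipe never holds more than `capacity` bytes, the 65536-byte limit can no longer be reached, no matter how many iterations run.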
Fixed in 4630a55
I have a REST API using the pool. We were doing some stress testing on the lib and found this behavior:
the tight loop always stops working after the 65536th iteration.