
Stream next reset inac timer #29

wants to merge 3 commits

2 participants

fdmanana commented Jan 9, 2011

Hi Chandru,

The following one-line patch fixes the timeout issues I told you about, which I hit when using many workers with the {stream_to, {pid(), once}} option on a local and fast network.

Let me know if you agree with it.

Filipe Manana

fdmanana added some commits Jan 9, 2011
@fdmanana fdmanana Reset inactivity timeout when stream_next is invoked
This avoids plenty of connection inactivity timeouts. From a logical point of view,
the inactivity timeout should be reset not only when data is received from the socket
but also when the client asks for more data.
@fdmanana fdmanana Don't trigger new inactivity timer when socket data is received and caller controls the socket

As in synchronous programming, it makes sense to start an inactivity timer only when the caller
does a "recv" call, and to cancel the timer as soon as data is received from the socket.

No longer one line :)
Please see the 2 commit messages for an explanation.



I'm not sure about this patch set Filipe. What happens if the calling process just dies and never asks for more? How will this connection get closed?

I suppose eventually the server will close the connection, but there is a corner case. If a firewall between the client and server drops its state table at the same time, we will never receive a TCP FIN for this socket, and we'll never close it down.

The other thing is set_inac_timer cancels the timer before setting it, so the call to cancel_timer is a bit redundant. Maybe the patch is just to add the {eat_message, timeout} to set_inac_timer?
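The race that an {eat_message, timeout} option would guard against can be illustrated outside Erlang: cancelling a timer can lose the race against the timeout already having fired, leaving a stale timeout event that must be recognised and discarded. A minimal Java sketch of that idea, using a generation counter in place of flushing the stale message from the process mailbox (all names here are hypothetical, not ibrowse code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of detecting stale timeout events: each arm/cancel bumps a
// generation counter, so a timeout tagged with an old generation is
// recognised as stale and ignored instead of killing the connection.
class InactivityTimer {
    private final AtomicLong generation = new AtomicLong();
    private volatile long armedGen = -1;

    // Arm the inactivity timer; the returned tag would accompany
    // the eventual timeout event.
    long arm() {
        armedGen = generation.incrementAndGet();
        return armedGen;
    }

    // Cancel: bump the generation so any timeout already in flight
    // from the old timer is seen as stale.
    void cancel() {
        generation.incrementAndGet();
        armedGen = -1;
    }

    // True only if a delivered timeout event belongs to the timer
    // that is currently armed.
    boolean isCurrent(long gen) {
        return gen == generation.get() && gen == armedGen;
    }
}
```

This is the same effect that cancelling the timer and then eating any already-delivered timeout message achieves in an Erlang mailbox.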


Hi Chandru,

"I'm not sure about this patch set Filipe. What happens if the calling process just dies and never asks for more? How will this connection get closed?"

For my case it's not a problem, since the worker is linked to my process: once my process dies, the worker dies and the socket is closed. If the worker is not linked to the user's process, then I guess it's the same issue as in most other languages/VMs.

My line of thought here is easier to understand if we forget Erlang active sockets and imagine instead that you're doing C or Java and want a timeout of 30 secs:

1) Every time we do a socket "recv" call, we start a timer, and if within 30 secs we receive no data, we trigger a timeout error;

2) If during the "recv" call we receive data, we clear the timer we set, but we don't start a new timer. We only start a new timer when the next "recv" call is made.

Basically, for me ibrowse:stream_next/1 is like a "recv" call.
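The two steps above can be sketched in Java (the language the analogy invokes). This is a model of the proposed semantics, not ibrowse's actual code, and all names are hypothetical: the timer is armed only while a "recv" is pending, cancelled when data arrives, and not re-armed until the next "recv".

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicReference;

// Model of the proposed timeout semantics: arm on recv, cancel on
// data, never re-arm until the caller asks for more.
class RecvTimeoutModel {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final AtomicReference<ScheduledFuture<?>> pending =
            new AtomicReference<>();
    volatile boolean timedOut = false;

    // Step 1: the caller asks for data, so arm the inactivity timer.
    void recv(long timeoutMillis) {
        pending.set(scheduler.schedule(
                () -> timedOut = true, timeoutMillis,
                TimeUnit.MILLISECONDS));
    }

    // Step 2: data arrived, so cancel the timer, but do NOT arm a
    // new one; the connection sits untimed until the next recv().
    void onData() {
        ScheduledFuture<?> t = pending.getAndSet(null);
        if (t != null) t.cancel(false);
    }

    boolean timerArmed() {
        return pending.get() != null;
    }

    void shutdown() {
        scheduler.shutdownNow();
    }
}
```

In this model, ibrowse:stream_next/1 corresponds to recv(): it is the point where the caller declares interest in more data, so it is the natural point to (re)start the inactivity timer.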

I agree that adding the {eat_message, timeout} to the cancel_timer call done by set_inac_timer is a good idea. But I don't think it solves the timeout issues I have on a local, fast LAN. I don't have this issue when not using the {stream_to, {pid(), once}} option (it doesn't happen with {stream_to, pid()}).

Does my explanation make sense? (I might be missing something)

thanks :)


Ok, I guess if the caller wants to bypass the load balancing mechanism, then it is their responsibility to clean up. I've integrated this patch. Thank you.

This issue was closed.