
Starting/stopping listeners on a single port in a tight loop crashes ranch_listener_sup #74

Closed
yfyf opened this issue Jan 24, 2014 · 6 comments

Comments


yfyf commented Jan 24, 2014

This is somewhat corner-case behavior, but I thought you might want to know.

Observed on current master (c1d0c45) and tag 0.8.3, running R15B03.

Starting and then immediately stopping Ranch listeners seems to cause a crash (I know the module arguments make no sense; they're just there to pass the code:ensure_loaded/1 check):

> [
    begin
        {ok, _} = ranch:start_listener(foo, 100,
            ranch_tcp, [{port, 8080}], ranch_tcp, []),
        ok = ranch:stop_listener(foo),
        io:format("~p~n", [N])
    end ||

    N <- lists:seq(1, 100)
].

1
2
3
<..>
21
** exception error: no match of right hand side value
    {error,{shutdown,{child,undefined,
        {ranch_listener_sup,foo},
        {ranch_listener_sup,start_link,
            [foo,100,ranch_tcp,[{port,8080}],ranch_tcp,[]]},
        permanent,infinity,supervisor,
        [ranch_listener_sup]}}}

The key to reproducing this is a fairly large pool size and a single fixed port, which suggests the OS is not happy about binding sockets to the same port so rapidly. Hopefully you can reproduce it.

I bumped into this because I was running lots of small tests in which Cowboy was stopped and started multiple times for each of them, and I was getting nondeterministic failures.


yfyf commented Feb 5, 2014

@essen bump.


essen commented Apr 22, 2014

I got a variant of this where I get an already_started error when the listener should already have been stopped. I will investigate tomorrow and try to put a fix in the next version.


essen commented Apr 24, 2014

I don't think I can do anything about your particular error. It happens even if I set {reuseaddr, true}. It still happens in R17, although the actual error is different: you get {error, eaddrinuse} instead.

For testing I'd suggest using {port, 0} to get a port number dynamically and/or avoid restarting everything all the time (your tests will run faster that way too).
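For reference, a minimal sketch of the dynamic-port approach. ranch:get_port/1 is part of the Ranch API; the listener name foo and the empty protocol options are illustrative, and the ranch application is assumed to be started:

```erlang
%% Sketch: pass {port, 0} so the OS assigns a free port,
%% then ask Ranch which port was actually bound.
%% Listener name and options are illustrative.
{ok, _} = ranch:start_listener(foo, 100,
    ranch_tcp, [{port, 0}], ranch_tcp, []),
Port = ranch:get_port(foo),               %% the OS-assigned port
io:format("listening on ~p~n", [Port]),
ok = ranch:stop_listener(foo).
```

Because every test run gets a fresh port, repeated start/stop cycles no longer race against the OS releasing the previous binding.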

Closing this ticket though. Thanks!

@essen essen closed this as completed Apr 24, 2014

yfyf commented Apr 24, 2014

Could you elaborate on what exactly is causing the error though?


essen commented Apr 24, 2014

I have no idea. I suppose the OS doesn't immediately allow the socket to be reused, and if you try to rebind too quickly you get that error; but as this is outside the realm of things I can fix, I didn't investigate further.
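That suspicion can be probed without Ranch at all. A hedged sketch using plain gen_tcp from OTP; behavior varies by OS and timing, so no particular iteration count is guaranteed to fail, and the module name is made up:

```erlang
-module(rebind_sketch).
-export([loop/1]).

%% Repeatedly bind and close a fixed port with plain gen_tcp.
%% On some systems, rebinding shortly after a close can return
%% {error, eaddrinuse} even with {reuseaddr, true} set,
%% which would place the failure at the OS level, not in Ranch.
loop(0) ->
    ok;
loop(N) ->
    case gen_tcp:listen(8080, [{reuseaddr, true}]) of
        {ok, LSock} ->
            ok = gen_tcp:close(LSock),
            loop(N - 1);
        {error, eaddrinuse} = Err ->
            Err
    end.
```

Running rebind_sketch:loop(100) in a tight shell loop mimics the original reproduction without any supervisor in the way.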


essen commented Apr 24, 2014

... And the other issue I noticed was actually an issue in the test suite itself. Sorry for the noise on that one.
