
optimization: {active, true} only set when needed #482

Closed
wants to merge 1 commit into from

3 participants

@DBarney

Currently all messages sent to a websocket_handler process will trigger
the option {active, true} to be set on the socket. This causes
unnecessary time to be spent setting an option that is already set.
This is especially true for websockets that primarily listen for
messages from other processes and forward them to the connected client.
{active, true} only needs to be set on the socket after a data packet
has been received, and once at the start of the websocket to "prime the
pump".

@essen
Owner

{active, once} you mean?

@DBarney

Yes I did. Would you like me to correct the commit message and then create a new pull request? Or just create an empty commit with the corrected message?

@essen
Owner

If you amend the commit and push -f it will update this PR. Feel free to update. :)

@DBarney

OK I amended the commit and pushed. Thanks Essen.

@essen
Owner

There's a spacing issue on line 181.

I'm OK with this, but we probably want to disable {active, once} and flush any messages if the handler decided to shut down in websocket_data.

@DBarney DBarney optimization: {active, once} only set when needed
Currently all messages sent to a websocket_handler process will trigger
the option {active, once} to be set on the socket. This causes
unnecessary time to be spent setting an option that is already set.
This is especially true for websockets that primarily listen for
messages from other processes and forward them to the connected client.
{active, once} only needs to be set on the socket after a data packet
has been received, and once at the start of the websocket to "prime the
pump".
9dd0028
@DBarney

OK, I fixed the spacing issue, and I also added flushing of any messages that could have arrived as a result of the socket being in {active, once}. The flush is called in websocket_close because there are multiple places where the socket is put in {active, once} and could then be closed: websocket_data, websocket_payload, websocket_payload_loop, handler_call, websocket_dispatch.

Thanks Essen.

@essen
Owner

I'll run it against autobahn and see if it improves anything.

@essen
Owner

Autobahn doesn't seem to say it's better. Do you have measurements that show it's improving things?

@DBarney

yes just a second, I have to run them again.

@DBarney

I connected a websocket client to cowboy, both before and after the patch, and sent it 1000 messages. The messages in this case aren't acted on, but that is only to highlight the differences in speed between the two versions.

So here are the results: https://gist.github.com/DBarney/5374070

Granted, it's not a huge improvement in speed, because it is already very fast.

But, as the commit message says: "This is especially true for websockets that primarily listen for messages from other processes and forward them to the connected client." That is exactly what the service I am optimizing does, which is why this was identified as an area where we could squeeze extra performance out of our servers.

Thanks Essen.

@essen
Owner

How do I read this? I don't understand the values, though I guess it says it calls setopts less (we already knew that).

@DBarney

You have never used fprof? If you haven't, you really should look into it; it's built into Erlang and it's really awesome.

here is how to interpret the results:
http://www.erlang.org/doc/man/fprof.html#id79359
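
Roughly, such a measurement can be taken along these lines (a sketch; run_test/0 stands in for whatever sends the 1000 messages):

    fprof:trace(start),
    run_test(),                                  %% placeholder for the test run
    fprof:trace(stop),
    fprof:profile(),                             %% build call tree from fprof.trace
    fprof:analyse([{dest, "fprof.analysis"}]).   %% write per-function timings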

Here is a summary:

  • without the patch: 62.619 milliseconds per 1000 messages
  • with the patch: 16.963 milliseconds per 1000 messages

@essen
Owner

I've used fprof sometimes. I can't use that for anything more than "this seems to be the right thing to do", though. It only says we spend less time in setopts (we know!).

Can you measure the time it takes for the client to send everything (and get a final reply possibly) from the client side instead?
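
For example, something as simple as wrapping the client side in timer:tc would do; send_and_await_reply/2 below is a hypothetical helper that sends the messages over an already-open connection and waits for a final reply:

    %% Hypothetical client-side wall-clock measurement.
    measure(Conn) ->
        {MicroSecs, ok} = timer:tc(fun() -> send_and_await_reply(Conn, 1000) end),
        io:format("1000 messages in ~.3f ms~n", [MicroSecs / 1000]).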

@DBarney

sure, just give me a minute and I'll get that for you.

@essen
Owner

Any news?

@essen
Owner

Hi. Still waiting, hoping for your message. :)

@essen
Owner

Hi.

@AeroNotix

Any update on this?

@DBarney

Sorry about not getting back to you for so long; I have been really busy this last year and just haven't gotten around to writing up a full benchmark to show the difference between the two implementations.

But thinking about it now, the improvement is probably close to nothing. There are (probably) other areas that can be sped up before a micro-optimization like this one even needs to be considered.

So I'm going to close this for now.

@DBarney DBarney closed this
@essen
Owner

No problem, thanks for trying!

Also the new {active, N} option is gonna be a more interesting optimization when we can start using it.
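
For reference, a rough sketch of the {active, N} pattern on a plain gen_tcp socket (standard OTP socket API, not Cowboy code; handle_data/1 is hypothetical):

    %% With {active, N} the socket delivers up to N packets as messages, then
    %% sends {tcp_passive, Socket} and goes passive until re-armed, so setopts
    %% runs once per N packets instead of once per packet.
    loop(Socket) ->
        receive
            {tcp, Socket, Data} ->
                handle_data(Data),
                loop(Socket);
            {tcp_passive, Socket} ->
                ok = inet:setopts(Socket, [{active, 100}]),
                loop(Socket);
            {tcp_closed, Socket} ->
                ok
        end.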

@DBarney

Yeah, it will probably even be a better solution than doing {active, once} every time a new data packet is needed.

Commits on Apr 12, 2013
  1. @DBarney

    optimization: {active, once} only set when needed

    DBarney authored
    Currently all messages sent to a websocket_handler process will trigger
    the option {active, once} to be set on the socket. This causes
    unnecessary time to be spent setting an option that is already set.
    This is especially true for websockets that primarily listen for
    messages from other processes and forward them to the connected client.
    {active, once} only needs to be set on the socket after a data packet
    has been received, and once at the start of the websocket to "prime the
    pump".
Showing 23 additions and 10 deletions.

src/cowboy_websocket.erl (+23 −10)
@@ -137,7 +137,7 @@ upgrade_error(Req, Env) ->
-> {ok, Req, cowboy_middleware:env()}
| {suspend, module(), atom(), [any()]}
when Req::cowboy_req:req().
-websocket_handshake(State=#state{transport=Transport, key=Key},
+websocket_handshake(State=#state{transport=Transport, socket=Socket, key=Key},
Req, HandlerState) ->
Challenge = base64:encode(crypto:sha(
<< Key/binary, "258EAFA5-E914-47DA-95CA-C5AB0DC85B11" >>)),
@@ -149,6 +149,7 @@ websocket_handshake(State=#state{transport=Transport, key=Key},
%% Flush the resp_sent message before moving on.
receive {cowboy_req, resp_sent} -> ok after 0 -> ok end,
State2 = handler_loop_timeout(State),
+ Transport:setopts(Socket, [{active, once}]),
handler_before_loop(State2#state{key=undefined,
messages=Transport:messages()}, Req2, HandlerState, <<>>).
@@ -156,15 +157,11 @@ websocket_handshake(State=#state{transport=Transport, key=Key},
-> {ok, Req, cowboy_middleware:env()}
| {suspend, module(), atom(), [any()]}
when Req::cowboy_req:req().
-handler_before_loop(State=#state{
- socket=Socket, transport=Transport, hibernate=true},
+handler_before_loop(State=#state{hibernate=true},
Req, HandlerState, SoFar) ->
- Transport:setopts(Socket, [{active, once}]),
{suspend, ?MODULE, handler_loop,
[State#state{hibernate=false}, Req, HandlerState, SoFar]};
-handler_before_loop(State=#state{socket=Socket, transport=Transport},
- Req, HandlerState, SoFar) ->
- Transport:setopts(Socket, [{active, once}]),
+handler_before_loop(State, Req, HandlerState, SoFar) ->
handler_loop(State, Req, HandlerState, SoFar).
-spec handler_loop_timeout(#state{}) -> #state{}.
@@ -181,10 +178,12 @@ handler_loop_timeout(State=#state{timeout=Timeout, timeout_ref=PrevRef}) ->
-> {ok, Req, cowboy_middleware:env()}
| {suspend, module(), atom(), [any()]}
when Req::cowboy_req:req().
-handler_loop(State=#state{socket=Socket, messages={OK, Closed, Error},
- timeout_ref=TRef}, Req, HandlerState, SoFar) ->
+handler_loop(State=#state{transport=Transport, socket=Socket,
+ messages={OK, Closed, Error}, timeout_ref=TRef},
+ Req, HandlerState, SoFar) ->
receive
{OK, Socket, Data} ->
+ Transport:setopts(Socket, [{active, once}]),
State2 = handler_loop_timeout(State),
websocket_data(State2, Req, HandlerState,
<< SoFar/binary, Data/binary >>);
@@ -455,9 +454,9 @@ is_utf8(_) ->
websocket_payload_loop(State=#state{socket=Socket, transport=Transport,
messages={OK, Closed, Error}, timeout_ref=TRef},
Req, HandlerState, Opcode, Len, MaskKey, Unmasked) ->
- Transport:setopts(Socket, [{active, once}]),
receive
{OK, Socket, Data} ->
+ Transport:setopts(Socket, [{active, once}]),
State2 = handler_loop_timeout(State),
websocket_payload(State2, Req, HandlerState,
Opcode, Len, MaskKey, Unmasked, Data);
@@ -646,12 +645,26 @@ websocket_send_many([Frame|Tail], State) ->
Error -> Error
end.
+%% Turn off {active,once} and clear out any messages that could have
+%% been sent by having the socket in that state.
+-spec websocket_flush(#state{}) -> ok.
+websocket_flush(#state{transport=Transport, socket=Socket,
+ messages={OK, Closed, Error}}) ->
+ Transport:setopts(Socket, [{active, false}]),
+ receive
+ {OK, Socket, _Data} -> ok;
+ {Closed, Socket} -> ok;
+ {Error, Socket, _Reason} -> ok
+ after 0 -> ok
+ end.
+
-spec websocket_close(#state{}, Req, any(),
{atom(), atom()} | {remote, close_code(), binary()})
-> {ok, Req, cowboy_middleware:env()}
when Req::cowboy_req:req().
websocket_close(State=#state{socket=Socket, transport=Transport},
Req, HandlerState, Reason) ->
+ websocket_flush(State),
case Reason of
{normal, _} ->
Transport:send(Socket, << 1:1, 0:3, 8:4, 0:1, 2:7, 1000:16 >>);