
Websocket client uses an unbounded buffer #1967

Closed
bowlofeggs opened this issue Feb 8, 2021 · 9 comments
Labels
A-awc project: awc C-perf-mem Category: performance / memory / resources

Comments

@bowlofeggs

Expected Behavior

I expect that writing bytes into the websocket client would block or await (for an async API) until there is room in a bounded buffer, so as to constrain the memory consumed by the client.

Current Behavior

It seems that the client accepts bytes without bound, so it will consume memory without limit if the rate of bytes in exceeds the rate of bytes out.

Possible Solution

Replace the client's buffer with a bounded buffer.
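
For illustration, here's a minimal sketch of the kind of backpressure I have in mind (not awc's actual API, just a bounded futures channel): the producer awaits whenever the buffer is full, so memory stays capped.

```rust
use futures::{channel::mpsc, SinkExt, StreamExt};

#[actix_rt::main]
async fn main() {
    // Bounded channel: `send` waits for free capacity instead of growing
    // an unbounded queue.
    let (mut tx, mut rx) = mpsc::channel::<Vec<u8>>(32);

    actix_rt::spawn(async move {
        while let Some(chunk) = rx.next().await {
            // Stand-in for the (possibly slow) network write.
            println!("wrote {} bytes", chunk.len());
        }
    });

    for _ in 0..1_000 {
        // Awaits here whenever the 32-slot buffer is full, so the producer
        // can never outrun the consumer without limit.
        tx.send(vec![0u8; 1024]).await.unwrap();
    }
}
```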

Steps to Reproduce (for bugs)

I've forked the example repository and made a small edit to the code so that it sends bytes rapidly into the client. I also adjusted the server to give a longer heartbeat timeout so that the effect is more obvious. You can see those changes here:

bowlofeggs/examples@09bceb4

Context

I've noticed memory climbing in a process that uses the actix websocket client. I discovered that the outbound data rate can be constrained at times, and it is not easy to guess or predict a rate at which data can safely be sent into the client.

Your Environment

awc-2.0.3 on Linux 5.10.12.

  • Rust Version (i.e., output of rustc -V): rustc 1.47.0
  • Actix Web Version: awc-2.0.3

See also: #1956

@fakeshadow
Contributor

fakeshadow commented Feb 8, 2021

I don't get the question. StreamHandler is what its name says: it's just a stream handler.
The moment you return from it, you are actively asking for the next frame.

If you want to throttle or rate limit, you should do it inside the handler and not leave it as a blank impl.
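
If it helps, here is a rough, self-contained sketch of what "throttle inside the handler" could look like, using a plain integer stream instead of the example's ws frames (the drop-when-too-fast policy is only an illustration):

```rust
use std::time::{Duration, Instant};

use actix::prelude::*;

struct Throttled {
    last: Instant,
}

impl Actor for Throttled {
    type Context = Context<Self>;
}

// A StreamHandler asks for the next item as soon as `handle` returns,
// so any rate limiting has to happen inside `handle` itself.
impl StreamHandler<u32> for Throttled {
    fn handle(&mut self, item: u32, _ctx: &mut Context<Self>) {
        if self.last.elapsed() < Duration::from_millis(10) {
            // Illustrative policy: ignore items that arrive too fast.
            return;
        }
        self.last = Instant::now();
        println!("processed {}", item);
    }

    fn finished(&mut self, _ctx: &mut Context<Self>) {
        // Stream ended; shut the demo down.
        System::current().stop();
    }
}

fn main() {
    let sys = System::new("throttle-demo");
    Throttled::create(|ctx| {
        // Feed the actor a fast stream of items.
        ctx.add_stream(futures::stream::iter(0..1_000u32));
        Throttled { last: Instant::now() }
    });
    sys.run().unwrap();
}
```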

@fakeshadow
Contributor

The default max frame size for the ws codec is 64 KB, so there is no unbounded-buffer case there.
A frame beyond this size would be returned as an error unless you actively change the codec setting.
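
For reference, raising that limit should look roughly like this; I'm assuming the codec exposes a max_size builder method here, so check the exact name against the actix-http version you use:

```rust
use actix_http::ws::Codec;

fn main() {
    // Assumed builder-style API: raise the per-frame limit from the
    // default 64 KiB to 1 MiB. Verify against your actix-http version.
    let _codec = Codec::new().max_size(1024 * 1024);
}
```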

@bowlofeggs
Author

Hi @fakeshadow, I'm more concerned with the sending side than the receiving side. The sink on this line seems to potentially have an unbounded buffer for outbound traffic:

https://github.com/bowlofeggs/examples/blob/09bceb4831fe59caddb6191a8e4bf63c25a1ce3f/websocket/src/client.rs#L33

@fakeshadow
Contributor

fakeshadow commented Feb 8, 2021

But you control the sending? Sorry, you lost me.

You are using do_send to bypass the bounded mailbox capacity, and this is exactly what you get.

reference:
https://docs.rs/actix/0.10.0/actix/struct.Addr.html#method.do_send

@fakeshadow
Contributor

You can use send or try_send, which respect the capacity, or actually control your sending rate.
Actix in general doesn't interfere with how you want to send data through a stream.
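
To make the difference concrete, here is a small self-contained sketch with a toy actor (not the ws client) contrasting the three paths:

```rust
use actix::prelude::*;

#[derive(Message)]
#[rtype(result = "()")]
struct Chunk(Vec<u8>);

struct SlowWriter;

impl Actor for SlowWriter {
    type Context = Context<Self>;
}

impl Handler<Chunk> for SlowWriter {
    type Result = ();

    fn handle(&mut self, msg: Chunk, _ctx: &mut Context<Self>) {
        // Stand-in for a slow network write.
        println!("writing {} bytes", msg.0.len());
    }
}

#[actix_rt::main]
async fn main() {
    let addr = SlowWriter.start();

    // `do_send` skips the mailbox capacity check: it always enqueues, so a
    // fast producer can grow the mailbox without bound.
    addr.do_send(Chunk(vec![0; 1024]));

    // `send` resolves only after the message is handled, so awaiting it
    // backpressures the producer.
    addr.send(Chunk(vec![0; 1024])).await.unwrap();

    // `try_send` returns an error instead of waiting when the mailbox is full.
    if let Err(e) = addr.try_send(Chunk(vec![0; 1024])) {
        eprintln!("mailbox full or closed: {}", e);
    }
}
```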

@bowlofeggs
Author

bowlofeggs commented Feb 8, 2021

Hi again @fakeshadow, thanks for the pointer. It's probably obvious to you, but I am new to Actix and so am unfamiliar with these APIs and was starting with the example to learn. Thanks for pointing out the send call to me. I've modified the code to use it here:

bowlofeggs/examples@96497a0

Indeed, the buffer does seem reasonably constrained for a time, but eventually the server stopped receiving data and the client process's memory usage started to rise without bound again. I also didn't see an error returned from send. I don't know what condition triggers this, but I have found a way to trigger it manually by setting a firewall rule to drop packets inbound to the example server's port (i.e., packets from the client to the server are dropped and never reach the server). I would expect this to cause the calls to send.await to block until traffic flows again, but the client seems to continue accepting messages and storing them somewhere. Does that seem wrong to you, or is something else still wrong about the way I am using the library here?

As an aside, do you think it would be good to either add a comment to the example repo highlighting this, or maybe convert it to use send similarly to how I've done in my most recent commit? I could send a patch for either of those if that seems helpful to you.

@bowlofeggs
Author

bowlofeggs commented Feb 8, 2021

To clarify what I'm doing a little more, here are the steps to reproduce what I am seeing:

  • Use my fork of the example repo, and this commit: bowlofeggs/examples@96497a0
  • Start the server with cargo run --bin websocket-server.
  • Start the client with cargo run --bin websocket-client.
  • Use a tool to monitor the client's memory (something like htop or top, or whatever else you may prefer).
  • Once you see traffic flowing (the server should be printing "test" a lot), you may notice that the client's memory is well behaved (I saw it using something like 20 KiB).
  • If you let it run a while, you might see that the server stops printing messages. If this happens, you should observe the client's memory rising.
  • If you don't observe this, you can trigger the symptom by setting a firewall rule in your operating system to block packets inbound to the server. On Linux, I did that like this: sudo iptables -A INPUT -p tcp -d 127.0.0.1 --dport 8080 -j DROP.

This firewall rule is a way to simulate a flaky network connection between client and server. Obviously I would not expect the messages to arrive at the server with this rule in place, but I would expect the client to stop accepting new messages and to stay relatively low on memory usage in this condition. What do you think?

@fakeshadow
Contributor

fakeshadow commented Feb 8, 2021

> Hi again @fakeshadow, thanks for the pointer. It's probably obvious to you, but I am new to Actix and so am unfamiliar with these APIs and was starting with the example to learn.

Yeah, the examples are sometimes rough and not best practice. Sorry about that; some of them surely can and should be improved.

> would expect this to cause the calls to send.await to block until traffic flows again, but it seems to continue to accept messages and store them somewhere. Does that seem wrong to you, or is something else still wrong about the way I am using the library here?

An actor runs on its own, and the stream logic is separate from the send-message logic. I didn't read the source, so I could be wrong, but I believe that when you call send, the message arrives and ctx.0.write is called as long as the actor's mailbox is available. So all the unsent messages would still be stored, waiting to be sent (they don't know anything about the network status).
The right thing to do here is to pause/stop your actor from receiving messages with a rate-limit setting.

Reference:
https://docs.rs/actix/0.10.0/actix/struct.Context.html#method.stop
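
As a very rough illustration of that idea (a toy actor, not how the ws client example is actually structured), you could track a backlog counter and stop the actor once it crosses a threshold:

```rust
use actix::prelude::*;

#[derive(Message)]
#[rtype(result = "()")]
struct Outbound(String);

struct WsClient {
    // Hypothetical bookkeeping: frames accepted but not yet confirmed
    // as written to the network.
    backlog: usize,
}

impl Actor for WsClient {
    type Context = Context<Self>;

    fn stopped(&mut self, _ctx: &mut Context<Self>) {
        // End the demo once the actor has shut itself down.
        System::current().stop();
    }
}

impl Handler<Outbound> for WsClient {
    type Result = ();

    fn handle(&mut self, _msg: Outbound, ctx: &mut Context<Self>) {
        self.backlog += 1;
        if self.backlog > 1_000 {
            // Crude circuit breaker: stop the actor rather than buffer
            // forever. A real rate limiter would pause/resume instead.
            ctx.stop();
        }
        // ... hand the frame to the underlying sink here ...
    }
}

fn main() {
    let sys = System::new("backlog-demo");
    let addr = WsClient { backlog: 0 }.start();
    for _ in 0..2_000 {
        addr.do_send(Outbound("test".into()));
    }
    sys.run().unwrap();
}
```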

> As an aside, do you think it would be good to either add a comment to the example repo highlighting this, or maybe converting

PRs and issues to improve the examples are welcome. Like I said, some of them are indeed rough and could use some improvement.

@robjtede robjtede added A-awc project: awc C-perf-mem Category: performance / memory / resources labels Dec 7, 2021
@robjtede robjtede closed this as completed Dec 7, 2021
@bowlofeggs
Author

Was this fixed?
