
[Question]: How to optimise the memory usage for websocket client #111

Closed
coder3101 opened this issue Dec 19, 2022 · 5 comments

Comments

@coder3101

Hi,
I am writing a load testing tool, and my use case demands opening many websocket connections to a server. Each connection is long-lived and has a keep-alive send/receive mechanism: every client sends a "keep-alive" text message at an interval of 4 seconds, and the server also sends keep-alive text plus some events as they occur.
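
For context, here is a rough sketch of what one such connection loop looks like (simplified and illustrative only, assuming async-tungstenite with the `tokio-runtime` feature; the constant and function names are made up for the example):

```rust
use std::time::Duration;

use async_tungstenite::{tokio::connect_async, tungstenite::Message};
use futures::{SinkExt, StreamExt};

const KEEP_ALIVE_INTERVAL: Duration = Duration::from_secs(4);

async fn run_client(url: &str) -> Result<(), async_tungstenite::tungstenite::Error> {
    let (ws, _response) = connect_async(url).await?;
    // Split into independent write and read halves so keep-alives can be
    // sent while concurrently waiting for server messages.
    let (mut write, mut read) = ws.split();
    let mut ticker = tokio::time::interval(KEEP_ALIVE_INTERVAL);

    loop {
        tokio::select! {
            // Send our keep-alive text message every 4 seconds.
            _ = ticker.tick() => {
                write.send(Message::Text("keep-alive".into())).await?;
            }
            // Handle keep-alive text and events pushed by the server.
            incoming = read.next() => {
                match incoming {
                    Some(Ok(Message::Text(_text))) => { /* server keep-alive or event */ }
                    Some(Ok(_)) => {}
                    Some(Err(e)) => return Err(e),
                    None => return Ok(()), // server closed the connection
                }
            }
        }
    }
}
```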

Running that load tool on a k8s pod with a 4G memory limit results in OOM after opening up 5000 connections. I suspect the memory usage shoots up because of the websocket connections: when I ran the tool without opening any connections, memory usage stayed well under 500M.
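
The tool fans these connections out as tokio tasks, roughly like this (again just an illustrative sketch, not the actual code: one task per connection, using the hypothetical `run_client` loop from above):

```rust
async fn run_load(url: &str, connections: usize) {
    let mut handles = Vec::with_capacity(connections);
    for _ in 0..connections {
        let url = url.to_owned();
        // One lightweight tokio task per websocket connection.
        handles.push(tokio::spawn(async move {
            if let Err(e) = run_client(&url).await {
                eprintln!("connection error: {e}");
            }
        }));
    }
    // Keep all connections running until they finish or fail.
    for handle in handles {
        let _ = handle.await;
    }
}
```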

I was wondering if there is a way to reduce the client's memory usage, since I would like to open as many connections as possible with limited resources without triggering OOM.

I am using the tokio runtime with this library. I also tried a few other websocket libraries, but did not find any significant difference.

Thanks

coder3101 changed the title from "[Question]: How to further optimise the memory usage for websocket client" to "[Question]: How to optimise the memory usage for websocket client" on Dec 19, 2022
@sdroege
Owner

sdroege commented Dec 20, 2022

Use a profiler and check where the memory is actually used up, and then we can look at optimizing those things :)
Do you have a testcase for this that you can share btw?

@coder3101
Author

I profiled using Instruments and found that native-tls on Linux was causing too many memory allocations. I replaced it with rustls, and now 10 times less memory is being allocated.
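
For reference, the change was just a dependency/feature switch in Cargo.toml, roughly like this (feature names and version number are from memory, so please check async-tungstenite's manifest for the exact ones):

```toml
[dependencies]
# Before: TLS through the platform library (OpenSSL on Linux)
# async-tungstenite = { version = "0.18", features = ["tokio-runtime", "tokio-native-tls"] }

# After: pure-Rust TLS via rustls
async-tungstenite = { version = "0.18", features = ["tokio-runtime", "tokio-rustls-native-certs"] }
```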

During profiling I also found that connect_async was the highest contributor to memory usage.

@sdroege
Owner

sdroege commented Dec 23, 2022

During profiling I also found that connect_async was the highest contributor to memory usage.

Which part?

@coder3101
Author

coder3101 commented Dec 23, 2022

For native-tls, the connect_async handshake consumes the most; for some reason this is also more prevalent on Linux.

[Screenshot: allocation profile with native-tls (2022-12-23, 7:34 PM)]

For rustls-tls:

[Screenshot: allocation profile with rustls-tls (2022-12-23, 7:36 PM)]

@sdroege
Owner

sdroege commented Dec 23, 2022

I see. Do you see any possibility of optimization in there? :)
