[Question] Need recommendations for HTTP Load test planning #48
Comments
If the service being tested and the network are fast enough, it is very easy to hit 100 RPS, even 1000+ RPS. You don't need to do premature optimization. But indeed, things like garbage collection and the goroutine scheduler have an impact on the response time, adding several milliseconds to it and making it inaccurate. Usually, the service should record how long it takes to handle a request in its access log. If you are using Golang's built-in HTTP client with pooled connections, make sure the pool size is greater than the number of users, to avoid internal locking. If you want to do profiling on the client side, have a look at profiling.
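For reference, a minimal sketch of sizing the pool in Go's built-in client; the limits, timeout, and helper name are illustrative, not values from this thread:

```go
package main

import (
	"net/http"
	"time"
)

// newLoadTestClient returns an http.Client whose keep-alive pool is
// sized so concurrent users don't contend for idle connections.
// Go's Transport defaults MaxIdleConnsPerHost to 2, which throttles
// high-concurrency load against a single host.
func newLoadTestClient(users int) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			MaxIdleConns:        users, // idle connections kept across all hosts
			MaxIdleConnsPerHost: users, // per-host pool; keep it above the user count
			IdleConnTimeout:     90 * time.Second,
		},
		Timeout: 10 * time.Second,
	}
}

func main() {
	client := newLoadTestClient(100) // e.g. 100 users on this slave
	_ = client
}
```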
Oops, my bad, sorry. I meant 100K RPS.
Greater than the number of users per slave, or overall?
I was simply using the default client as a proof of concept. The service reported 2 ms (p99) to handle a request, while the loader reported 400 ms. I believe the difference is due to blocking at the client, and hence I am looking for the best possible way to generate load at the slave.
The number of users you set in the web UI is divided by the number of slaves. If you run ten users with two slaves, each slave will spawn five goroutines to run your task function in a loop.
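As an illustration of that loop, a minimal boomer task might look like the following; the target URL, task name, and timings are assumptions:

```go
package main

import (
	"io"
	"net/http"
	"time"

	"github.com/myzhan/boomer"
)

var client = &http.Client{Timeout: 10 * time.Second}

// httpGet is the task function. Each goroutine the slave spawns calls
// it repeatedly in a loop, so it issues one request per call and
// reports the result back to the Locust master.
func httpGet() {
	start := time.Now()
	resp, err := client.Get("http://target.example.com/") // hypothetical target
	elapsed := time.Since(start).Nanoseconds() / int64(time.Millisecond)
	if err != nil {
		boomer.RecordFailure("http", "get /", elapsed, err.Error())
		return
	}
	io.Copy(io.Discard, resp.Body) // drain so the keep-alive connection is reused
	resp.Body.Close()
	boomer.RecordSuccess("http", "get /", elapsed, resp.ContentLength)
}

func main() {
	boomer.Run(&boomer.Task{Name: "get /", Weight: 1, Fn: httpGet})
}
```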
You can do a CPU profile over a longer duration to confirm that. If you hit lock contention inside the http client, you can use a pool of clients instead of a single client's connection pool. Is there any queueing on your service side? Some web frameworks put client requests in a queue and then handle them in another thread pool; the queueing time is not added to the response time in the access log. Also, it is not easy to hit 100K RPS without OS-level tuning: CPU, memory, TCP backlog, etc.
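A sketch of the client-pool idea; the pool size, per-host limit, and round-robin selection are one assumed way to implement it:

```go
package main

import (
	"net/http"
	"sync/atomic"
	"time"
)

// clientPool spreads requests over several independent http.Clients so
// that the mutex guarding each Transport's idle-connection list is
// contended by fewer goroutines.
type clientPool struct {
	clients []*http.Client
	next    uint64
}

func newClientPool(size, connsPerHost int) *clientPool {
	p := &clientPool{clients: make([]*http.Client, size)}
	for i := range p.clients {
		p.clients[i] = &http.Client{
			Transport: &http.Transport{MaxIdleConnsPerHost: connsPerHost},
			Timeout:   10 * time.Second,
		}
	}
	return p
}

// get picks a client round-robin; the atomic counter avoids a pool-level lock.
func (p *clientPool) get() *http.Client {
	n := atomic.AddUint64(&p.next, 1)
	return p.clients[n%uint64(len(p.clients))]
}

func main() {
	pool := newClientPool(8, 128) // sizes are illustrative
	_ = pool.get()
}
```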
@myzhan Thanks for being quick and active. So basically I optimized with a bigger connection pool (1000) and used
I am able to generate 10K+ RPS on a single slave. Do you think that if I add 10+ more slaves, I would be able to produce 100K+ RPS? Basically, I am running on a big Kubernetes cluster.
So my server nodes are written in Kotlin + Vert.x. There is always some queuing + blocking. We measure the time manually, and I am sure it is not the time spent after the queue in the web framework, as we take the time from the load balancer to the moment we write to Kafka.
Yes, if you have enough machines. BTW, try to avoid locking in math/rand.
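The locking referred to here is the mutex guarding math/rand's package-level source; giving each goroutine its own rand.Rand avoids it. A minimal sketch:

```go
package main

import (
	"math/rand"
	"sync"
	"time"
)

// worker owns a private rand.Rand, so calls to Intn never touch the
// package-level source, which serializes rand.Intn behind a mutex
// across all goroutines.
func worker(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	rng := rand.New(rand.NewSource(time.Now().UnixNano() + int64(id)))
	for i := 0; i < 10; i++ {
		_ = rng.Intn(100) // e.g. pick a random user ID or think time
	}
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go worker(i, &wg)
	}
	wg.Wait()
}
```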
Thanks for all the help, myzhan. I will close the ticket with the following conclusion.
I will ask more questions if required. Thanks again!
First, I would like to thank you for this amazing library. Second, I need your help in planning my load test in a way that my load generator is not the limiting factor in the performance.
I have written the test in the following way:
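A hypothetical reconstruction of such a task, with [1], [2], [3] marking the statements referenced below; the URL and structure are illustrative:

```go
package main

import (
	"io"
	"net/http"
	"time"
)

// testHTTP is the task body run in a loop by each user goroutine.
func testHTTP(client *http.Client) {
	req, err := http.NewRequest("GET", "http://target.example.com/", nil) // [1] build the request
	if err != nil {
		return
	}
	resp, err := client.Do(req) // [2] send the request
	if err != nil {
		return
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body) // [3] read the response body
}

func main() {
	testHTTP(&http.Client{Timeout: 10 * time.Second})
}
```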
Now there are several factors that, as I understand it, I need to take care of.
Since I am doing a distributed load test, I can increase the number of slaves, so RPS is something I could easily boost. In statements [1,2,3], things can get slower when we make the request. If I am right, this is the part I need to optimize.
I am using keep-alive and pooled connections, but how do I determine the correct number of pooled connections I should have? Is it

> number of users / number of slaves

? I am also planning to use the fasthttp client. Any other recommendations or strategies to follow to have minimum performance impact from the client? I am aiming to hit 100K RPS.
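Since fasthttp is on the table, a minimal sketch of its client with an explicit per-host connection cap; the cap, timeouts, and URL are illustrative:

```go
package main

import (
	"fmt"
	"time"

	"github.com/valyala/fasthttp"
)

// A single fasthttp.Client is safe for concurrent use; MaxConnsPerHost
// caps the keep-alive pool, so it should be at least the number of
// users running on this slave.
var client = &fasthttp.Client{
	MaxConnsPerHost: 1024,
	ReadTimeout:     10 * time.Second,
	WriteTimeout:    10 * time.Second,
}

func fetch() error {
	// Request and response objects are pooled by fasthttp to avoid allocations.
	req := fasthttp.AcquireRequest()
	resp := fasthttp.AcquireResponse()
	defer fasthttp.ReleaseRequest(req)
	defer fasthttp.ReleaseResponse(resp)

	req.SetRequestURI("http://target.example.com/") // hypothetical target
	if err := client.Do(req, resp); err != nil {
		return err
	}
	fmt.Println(resp.StatusCode())
	return nil
}

func main() {
	if err := fetch(); err != nil {
		fmt.Println("request failed:", err)
	}
}
```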