
[Question] Need recommendations for HTTP Load test planning #48

Closed · mangatmodi opened this issue Jan 10, 2019 · 7 comments

@mangatmodi commented Jan 10, 2019

First, I would like to thank you for this amazing library. Second, I need your help in planning my load test so that the load generator is not the limiting factor in performance.

I have written my test in the following way:

func task() {
    // request created beforehand
    startTime := boomer.Now()           // 1
    response, err := client.Do(request) // 2
    elapsed := boomer.Now() - startTime // 3
    // close the response body, report elapsed time and err to boomer
}

As I understand it, there are several factors I need to take care of:

  1. Any inefficiency in lines 1, 2, and 3 will affect the reported time. This part has to be as efficient as possible.
  2. Inefficiency anywhere else will lower the RPS.

Since I am doing a distributed load test, I can increase the number of slaves, so overall RPS is something I can easily boost. But in lines 1, 2, and 3 things can get slower while we make the request. If I am right, this is the part I need to optimize.

I am using keep-alive and pooled connections, but how do I determine the correct number of pooled connections I should have? Should it be greater than the number of users per slave? I am also planning to use the fasthttp client.

Is there any other recommendation or strategy I should follow to minimize the performance impact of the client? I am aiming to hit ~~100 rps~~ 100K rps.
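
For completeness, here is a minimal self-contained sketch of how a task like the one above can be wired into boomer; the URL, task name, and weight are placeholders of mine, not from the original report. boomer.Now() returns milliseconds, and RecordSuccess/RecordFailure report each sample to the master.

// Sketch of wiring the task above into boomer (names and URL are placeholders).
package main

import (
    "io"
    "net/http"

    "github.com/myzhan/boomer"
)

var client = &http.Client{}

func task() {
    // placeholder URL; build the request per iteration
    request, err := http.NewRequest("GET", "http://target.example/api", nil)
    if err != nil {
        return
    }

    startTime := boomer.Now()           // 1
    response, err := client.Do(request) // 2
    elapsed := boomer.Now() - startTime // 3

    if err != nil {
        boomer.RecordFailure("http", "test", elapsed, err.Error())
        return
    }
    // drain and close the body so the keep-alive connection is reused
    io.Copy(io.Discard, response.Body)
    response.Body.Close()
    boomer.RecordSuccess("http", "test", elapsed, response.ContentLength)
}

func main() {
    boomer.Run(&boomer.Task{Name: "http", Weight: 1, Fn: task})
}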

@mangatmodi mangatmodi changed the title Need recommendations for HTTP Loadtest planning [Help Requested] Need recommendations for HTTP Load test planning Jan 10, 2019
@mangatmodi mangatmodi changed the title [Help Requested] Need recommendations for HTTP Load test planning [Question] Need recommendations for HTTP Load test planning Jan 10, 2019
@myzhan (Owner) commented Jan 10, 2019

If the service being tested and the network are fast enough, it is very easy to hit 100 RPS, even 1000+ RPS. You don't need to do premature optimizations.

But indeed, things like garbage collection and the goroutine scheduler do have an impact on the response time, adding several milliseconds and making it inaccurate. Usually, the service should record how long it takes to handle a request in its access log.

If you are using Go's built-in HTTP client with pooled connections, make sure the pool size is greater than the number of users, to avoid internal locking.
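
As an illustration of that advice (a sketch of mine, assuming Go's net/http; usersPerSlave is a hypothetical constant): the built-in transport keeps only 2 idle connections per host by default, which is a common bottleneck in load generators.

// Sketch: sizing the built-in client's connection pool above the number
// of concurrent users on this slave, as recommended above.
package main

import (
    "net/http"
    "time"
)

const usersPerSlave = 100 // hypothetical: goroutines this slave will run

var client = &http.Client{
    Timeout: 10 * time.Second,
    Transport: &http.Transport{
        MaxIdleConns:        usersPerSlave * 2, // total idle connections kept
        MaxIdleConnsPerHost: usersPerSlave * 2, // default is only 2
    },
}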

If you want to do profiling on the client side, have a look at profiling.
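
As a generic example (my sketch using the standard library's net/http/pprof, independent of whatever profiling hooks boomer itself provides):

// Sketch: expose Go's standard pprof endpoints in the load generator,
// then capture a CPU profile while the test runs, e.g.:
//   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=60
package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* handlers
)

func main() {
    go func() {
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }()
    // ... start boomer and the load test here ...
    select {}
}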

@mangatmodi (Author) commented Jan 10, 2019

Oops, my bad, sorry. I meant 100K RPS.

@mangatmodi (Author) commented Jan 10, 2019

> make sure the pool size is greater than the number of users, to avoid internal locking.

Greater than the number of users per slave, or overall?

> Usually, the service should record how long it takes to handle a request in its access log.

I was simply using the default client as a proof of concept. The service reported 2 ms (p99) to handle a request, while the loader reported 400 ms. I believe the difference is due to blocking at the client, hence I am looking for the best possible way to generate load on a slave.

@myzhan (Owner) commented Jan 11, 2019

The number of users you set in the web UI is divided among the slaves. If you run ten users with two slaves, each slave will spawn five goroutines to run your task function in a loop.

> I believe the difference is due to blocking at the client

You can run a CPU profile with a longer duration to confirm that. If you hit lock contention inside the http client, you can use a pool of clients instead of a connection pool in a single client.
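
To illustrate the client-pool idea (a sketch of mine, not code from this thread): each http.Client gets its own Transport, and therefore its own connection pool and internal locks, and workers pick clients round-robin so contention is spread across them.

// Sketch: a round-robin pool of independent http.Client instances.
package main

import (
    "net/http"
    "sync/atomic"
    "time"
)

type clientPool struct {
    clients []*http.Client
    next    uint64
}

func newClientPool(size int) *clientPool {
    p := &clientPool{clients: make([]*http.Client, size)}
    for i := range p.clients {
        p.clients[i] = &http.Client{
            Timeout:   10 * time.Second,
            Transport: &http.Transport{MaxIdleConnsPerHost: 100},
        }
    }
    return p
}

// get picks the next client in round-robin order, safe for concurrent use.
func (p *clientPool) get() *http.Client {
    n := atomic.AddUint64(&p.next, 1)
    return p.clients[n%uint64(len(p.clients))]
}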

Is there any queueing on your service side? Some web frameworks put client requests in a queue and handle them in another thread pool, and the queueing time is not added to the response time in the access log.

Also, it is not easy to hit 100K RPS without OS-level tuning: CPU, memory, TCP backlog, etc.

@mangatmodi (Author) commented Jan 11, 2019

@myzhan Thanks for being quick and active. Basically, I optimized with a bigger connection pool (1000) and used fasthttp. My test data is all in memory and I pick a random data point for each request. I got around 10,000 RPS on the slave node.

> it is not easy to hit 100K RPS without OS-level tuning: CPU, memory, TCP backlog, etc.

I am able to generate 10K+ RPS on a single slave. Do you think that with 10+ slaves I would be able to produce 100K+ RPS? I am running on a big Kubernetes cluster.

> Is there any queueing on your service side?

My server nodes are written in Kotlin + Vert.x, so there is always some queueing and blocking. We measure the time manually, and I am sure it is not just the time after the queue in the web framework, as we measure from the load balancer to the moment we write to Kafka.
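
For context, a minimal sketch of what the fasthttp setup mentioned above might look like; this is my illustration, not the poster's actual code, and the URL, names, and pool size are placeholders.

// Sketch: issuing requests with fasthttp, which reuses request/response
// objects and avoids some of net/http's allocation and locking overhead.
package main

import (
    "github.com/myzhan/boomer"
    "github.com/valyala/fasthttp"
)

var client = &fasthttp.Client{
    MaxConnsPerHost: 1000, // matches the pool size mentioned above
}

func task() {
    req := fasthttp.AcquireRequest()
    resp := fasthttp.AcquireResponse()
    defer fasthttp.ReleaseRequest(req)
    defer fasthttp.ReleaseResponse(resp)

    req.SetRequestURI("http://target.example/api") // placeholder URL

    startTime := boomer.Now()
    err := client.Do(req, resp)
    elapsed := boomer.Now() - startTime

    if err != nil {
        boomer.RecordFailure("http", "test", elapsed, err.Error())
        return
    }
    boomer.RecordSuccess("http", "test", elapsed, int64(len(resp.Body())))
}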

@myzhan (Owner) commented Jan 13, 2019

> Do you think that with 10+ slaves I would be able to produce 100K+ RPS?

Yes, if you have enough machines. BTW, try to avoid locking in math/rand.
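
For reference: the top-level functions in math/rand (rand.Intn and friends) share a single mutex-locked source, which can become a hot lock at high RPS. A common workaround (my sketch, not from the thread) is one rand.Rand per goroutine:

// Sketch: per-goroutine random number generators, so picking a random
// test data point does not contend on math/rand's global lock.
package main

import (
    "math/rand"
    "time"
)

var testData = []string{"a", "b", "c"} // placeholder in-memory test data

func worker(id int) {
    // each goroutine owns an unsynchronized source; never share it
    rng := rand.New(rand.NewSource(time.Now().UnixNano() + int64(id)))
    for {
        i := rng.Intn(len(testData)) // lock-free random pick
        _ = testData[i]
        // ... build and send the request with this data point ...
    }
}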

@mangatmodi (Author) commented

Thanks for all the help, @myzhan. I will close the ticket with the following conclusions:

  1. Use a fast HTTP library and enough pooled connections to avoid blocking.
  2. Profile the client to understand where the blocking happens.
  3. Verify that the time is measured correctly at the server.

I will ask more questions if required. Thanks again!
