
Locust never gets past 10 req/s, despite the server being much quicker than that #223

Closed
pimterry opened this issue Dec 22, 2014 · 11 comments


@pimterry

I've got a very simple locustfile, running against a very simple local server, and Locust tells me it can do 10 req/s, while ab tells me it can do 600+.

My locustfile:

from locust import HttpLocust, TaskSet, task

class MyTasks(TaskSet):
    @task
    def read_root(self):
        self.client.get("/")

class MyUser(HttpLocust):
    host = "http://localhost:8080"
    task_set = MyTasks

My server is a very simple Flask app, running locally under CherryPy, and returning a fixed string value.

If I run this with locust -c 10 -r 10 -n 1000 --no-web -f locustfile.py (1000 requests with 10 users, all hatched immediately) I end up with:

 Name                                                          # reqs      # fails     Avg     Min     Max  |  Median   req/s
--------------------------------------------------------------------------------------------------------------------------------------------
 GET /                                                           1000     0(0.00%)       5       2      12  |       6    9.90
--------------------------------------------------------------------------------------------------------------------------------------------
 Total                                                           1000     0(0.00%)                                       9.90

Percentage of the requests completed within given times
 Name                                                           # reqs    50%    66%    75%    80%    90%    95%    98%    99%   100%
--------------------------------------------------------------------------------------------------------------------------------------------
 GET /                                                            1000      6      6      6      6      6      7      7      8     12
--------------------------------------------------------------------------------------------------------------------------------------------

It hovers around 10, but doesn't pass 10 requests per second at any point. Each request only takes 6ms avg, suggesting each user could do at least 100/s, so 10 users -> at least 1,000 max from locust itself. That should definitely make my server the bottleneck.

Unfortunately though, my server can do way more than 10 requests/s. If I use ab instead, running:

ab -n 1000 -c 10 http://localhost:8080/

(1000 requests, with 10 threads)

I get:

Concurrency Level:      10
Time taken for tests:   1.647 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      138000 bytes
HTML transferred:       6000 bytes
Requests per second:    607.24 [#/sec] (mean)
Time per request:       16.468 [ms] (mean)
Time per request:       1.647 [ms] (mean, across all concurrent requests)
Transfer rate:          81.84 [Kbytes/sec] received

About 600 requests per second, much more in line with what I'd expect.

Why does Locust say I can only do 10 requests per second, when I can safely do 600? Am I missing something obvious in my configuration, or is ab doing something enormously different from locust that causes this effect? Seems like it should be easy to reproduce the same basic result in this case with both tools.

@Jahaja
Member

Jahaja commented Dec 22, 2014

You need to adjust the default wait times.
http://docs.locust.io/en/latest/writing-a-locustfile.html#the-min-wait-and-max-wait-attributes

@pimterry
Author

Ah, excellent, that fixes it, I do now get 600 or so.

Sorry, I clearly didn't understand the docs there; I'd skimmed past those settings assuming they were for managing timeouts. It seems a bit strange to me that 0 and 0 aren't the defaults, but fair enough, I guess that's the tradeoff between simulating realistic users and pure benchmarking. It might be nice to include the waiting time explicitly in the results though, so it's clearer for other foolish people like me.
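The numbers actually line up with that explanation. Assuming the old default of min_wait = max_wait = 1000 ms (an assumption about the Locust version in use here), a quick back-of-envelope check reproduces the ~9.9 req/s cap:

```python
# Each simulated user spends ~1000 ms waiting between tasks plus
# ~6 ms per request, so its cycle time bounds its request rate.
wait_s = 1.0        # assumed default wait between tasks (1000 ms)
response_s = 0.006  # average response time observed above
users = 10

per_user_rps = 1 / (wait_s + response_s)
total_rps = users * per_user_rps
print(round(total_rps, 2))  # ~9.94, matching the reported 9.90
```

With the wait times set to 0, the same model gives 10 / 0.006 ≈ 1,666 req/s, at which point the server (or the client machine) becomes the bottleneck instead.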

Thanks!

@Jahaja
Member

Jahaja commented Dec 22, 2014

No worries, it can be made clearer.

@gabrielelanaro

For future reference, you can use the stream=True option of self.client.get to make CherryPy release sockets.

This will allow you to properly simulate a large number of users without having to change their timeout.
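A sketch of that pattern (here client stands in for self.client, which behaves like a requests session; the function name is illustrative):

```python
def read_root(client):
    # stream=True tells the requests-style client not to download the
    # body eagerly; using the response as a context manager ensures it
    # is closed, returning the socket to the pool promptly.
    with client.get("/", stream=True) as response:
        return response.content
```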

@shaikshakeel

@pimterry can you explain what min_wait and max_wait do? Do we need to decrease or increase these times to achieve a high RPS?

@aldenpeterson-wf
Contributor

@shaikshakeel did you check the documentation?

@EmersonYe

EmersonYe commented Jul 9, 2018

@shaikshakeel min_wait and max_wait are the minimum and maximum waiting time between the execution of locust tasks in milliseconds. From the docs.

@yvz5

yvz5 commented Jul 25, 2019

Hey there, I also have this problem. I deployed Locust on my local Kubernetes cluster. The host is also inside the cluster. CPU usage is 25% and RAM usage is 400 MB. When I start Locust with 30,000 concurrent users and a hatch rate of 1000, I get 400 req/s. There are just two GET calls inside the test. The min wait time is 100 and the max is 150.

Are there any troubleshooting steps I can go through to find the problem?
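For what it's worth, a rough capacity model (users divided by average cycle time, ignoring queueing and using an assumed per-request latency) suggests the wait times are not the limit in this setup:

```python
users = 30_000
avg_wait_s = (0.100 + 0.150) / 2  # min_wait=100 ms, max_wait=150 ms
assumed_response_s = 0.010        # illustrative guess at latency

theoretical_rps = users / (avg_wait_s + assumed_response_s)
print(int(theoretical_rps))  # ~222,000
```

That is orders of magnitude above the observed 400 req/s, which points at a client-side bottleneck (a single Locust process is usually limited to one CPU core; running distributed workers is the common fix) rather than at min_wait/max_wait.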

@cgoldberg
Member

Please ask general questions in the Slack channel... this issue is closed.

@Dhana1991

> You need to adjust the default wait times.
> http://docs.locust.io/en/latest/writing-a-locustfile.html#the-min-wait-and-max-wait-attributes

Why do we need to specify a wait time when there is only one task in my TaskSet?

@mckornfield

I believe the wait time just controls how long to wait between task executions, even if the same task is repeated over and over; it doesn't have anything to do with there being multiple different tasks.
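That matches a simplified model of the scheduling loop (a sketch, not Locust's actual code; real Locust picks a weighted random task each iteration):

```python
import random

def run_user(tasks, min_wait_ms, max_wait_ms, iterations):
    # The wait is applied after every task execution -- even when
    # there is only one task, the same task repeats with a pause.
    timeline = []
    for _ in range(iterations):
        task = random.choice(tasks)  # with one task, always the same one
        timeline.append(task())
        timeline.append(("wait", random.uniform(min_wait_ms, max_wait_ms)))
    return timeline

timeline = run_user([lambda: ("task", "read_root")], 0, 0, 3)
# 3 task entries interleaved with 3 waits, even with a single task
```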
