
user spawn too slow #482

Closed
dev-yohan opened this issue Sep 21, 2016 · 10 comments

Comments

@dev-yohan

Hi, we are trying to spawn a lot of users per second (approximately 30,000 per second), but in the Locust interface we see a rate of 100 RPS, and the min and max wait times are 1 second. Are we doing something wrong?

regards

@heyman
Member

heyman commented Sep 21, 2016

What's your hatch rate? How many Locust slave processes on how many machines are you using?

Spawning 30k users/s with a min_wait and max_wait of 1 second would cause the number of requests per second to approach 600k after 20 seconds (assuming that your machines can handle it, both the system you're trying to load test and the cluster of Locust slaves). Is that what you're trying to achieve?
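A quick back-of-the-envelope sketch of that arithmetic (illustrative only; the rates come from the question above, and note that Locust's min_wait/max_wait are specified in milliseconds):

```python
# Rough sanity check of the 600k figure; numbers come from the thread,
# not from any actual Locust run.
spawn_rate = 30_000            # users hatched per second (from the question)
seconds = 20
min_wait = max_wait = 1000     # milliseconds, i.e. a 1 second wait per task

users = spawn_rate * seconds                    # users alive after 20 s
avg_wait_s = (min_wait + max_wait) / 2 / 1000   # average wait, in seconds
rps = users / avg_wait_s                        # roughly one request per user per second

print(users, int(rps))
```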

@cgoldberg
Member

go distributed for that level of throughput

@dev-yohan
Author

dev-yohan commented Sep 22, 2016

It is strange because I'm trying, for instance, 3,000 users with a hatch rate of 100, and looking at the Locust dashboard we get something like this:

[screenshot: hatch]

If I watch the hatching status, it grows by approximately 10 users every second.

@cgoldberg
Member

I don't see anything strange in that screenshot... can you be more descriptive?

@heyman
Member

heyman commented Sep 22, 2016

@dev-yohan Do you see any exceptions in the output/log while the users are being hatched?

@dev-yohan
Author

@heyman No, I don't see anything wrong in the execution. As I said, it takes a lot of time to reach the desired number of users. The test servers have 8 cores and 16 GB of RAM (1 master, 1 slave).

@heyman
Member

heyman commented Sep 22, 2016

To be able to utilize all cores you should spawn 7 slaves on that setup.

But 10 users per second sounds really low, so something else is probably going on as well. My MacBook Air easily manages to spawn 100-200 users per second on a single core.
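For reference, a sketch of starting one slave per remaining core with the 2016-era CLI flags (--master, --slave, --master-host); the locustfile path and host are placeholders, and the commands are only echoed here rather than executed:

```shell
# Print the commands for one master plus one slave per remaining core.
# Flags reflect Locust's pre-1.0 CLI; adjust the file path for your setup.
CORES=$(nproc 2>/dev/null || sysctl -n hw.ncpu)
echo "locust -f locustfile.py --master"
i=1
while [ "$i" -lt "$CORES" ]; do
  echo "locust -f locustfile.py --slave --master-host=127.0.0.1"
  i=$((i + 1))
done
```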

What happens if you try a really simple locustfile? For example, this one:

from locust import HttpLocust, TaskSet, task

class UserTasks(TaskSet):
    @task
    def index(self):
        self.client.get("/")

    @task
    def stats(self):
        self.client.get("/stats/requests")

    # request a page that does not exist, to exercise 404 responses
    @task
    def page404(self):
        self.client.get("/does_not_exist")

class WebsiteUser(HttpLocust):
    """
    Locust user class that does requests to the locust web server running on localhost
    """
    host = "http://127.0.0.1:8089"
    min_wait = 2000
    max_wait = 5000
    task_set = UserTasks

If the hatch rate is much higher with that file, the problem probably lies somewhere in the test scripts.

@sj2208

sj2208 commented Oct 13, 2016

@heyman - Regarding your point "To be able to utilize all cores you should spawn 7 slaves on that setup."

Sorry for a very simple question:
How do we start an instance of Locust on each core of a machine?

@sj2208

sj2208 commented Oct 13, 2016

@heyman - I was able to start Locust on multiple cores on my Mac.

The issue I am facing now:

My machine has 8 cores (I checked with "sysctl -n hw.ncpu").

I started the master and then started 7 slaves to utilise all the cores. When I start a test, one of the slaves gets disconnected and I am not able to load all the users.

Example: I have an 8-core machine, 1 master, 7 slaves.

I started 21 users with a hatch rate of 3.
After a while I was only able to spawn 18 users. In the logs I see the master sending hatch jobs to all 7 slaves, but then 1 slave (a different one each time) gets disconnected.

[screenshot: Screen Shot 2016-10-13 at 1.20.26 pm]

[screenshot: Screen Shot 2016-10-13 at 1.20.42 pm (upload failed)]

@heyman
Member

heyman commented Oct 13, 2016

@sj2208 You should install pyzmq. It will make Locust use another implementation for master<->slave communication, which should work better. In the master branch pyzmq is declared as a dependency, and the socketrpc implementation should be removed.
