
Slave count doesn't get updated in the UI if no more slaves are alive #62

Closed
bogdangherca opened this issue Mar 28, 2013 · 12 comments

@bogdangherca

Hi,

I was running some simple tests with locust (which is so cool, btw) and I noticed that if you end up with no slaves connected, the UI does not reflect this change: the slave count in the UI sticks to 1 in this case.
Also, it would be nice to get a warning message in the web UI if you start swarming with no slaves connected. Currently, you only get this warning on the command line.

I could provide a quick fix for this if you'd like.

Thanks!

@heyman
Member

heyman commented Apr 18, 2013

The slave count issue sounds like a bug and should be fixed. Thanks for reporting, and a fix would definitely be appreciated :).

I guess some kind of warning in the web UI wouldn't be bad either, but please do two separate pull requests if you give them a shot.

@nmccready

I second the issue. Are the slaves actually there and the number is invalid, or is the slave count correct?

@heyman
Member

heyman commented May 7, 2013

Ok, thanks for reporting! Hopefully I'll get time to go over some waiting pull requests and issues early next week.

@nmccready

It might not be entirely accurate. I am trying to spawn 20 slaves per machine; running ps aux, I never see more than 10-12 locust instances.

@nmccready

Ok, this bug did come up again, and I did verify that it reports inaccurately at times. This instance, for example, was supposed to have 14 slaves but reported 12.

To count the locust processes on each machine, you can use the command below:

ps aux | grep py | grep -v grep | awk '{print $12}' | wc -l
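
A roughly equivalent count, as a sketch assuming pgrep is available and the slave processes have "locust" in their command line (on a box that also runs the master, the master process is included in the count):

    # -f matches the full command line, -c prints the number of matches
    pgrep -fc locust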

@bogdangherca
Author

@nem: Starting slaves works as it should for me. How exactly are you trying to spawn slaves?

@nmccready

@bogdangherca:
Never mind, slaves are working fine. I was using a script that was written for an older version of locust. I am not entirely sure whether the older version worked this way or not... The old script targeting 0.5.1 would start the master last. I reversed the script to start the master first, and that fixed the issue. This is specified here: https://github.com/locustio/locust/blob/master/docs/quickstart.rst
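
For reference, the start order from that quickstart looks roughly like this (a sketch; the locustfile path and master address are placeholders, using the --master/--slave flags locust had at the time):

    # on the master machine (start this first)
    locust -f locustfile.py --master
    # on each slave machine, pointing at the master's address
    locust -f locustfile.py --slave --master-host=192.168.0.100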

BTW, that URL should be on the documentation site somewhere, or its text should be in the documentation for the latest version. I've noticed many documentation gems covering "gotchas" that live on GitHub but not on the doc site.

@bogdangherca
Author

@nem: Indeed, starting the master last was your problem. You should start the master first in order for the slaves to connect to it. Anyway, glad it worked fine for you.

@nmccready

Never mind, it looks like it was a Chrome problem with the gist; it worked fine in Safari.

Here is the gist: https://gist.github.com/nmccready/5547455 . Anyway, my issue is not being able to start a user count beyond the slave count. At least, the reported number of users is never larger than the number of slaves.

So the gist is there to help determine whether something is wrong with my setup.

@nmccready

FYI, this has started working, i.e. the user count can now be greater than the slave count.

KashifSaadat pushed a commit to KashifSaadat/locust that referenced this issue Feb 1, 2016
mthurlin pushed a commit that referenced this issue Feb 15, 2016
[#62] Correctly update slave count when drops below 1.
jaume-pinyol pushed a commit to jaume-pinyol/locust that referenced this issue May 1, 2016
@vorozhko

Hello guys,
I have a similar issue: the master and web UI don't reflect the actual slave count.
We run locust in Docker containers. My setup includes 200 slaves, so 200 Docker containers.
The master registered them all, but when the number of containers was scaled down to 10, the master and web UI still showed 200 slaves assigned.

Our Docker image uses the latest locustio package from pip.
Any advice is appreciated.

Thanks!

@jtpio

jtpio commented Feb 24, 2017

@vorozhko:
I ran into the same issue with the following setup:

  • locust master and slaves running in Docker containers
  • Docker containers orchestrated with Kubernetes on 4 VMs (private cluster)

The problem is related to how the containers are stopped. For a locust slave to be properly terminated, and for the slave count to be correctly updated, it needs to send the quit message to the master when handling SIGTERM, by calling the shutdown function.

In my case, the entrypoint for the container is a shell script which starts locust as a child process. This means the shell script is assigned PID 1 and the locust process a different PID. When docker stop is called, Docker sends SIGTERM to the process with PID 1. If the signal is not handled, Docker waits 10s and then kills the container (so locust can't shut down gracefully).

The locust start-up I'm using is mostly inspired by https://github.com/peter-evans/locust-docker. With that setup, the easy fix was to prepend exec to replace the shell with the python program, so locust gets PID 1 and can handle the SIGTERM signal:

exec $LOCUST_PATH $LOCUST_FLAGS
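
For context, a minimal entrypoint sketch along those lines (the locustfile path and the MASTER_HOST variable are illustrative, not the exact script from that repository):

    #!/bin/sh
    # exec replaces the shell, so locust becomes PID 1 and receives the SIGTERM
    # sent by `docker stop`, letting it send the quit message to the master
    exec locust -f /locust/locustfile.py --slave --master-host="$MASTER_HOST"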

Another way to fix this would be to handle slave disconnections (socket closed or similar) in the locust code itself.

unicell added a commit to unicell/kubernetes-locust that referenced this issue Oct 5, 2017
This is to ensure that, when scaling down a worker pod, the `quit` message is sent and the master is notified.

related: locustio/locust#62
pancaprima pushed a commit to pancaprima/locust that referenced this issue May 14, 2018
@cyberw cyberw closed this as completed Oct 18, 2019