
balance/recover the load distribution when new slave joins #970

Merged: 2 commits merged into locustio:master on Mar 13, 2019

Conversation

3 participants
@delulu (Contributor) commented Mar 1, 2019:

With the Locust master and slave agents running in Kubernetes, Kubernetes guarantees the availability of the agents.

But when a slave agent crashes and restarts, it comes back with a different client id and has no knowledge of the user load the master previously assigned to it, so the total number of running locusts ends up lower than expected.

It therefore seems better to rebalance the user load whenever a new client joins, so that the total number of running locusts stays at the number specified in the swarm request.
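To make the idea concrete, here is a minimal standalone sketch of even rebalancing (not the actual diff in this PR; the function and slave names are made up for illustration):

```python
# Standalone illustration (not Locust's code): spread a target user count
# evenly across whatever slaves are currently connected, and recompute the
# distribution whenever a slave joins or rejoins.
def distribute(total_users, slave_ids):
    base, rem = divmod(total_users, len(slave_ids))
    # the first `rem` slaves get one extra user so the counts sum exactly
    return {sid: base + (1 if i < rem else 0)
            for i, sid in enumerate(sorted(slave_ids))}

slaves = {"slave-a", "slave-b"}
print(distribute(100, slaves))   # {'slave-a': 50, 'slave-b': 50}
slaves.add("slave-c")            # a restarted slave rejoins with a new id
print(distribute(100, slaves))   # {'slave-a': 34, 'slave-b': 33, 'slave-c': 33}
```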

This PR also fixes an issue I noticed when running under Python 3 in web mode; it turns out to be an inconsistency introduced between recv_from_client and send_to_client.
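For context, a small standalone illustration of the kind of mismatch that bites under Python 3 (my assumption about the bug class, not the PR's diff): zmq hands identity frames back as bytes, so the receive and send paths have to agree on encoding or str-keyed lookups silently miss.

```python
# Standalone illustration (an assumption about the bug class, not the PR's
# diff): under Python 3, bytes and str never compare equal, so a client id
# that arrives as bytes from a zmq multipart recv misses a str-keyed dict.
clients = {"slave-1": "ready"}
addr_from_zmq = b"slave-1"                  # identity frame arrives as bytes
assert addr_from_zmq not in clients         # b"slave-1" != "slave-1"
assert addr_from_zmq.decode() in clients    # decode on recv (encode on send)
```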

Any thoughts or comments?

@delulu (Contributor, Author) commented Mar 1, 2019:

@Jonnymcc for awareness

@delulu force-pushed the delulu:fix branch from 137d709 to 74812a8 on Mar 1, 2019

@cgoldberg (Member) commented Mar 3, 2019:

> This PR also fixes an issue I noticed when running under Python 3 in web mode

Can you move those to a separate PR?

@Jonnymcc (Contributor) left a comment:

Looks good, I was thinking this would be a nice improvement to have.

```diff
     self.assertEqual(msg.type, 'test')
     self.assertEqual(msg.data, 'message')

 def test_client_recv(self):
-    sleep(0.01)
+    sleep(0.1)
```

@Jonnymcc (Contributor) commented Mar 3, 2019:

Was the sleep not long enough?

@delulu (Author, Contributor) commented Mar 4, 2019:

No, it wasn't long enough on my side; besides, there's no harm in setting a longer time here.
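For what it's worth, the usual alternative to bumping a fixed sleep is to poll for the condition with a timeout, so the test waits only as long as needed and fails loudly otherwise (a generic sketch, not part of this PR):

```python
import time

def wait_for(predicate, timeout=1.0, interval=0.01):
    """Poll `predicate` until it returns True or `timeout` seconds elapse."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise AssertionError("condition not met within %.1fs" % timeout)

# e.g. instead of sleep(0.1):  wait_for(lambda: len(received) > 0)
```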

@delulu (Contributor, Author) commented Mar 4, 2019:

> This PR also fixes an issue I noticed when running under Python 3 in web mode

> Can you move those to a separate PR?

Sure, here is the separate PR: #972

@delulu force-pushed the delulu:fix branch from 74812a8 to 9ead9e9 on Mar 4, 2019

@delulu force-pushed the delulu:fix branch from 9ead9e9 to 0448982 on Mar 6, 2019

@delulu (Contributor, Author) commented Mar 13, 2019:

@cgoldberg please review this PR and merge it into master; let me know if you have any concerns, thanks!

@cgoldberg (Member) left a comment:

LGTM, thanks.

@cgoldberg merged commit f467cf8 into locustio:master on Mar 13, 2019

1 check passed

continuous-integration/travis-ci/pr: The Travis CI build passed