fixed_count: ability to spawn a specific number of users (as opposed to just using weights) #1964

Merged: 4 commits, Jan 21, 2022

2 changes: 1 addition & 1 deletion docs/api.rst
@@ -7,7 +7,7 @@ User class
============

.. autoclass:: locust.User
:members: wait_time, tasks, weight, abstract, on_start, on_stop, wait, context, environment
:members: wait_time, tasks, weight, fixed_count, abstract, on_start, on_stop, wait, context, environment

HttpUser class
================
21 changes: 20 additions & 1 deletion docs/writing-a-locustfile.rst
@@ -180,7 +180,7 @@ For example, the following User class would sleep for one second, then two, then
...


weight attribute
weight and fixed_count attributes
----------------

If more than one user class exists in the file, and no user classes are specified on the command line,
@@ -204,6 +204,25 @@ classes. Say for example, web users are three times more likely than mobile user
weight = 1
...

You can also set the :py:attr:`fixed_count <locust.User.fixed_count>` attribute.
In that case the weight property is ignored and exactly this number of users will be spawned.
These users are spawned before the weighted ones. In the example below, exactly one
instance of AdminUser will be spawned, which gives more precise control over the request
count of its tasks, independently of the total user count.

.. code-block:: python

class AdminUser(User):
wait_time = constant(600)
fixed_count = 1

@task
def restart_app(self):
...

class WebUser(User):
...

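For a rough sense of how fixed and weighted users combine, assume the AdminUser above is
joined by two weighted classes (a hypothetical mix, chosen only for illustration):

.. code-block:: python

    class WebUser(User):
        weight = 3
        ...

    class MobileUser(User):
        weight = 1
        ...

    # Running with a total of 101 users spawns the single AdminUser first;
    # the remaining 100 users are split by weight, roughly 75 WebUser and 25 MobileUser.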

host attribute
--------------
8 changes: 6 additions & 2 deletions locust/argument_parser.py
@@ -431,10 +431,14 @@ def setup_parser_arguments(parser):

other_group = parser.add_argument_group("Other options")
other_group.add_argument(
"--show-task-ratio", action="store_true", help="Print table of the User classes' task execution ratio"
"--show-task-ratio",
action="store_true",
help="Print table of the User classes' task execution ratio. Use this with non-zero --user option if some classes define non-zero fixed_count property.",
)
other_group.add_argument(
"--show-task-ratio-json", action="store_true", help="Print json data of the User classes' task execution ratio"
"--show-task-ratio-json",
action="store_true",
help="Print json data of the User classes' task execution ratio. Use this with non-zero --user option if some classes define non-zero fixed_count property.",
)
# optparse gives you --version but we have to do it ourselves to get -V too
other_group.add_argument(
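
The updated help texts matter because, with fixed_count in play, the ratios depend on the target user count: a typical invocation would look something like locust -f locustfile.py --show-task-ratio -u 100 (the locustfile name here is just a placeholder).
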
107 changes: 85 additions & 22 deletions locust/dispatch.py
@@ -4,7 +4,7 @@
import time
from collections.abc import Iterator
from operator import attrgetter
from typing import Dict, Generator, List, TYPE_CHECKING, Tuple, Type
from typing import Dict, Generator, List, TYPE_CHECKING, Optional, Tuple, Type

import gevent
import typing
@@ -98,6 +98,10 @@ def __init__(self, worker_nodes: "List[WorkerNode]", user_classes: List[Type[Use

self._rebalance = False

self._try_dispatch_fixed = True

self._no_user_to_spawn = False
Comment on lines +101 to +103
Contributor:

Do you also need to set these in def new_dispatch(self, target_user_count: int, spawn_rate: float) -> None?

@EzR1d3r (Contributor, Author), Jan 13, 2022:

I guess not. self._no_user_to_spawn resets to False almost immediately after being checked in _dispatcher.
self._try_dispatch_fixed will be True if we stopped spawning somewhere among the fixed users, so when we continue it is fine; if we already spawned all the fixed users and it is False, that is also fine. The only case where we need to reset it is ramp-down. Also, I guess nothing changes if we do set these fields there, except some superfluous work in _try_dispatch_fixed when it was False. But I do not want to mislead about such a need.


@property
def dispatch_in_progress(self):
return self._dispatch_in_progress
@@ -132,6 +136,9 @@ def _dispatcher(self) -> Generator[Dict[str, Dict[str, int]], None, None]:
if self._rebalance:
self._rebalance = False
yield self._users_on_workers
if self._no_user_to_spawn:
self._no_user_to_spawn = False
break

while self._current_user_count > self._target_user_count:
with self._wait_between_dispatch_iteration_context():
@@ -241,13 +248,19 @@ def _add_users_on_workers(self) -> Dict[str, Dict[str, int]]:
current_user_count_target = min(
self._current_user_count + self._user_count_per_dispatch_iteration, self._target_user_count
)

for user in self._user_generator:
if not user:
self._no_user_to_spawn = True
break
worker_node = next(self._worker_node_generator)
self._users_on_workers[worker_node.id][user] += 1
self._current_user_count += 1
self._active_users.append((worker_node, user))
if self._current_user_count >= current_user_count_target:
return self._users_on_workers
break

return self._users_on_workers

def _remove_users_from_workers(self) -> Dict[str, Dict[str, int]]:
"""Remove users from the workers until the target number of users is reached for the current dispatch iteration
@@ -264,9 +277,17 @@ def _remove_users_from_workers(self) -> Dict[str, Dict[str, int]]:
return self._users_on_workers
self._users_on_workers[worker_node.id][user] -= 1
self._current_user_count -= 1
self._try_dispatch_fixed = True
if self._current_user_count == 0 or self._current_user_count <= current_user_count_target:
return self._users_on_workers

def _get_user_current_count(self, user: str) -> int:
count = 0
for users_on_node in self._users_on_workers.values():
count += users_on_node.get(user, 0)

return count

def _distribute_users(
self, target_user_count: int
) -> Tuple[dict, Generator[str, None, None], typing.Iterator["WorkerNode"], List[Tuple["WorkerNode", str]]]:
@@ -289,6 +310,8 @@ def _distribute_users(
user_count = 0
while user_count < target_user_count:
user = next(user_gen)
if not user:
break
worker_node = next(worker_gen)
users_on_workers[worker_node.id][user] += 1
user_count += 1
@@ -307,26 +330,66 @@ def _user_gen(self) -> Generator[str, None, None]:
weighted round-robin algorithm, we'd get AAAAABAAAAAB which would make the distribution
less accurate during ramp-up/down.
"""
# Normalize the weights so that the smallest weight will be equal to "target_min_weight".
# The value "2" was experimentally determined because it gave a better distribution especially
# when dealing with weights which are close to each others, e.g. 1.5, 2, 2.4, etc.
target_min_weight = 2
min_weight = min(u.weight for u in self._user_classes)
normalized_weights = [
(user_class.__name__, round(target_min_weight * user_class.weight / min_weight))
for user_class in self._user_classes
]
gen = smooth(normalized_weights)
# Instead of calling `gen()` for each user, we cycle through a generator of fixed-length
# `generation_length_to_get_proper_distribution`. Doing so greatly improves performance because
# we only ever need to call `gen()` a relatively small number of times. The length of this generator
# is chosen as the sum of the normalized weights. So, for users A, B, C of weights 2, 5, 6, the length is
# 2 + 5 + 6 = 13 which would yield the distribution `CBACBCBCBCABC` that gets repeated over and over
# until the target user count is reached.
generation_length_to_get_proper_distribution = sum(
normalized_weight[1] for normalized_weight in normalized_weights
)
yield from itertools.cycle(gen() for _ in range(generation_length_to_get_proper_distribution))

def infinite_cycle_gen(users: List[Tuple[User, int]]) -> Generator[Optional[str], None, None]:
if not users:
return itertools.cycle([None])

# Normalize the weights so that the smallest weight will be equal to "target_min_weight".
# The value "2" was experimentally determined because it gave a better distribution especially
# when dealing with weights which are close to each others, e.g. 1.5, 2, 2.4, etc.
target_min_weight = 2

# 'Value' here means weight or fixed count
normalized_values = [
(
user.__name__,
round(target_min_weight * value / min([u[1] for u in users])),
)
for user, value in users
]
generation_length_to_get_proper_distribution = sum(
normalized_val[1] for normalized_val in normalized_values
)
gen = smooth(normalized_values)

# Instead of calling `gen()` for each user, we cycle through a generator of fixed-length
# `generation_length_to_get_proper_distribution`. Doing so greatly improves performance because
# we only ever need to call `gen()` a relatively small number of times. The length of this generator
# is chosen as the sum of the normalized weights. So, for users A, B, C of weights 2, 5, 6, the length is
# 2 + 5 + 6 = 13 which would yield the distribution `CBACBCBCBCABC` that gets repeated over and over
# until the target user count is reached.
return itertools.cycle(gen() for _ in range(generation_length_to_get_proper_distribution))

fixed_users = {u.__name__: u for u in self._user_classes if u.fixed_count}

cycle_fixed_gen = infinite_cycle_gen([(u, u.fixed_count) for u in fixed_users.values()])
cycle_weighted_gen = infinite_cycle_gen([(u, u.weight) for u in self._user_classes if not u.fixed_count])

# Spawn users
while True:
if self._try_dispatch_fixed:
self._try_dispatch_fixed = False
current_fixed_users_count = {u: self._get_user_current_count(u) for u in fixed_users}
spawned_classes = set()
while len(spawned_classes) != len(fixed_users):
user_name = next(cycle_fixed_gen)
if not user_name:
break

if current_fixed_users_count[user_name] < fixed_users[user_name].fixed_count:
current_fixed_users_count[user_name] += 1
if current_fixed_users_count[user_name] == fixed_users[user_name].fixed_count:
spawned_classes.add(user_name)
yield user_name

# 'self._try_dispatch_fixed' was changed elsewhere while we were yielding, so we have to recalculate the current count
if self._try_dispatch_fixed:
current_fixed_users_count = {u: self._get_user_current_count(u) for u in fixed_users}
spawned_classes.clear()
self._try_dispatch_fixed = False

yield next(cycle_weighted_gen)

@staticmethod
def _fast_users_on_workers_copy(users_on_workers: Dict[str, Dict[str, int]]) -> Dict[str, Dict[str, int]]:
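
To make the weighted cycling in _user_gen concrete, here is a small self-contained sketch of the smooth weighted round-robin idea it relies on. The real code uses the smooth() helper called above; the smooth_rr function below is a classic nginx-style re-implementation written only for this illustration, and the A/B/C weights are the ones from the code comment.

from typing import Callable, List, Tuple

def smooth_rr(weighted: List[Tuple[str, int]]) -> Callable[[], str]:
    """Smooth weighted round-robin: every pick adds each weight to a running
    score, emits the highest-scoring name, then subtracts the total weight."""
    current = {name: 0 for name, _ in weighted}
    total = sum(weight for _, weight in weighted)

    def pick() -> str:
        for name, weight in weighted:
            current[name] += weight
        chosen = max(current, key=current.__getitem__)
        current[chosen] -= total
        return chosen

    return pick

# Users A, B, C with (already normalized) weights 2, 5, 6.
weights = [("A", 2), ("B", 5), ("C", 6)]
gen = smooth_rr(weights)

# The cycle length is the sum of the normalized weights (2 + 5 + 6 = 13), which is
# what `generation_length_to_get_proper_distribution` computes in the diff above.
cycle = "".join(gen() for _ in range(sum(w for _, w in weights)))
print(cycle)  # CBACBCBCBCABC, repeated over and over until the target count is reached

Repeating that short cycle keeps the mix of user classes close to the target ratio at every point of the ramp-up, which is why the generator is built this way rather than calling the round-robin picker once per user.
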
12 changes: 9 additions & 3 deletions locust/html.py
@@ -4,9 +4,10 @@
import datetime
from itertools import chain
from .stats import sort_stats
from .user.inspectuser import get_task_ratio_dict
from .user.inspectuser import get_ratio
from html import escape
from json import dumps
from .runners import MasterRunner


def render_template(file, **kwargs):
@@ -62,9 +63,14 @@ def get_html_report(environment, show_download_link=True):
static_css.append(f.read())
static_css.extend(["", ""])

is_distributed = isinstance(environment.runner, MasterRunner)
user_spawned = (
environment.runner.reported_user_classes_count if is_distributed else environment.runner.user_classes_count
)

task_data = {
"per_class": get_task_ratio_dict(environment.user_classes),
"total": get_task_ratio_dict(environment.user_classes, total=True),
"per_class": get_ratio(environment.user_classes, user_spawned, False),
"total": get_ratio(environment.user_classes, user_spawned, True),
}

res = render_template(
13 changes: 4 additions & 9 deletions locust/main.py
@@ -20,7 +20,7 @@
from .stats import print_error_report, print_percentile_stats, print_stats, stats_printer, stats_history
from .stats import StatsCSV, StatsCSVFileWriter
from .user import User
from .user.inspectuser import get_task_ratio_dict, print_task_ratio
from .user.inspectuser import print_task_ratio, print_task_ratio_json
from .util.timespan import parse_timespan
from .exception import AuthCredentialsError
from .shape import LoadTestShape
@@ -218,18 +218,13 @@ def main():
if options.show_task_ratio:
print("\n Task ratio per User class")
print("-" * 80)
print_task_ratio(user_classes)
print_task_ratio(user_classes, options.num_users, False)
print("\n Total task ratio")
print("-" * 80)
print_task_ratio(user_classes, total=True)
print_task_ratio(user_classes, options.num_users, True)
sys.exit(0)
if options.show_task_ratio_json:

task_data = {
"per_class": get_task_ratio_dict(user_classes),
"total": get_task_ratio_dict(user_classes, total=True),
}
print(dumps(task_data))
print_task_ratio_json(user_classes, options.num_users)
sys.exit(0)

if options.master:
15 changes: 8 additions & 7 deletions locust/static/tasks.js
@@ -29,11 +29,12 @@ function _getTasks_div(root, title) {
}


function initTasks() {
var tasks = $('#tasks .tasks')
var tasksData = tasks.data('tasks');
console.log(tasksData);
tasks.append(_getTasks_div(tasksData.per_class, 'Ratio per User class'));
tasks.append(_getTasks_div(tasksData.total, 'Total ratio'));
function updateTasks() {
$.get('/tasks', function (data) {
var tasks = $('#tasks .tasks');
tasks.empty();
tasks.append(_getTasks_div(data.per_class, 'Ratio per User class'));
tasks.append(_getTasks_div(data.total, 'Total ratio'));
});
}
initTasks();
updateTasks();
7 changes: 7 additions & 0 deletions locust/templates/index.html
@@ -333,6 +333,13 @@ <h2>Version <a href="https://github.com/locustio/locust/releases/tag/{{version}}
<script type="text/javascript" src="./static/chart.js?v={{ version }}"></script>
<script type="text/javascript" src="./static/locust.js?v={{ version }}"></script>
<script type="text/javascript" src="./static/tasks.js?v={{ version }}"></script>
<script type="text/javascript">
function updateTasksWithTimeout() {
updateTasks()
setTimeout(updateTasksWithTimeout, 1000);
}
updateTasksWithTimeout()
</script>
{% block extended_script %}
{% endblock extended_script %}
</body>