
Better distribution of work with execution segments? #1308

Closed · na-- opened this issue Jan 10, 2020 · 10 comments

Labels: bug, enhancement, evaluation needed (proposal needs to be validated or tested before fully implementing it in k6), executors
Milestone: v0.27.0

na-- commented Jan 10, 2020

The discussions surrounding #1295 and #1307 made me realize that we might have an issue with work partitioning… Some of the executors have two different attributes that have to be distributed when we partition them with execution segments (#997): the number of VUs they have at any given moment, and the work they have to do. The per-vu-iterations, constant-looping-vus, and variable-looping-vus executors don't have this problem, since for them the only thing we need to segment/partition is the number of VUs; the work each VU has to do isn't affected, it's constant.

But for shared-iterations, constant-arrival-rate, and variable-arrival-rate, things are a bit different. Say that we have the following configuration:

```javascript
export let options = {
    executionSegment: "1/3:2/3",
    execution: {
        shared_iters: {
            type: "shared-iterations",
            vus: 2,
            iterations: 3,
            maxDuration: "10s",
        },
    },
};
```

With this configuration, the instance running the (⅓ - ⅔] execution segment gets no VUs, but a single iteration… Similarly, with the arrival-rate executors it will probably be even easier to end up with a segment that has work but no VUs. So, I think we may need a 2-tier partitioning for these executors:

  1. Partition the workers (i.e. VUs) first
  2. Based on the number of allotted workers for that segment, partition the actual work (iterations or iterations/s respectively).

So, in the above example, this is how the iterations and VUs are currently partitioned if we split the work into thirds, and how I think they should be partitioned:

| InstID | Segment | VUs | Iterations now | Iterations after fix |
|--------|---------|-----|----------------|----------------------|
| 1 | (0 - ⅓] | 1 | 1 | 1 |
| 2 | (⅓ - ⅔] | 0 | 1 | 0 |
| 3 | (⅔ - 1] | 1 | 1 | 2 |
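The second tier of the partitioning could be sketched as follows. This is a hypothetical illustration, not k6's actual algorithm: `partitionWork` and its floor-of-cumulative-share rounding are assumptions, but any scheme that hands out whole iterations proportionally to each instance's VU count would produce numbers like the ones above.

```javascript
// Hypothetical sketch, not k6's actual algorithm: distribute `total` whole
// iterations across instances proportionally to their VU counts. Rounding
// the *cumulative* share down and taking differences guarantees that the
// per-instance counts always sum to exactly `total`, and that an instance
// with zero VUs never receives any work.
function partitionWork(total, vusPerInstance) {
    const totalVUs = vusPerInstance.reduce((a, b) => a + b, 0);
    const result = [];
    let assigned = 0; // iterations handed out so far
    let vusSeen = 0;  // cumulative VU count across instances
    for (const vus of vusPerInstance) {
        vusSeen += vus;
        const cumulative = Math.floor((total * vusSeen) / totalVUs);
        result.push(cumulative - assigned);
        assigned = cumulative;
    }
    return result;
}

// 3 shared iterations, with the VUs partitioned as [1, 0, 1]:
partitionWork(3, [1, 0, 1]); // → [1, 0, 2]
```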

For the arrival-rate executors things like this would probably happen even more often, since iters/s are practically infinitely divisible. Say that we have 2 iter/s with 2 VUs, and we again want to split that execution into thirds:

```javascript
export let options = {
    executionSegment: "1/3:2/3",
    execution: {
        constant_arr_rate: {
            type: "constant-arrival-rate",
            rate: 2,
            duration: "10s",
            preAllocatedVUs: 2,
            maxVUs: 2,
        },
    },
};
```

We'd have something like this:

| InstID | Segment | VUs | Iters/s now | Iters/s after fix |
|--------|---------|-----|-------------|-------------------|
| 1 | (0 - ⅓] | 1 | 2/3 | 1 |
| 2 | (⅓ - ⅔] | 0 | 2/3 | 0 |
| 3 | (⅔ - 1] | 1 | 2/3 | 1 |

These changes should be fairly easy and efficient to achieve with the execution segments, though of course, serious testing would be required to validate it...

na-- added the bug, enhancement, evaluation needed, and executors labels on Jan 10, 2020
mstoykov (Collaborator) commented:

@na-- What do we do in the constant-arrival-rate case with maxVUs if it is 3?

Do we say that one of the instances with 1 preallocated VU cannot spin up more VUs, but the other can? Or can both spin up 1 more, so that the combined maxVUs across all instances goes up to 4?

na-- commented Feb 19, 2020

Not sure I fully understand your question… You're asking what happens if we have a constant-arrival-rate executor with preAllocatedVUs: 2, maxVUs: 3, distributed across 3 instances? If I understand correctly, I think we need to have this:

  • instance 1: preAllocatedVUs: 1, maxVUs: 1
  • instance 2: preAllocatedVUs: 0, maxVUs: 1
  • instance 3: preAllocatedVUs: 1, maxVUs: 1

na-- commented Feb 19, 2020

This actually somewhat ties in with the issues we're investigating in #1296 (comment). I sincerely hope that reverting to the old algorithm will solve that issue; otherwise we might have a problem with this as well…

mstoykov (Collaborator) commented:

I find this separation rather useless, unless you also think that instance 2 should then also run iterations.

And what you are basically saying is that the separation of the iterations/s between instances should be done based on maxVUs instead of preAllocatedVUs?

```javascript
export let options = {
    executionSegment: "1/3:2/3",
    execution: {
        constant_arr_rate: {
            type: "constant-arrival-rate",
            rate: 2,
            duration: "10s",
            preAllocatedVUs: 2,
            maxVUs: 3,
        },
    },
};
```
| InstID | Segment | VUs | maxVUs | Iters/s after fix |
|--------|---------|-----|--------|-------------------|
| 1 | (0 - ⅓] | 1 | 1 | 2/3 |
| 2 | (⅓ - ⅔] | 0 | 1 | 2/3 |
| 3 | (⅔ - 1] | 1 | 1 | 2/3 |

And, for example, with maxVUs: 4:

```javascript
export let options = {
    executionSegment: "1/3:2/3",
    execution: {
        constant_arr_rate: {
            type: "constant-arrival-rate",
            rate: 2,
            duration: "10s",
            preAllocatedVUs: 2,
            maxVUs: 4,
        },
    },
};
```
| InstID | Segment | VUs | maxVUs | Iters/s after fix |
|--------|---------|-----|--------|-------------------|
| 1 | (0 - ⅓] | 1 | 2 | 2/3 |
| 2 | (⅓ - ⅔] | 0 | 1 | 2/3 |
| 3 | (⅔ - 1] | 1 | 1 | 2/3 |

instead of deciding that we separate the preAllocatedVUs based on the segments, and then based on those we separate the rest:

| InstID | Segment | VUs | maxVUs | Iters/s after fix |
|--------|---------|-----|--------|-------------------|
| 1 | (0 - ⅓] | 1 | 2 | 1 |
| 2 | (⅓ - ⅔] | 0 | 0 | 0 |
| 3 | (⅔ - 1] | 1 | 2 | 1 |
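The two competing policies can be sketched with a toy function. This is a hypothetical illustration, not k6's implementation: `splitRate` simply divides the rate proportionally to whatever per-instance VU budget it is given — maxVUs counts for the first policy, preAllocatedVUs counts for the second.

```javascript
// Hypothetical sketch, not k6's code: split an arrival rate across
// instances in proportion to a per-instance VU budget. Pass maxVUs
// counts for the maxVUs-based policy, or preAllocatedVUs counts for
// the preAllocatedVUs-based one.
function splitRate(rate, vuBudgets) {
    const total = vuBudgets.reduce((a, b) => a + b, 0);
    return vuBudgets.map((b) => (rate * b) / total);
}

// maxVUs-based: 2 iters/s split over per-instance maxVUs [1, 1, 1]
splitRate(2, [1, 1, 1]); // → [2/3, 2/3, 2/3]

// preAllocatedVUs-based: the middle instance has 0 preallocated VUs
splitRate(2, [1, 0, 1]); // → [1, 0, 1]
```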

na-- commented Feb 19, 2020

> And what you are basically saying is that the separation of the iterations/s between instances should be done based on maxVUs instead of the preAllocatedVUs?

Hmm yeah, that seems to be the most fair approach. Consider the case where you have preAllocatedVUs: 2, maxVUs: 1000...

mstoykov (Collaborator) commented:

That seems like some corner case where you basically say that all VUs will be allocated during the execution, and that this is what we expect to happen…

I don't think that's what maxVUs is supposed to be for, though… it's the preAllocatedVUs that are supposed to do the work, and the maxVUs are there if things start to go bad… but in your separation, things go bad immediately for one of the three instances.

To be honest, your approach is easier to implement (or at least I have problems implementing mine from the current code). But that really shouldn't matter in this case.

On a related matter (constant-arrival-rate), still working with the above example and assuming we separate based on maxVUs: do we want the same fix as with the variable-arrival-rate, where, when we separate it between instances, they don't all start iterations at the same time:

| InstID | Segment | VUs | maxVUs | Iters/s after fix | 1st iteration | 2nd iteration |
|--------|---------|-----|--------|-------------------|---------------|---------------|
| 1 | (0 - ⅓] | 1 | 1 | 2/3 | 1.5s | 3s |
| 2 | (⅓ - ⅔] | 0 | 1 | 2/3 | 1.5s | 3s |
| 3 | (⅔ - 1] | 1 | 1 | 2/3 | 1.5s | 3s |

or:

| InstID | Segment | VUs | maxVUs | Iters/s after fix | 1st iteration | 2nd iteration |
|--------|---------|-----|--------|-------------------|---------------|---------------|
| 1 | (0 - ⅓] | 1 | 1 | 2/3 | 0.5s | 2s |
| 2 | (⅓ - ⅔] | 0 | 1 | 2/3 | 1s | 2.5s |
| 3 | (⅔ - 1] | 1 | 1 | 2/3 | 1.5s | 3s |
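The interleaved schedule in the second table can be sketched like this. A hypothetical illustration (the function name and shape are assumptions, not k6's API): each instance's first iteration is offset by its index times the global inter-arrival interval, and subsequent iterations repeat at the instance's own local period.

```javascript
// Hypothetical sketch, not k6's API: compute the iteration start times for
// one instance when the global arrival schedule is striped across instances.
// rate         - total iters/s of the whole test (2 in the example)
// numInstances - how many instances share the work (3 in the example)
// instanceID   - 1-based instance index
// count        - how many start times to compute
function iterationTimes(rate, numInstances, instanceID, count) {
    const globalInterval = 1 / rate;         // 0.5s between iterations overall
    const localPeriod = numInstances / rate; // 1.5s between one instance's iterations
    return Array.from(
        { length: count },
        (_, k) => instanceID * globalInterval + k * localPeriod
    );
}

iterationTimes(2, 3, 1, 2); // → [0.5, 2]  (instance 1)
iterationTimes(2, 3, 2, 2); // → [1, 2.5]  (instance 2)
iterationTimes(2, 3, 3, 2); // → [1.5, 3]  (instance 3)
```

Merging the three schedules gives iterations at 0.5s, 1s, 1.5s, 2s, 2.5s, 3s — exactly the 2 iters/s global rate, instead of bursts of 3 simultaneous iterations every 1.5s.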

Because, if this is true, I would like to merge the previous two PRs now and then work from them, as otherwise we will end up with one HUGE PR in the end.

mstoykov (Collaborator) commented:

cc @sniku @robingustafsson

na-- added this to the v0.27.0 milestone on Apr 22, 2020
na-- commented Apr 22, 2020

This is not fixed yet: shared-iterations doesn't partition iterations, and constant-arrival-rate even has a new bug. If you have

```javascript
export let options = {
    execution: {
        constant_arr_rate: {
            type: "constant-arrival-rate",
            rate: 2,
            duration: "10s",
            preAllocatedVUs: 2,
            maxVUs: 2,
        },
    },
};
```

and you run the script on 3 instances with --execution-segment=0:1/3, --execution-segment=1/3:2/3, and --execution-segment=2/3:1, you'd somehow get 3 VUs…

na-- commented Apr 23, 2020

Never mind, I'm an idiot, I tested this without a segment sequence... constant-arrival-rate works fine 😊

mstoykov added a commit that referenced this issue Apr 28, 2020

Unfortunately, because of the way the ExecutorConfig interface is designed, I can't just cache some of the results, which will result in some probably unneeded amount of calculation.

This will need either a change to the interface, or for ExecutionTuple to cache the results of GetNewExecutionTupleBasedOnValue, but I find it unlikely for this to be noticeable outside of extreme examples.

closes #1308
mstoykov added a commit that referenced this issue Apr 30, 2020
mstoykov (Collaborator) commented:

closed by #1419, #1323 and #1285
