feat(test runner): improve sharding algorithm to better spread similar tests among shards #30962

Open · wants to merge 24 commits into main
Conversation

@muhqu (Contributor) commented May 22, 2024

Adds alternative algorithms to assign test groups to shards to better distribute tests.

Problem

Currently the way sharding works is something like this…

         [  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12]
Shard 1:  ^---------^                                      : [  1, 2, 3 ]
Shard 2:              ^---------^                          : [  4, 5, 6 ]
Shard 3:                          ^---------^              : [  7, 8, 9 ]
Shard 4:                                      ^---------^  : [ 10,11,12 ]
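
For illustration, the current contiguous chunking could be sketched like this (a rough sketch only, not Playwright's actual implementation; names are made up):

// Sketch only: hand each shard one contiguous chunk of the ordered test groups.
// Shard numbers are 1-based, as in --shard=1/4.
function partitionShard<T>(groups: T[], current: number, total: number): T[] {
  const perShard = Math.ceil(groups.length / total);
  const start = (current - 1) * perShard;
  return groups.slice(start, start + perShard);
}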

Tests are ordered the way they are discovered, which is mostly alphabetical. As a result, similar test cases end up next to each other… for example, you might have 6 tests that cover the logged-in state followed by 6 tests that cover the logged-out state. The first 6 tests require more setup time because they exercise logged-in behaviour… With the current sharding algorithm, shards 1 & 2 get those slow logged-in tests while shards 3 & 4 get the quicker ones…

Solution

This PR adds a new shardingMode configuration option which lets you specify the sharding algorithm to use…
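
For illustration, it could be set like this (a sketch; shardingMode is the option proposed in this PR, everything else is standard Playwright config):

// playwright.config.ts (sketch)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,
  // proposed values: 'partition' (current default) | 'round-robin' | 'duration-round-robin'
  shardingMode: 'duration-round-robin',
});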

shardingMode: 'partition'

That's the current behaviour, which is the default. Let me know if you have a better name to describe the current algorithm...

shardingMode: 'round-robin'

Distributes the test groups more evenly. It…

  1. sorts test groups by the number of tests in descending order
  2. then loops through the test groups and assigns each one to the shard with the lowest number of tests (a rough code sketch follows the examples below).

Here is a simple example where every test group represents a single test (e.g. --fully-parallel) ...

         [  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12]
Shard 1:    ^               ^               ^              : [  1, 5, 9 ]
Shard 2:        ^               ^               ^          : [  2, 6,10 ]
Shard 3:            ^               ^               ^      : [  3, 7,11 ]
Shard 4:                ^               ^               ^  : [  4, 8,12 ]

…or a more complex scenario where test groups have different numbers of tests…

Original Order: [ [1], [2, 3], [4, 5, 6], [7], [8], [9, 10], [11], [12] ]
Sorted Order:   [ [4, 5, 6], [2, 3], [9, 10], [1], [7], [8], [11], [12] ]
Shard 1:           ^-----^                                                : [ [ 4,   5,   6] ]
Shard 2:                      ^--^                       ^                : [ [ 2,  3],  [8] ]
Shard 3:                              ^---^                    ^          : [ [ 9, 10], [11] ]
Shard 4:                                       ^    ^                ^    : [ [1], [7], [12] ]
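
As a rough sketch, the assignment could look like this (illustrative, not the exact implementation in this PR):

// Sort test groups by size (descending), then always give the next group
// to the shard that currently holds the fewest tests.
function roundRobinShards<T extends { tests: unknown[] }>(groups: T[], shardCount: number): T[][] {
  const shards: T[][] = Array.from({ length: shardCount }, () => []);
  const sizes: number[] = new Array(shardCount).fill(0);
  const sorted = [...groups].sort((a, b) => b.tests.length - a.tests.length);
  for (const group of sorted) {
    const smallest = sizes.indexOf(Math.min(...sizes));
    shards[smallest].push(group);
    sizes[smallest] += group.tests.length;
  }
  return shards;
}

With the groups from the example above, this greedy assignment reproduces the result shown (assuming a stable sort): shard 1 gets [4, 5, 6], shard 2 gets [2, 3] and [8], and so on.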

shardingMode: 'duration-round-robin'

It's very similar to round-robin, but it uses the duration of a test's previous run as the cost factor. Durations are read from .last-run.json when available. When a test cannot be found in .last-run.json, the average duration of the available tests is used instead. When no last-run info is available at all, the behaviour is identical to round-robin.
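
A minimal sketch of how the per-group cost could be computed, assuming durations are keyed by test id as in the testDurations map described below (names are illustrative):

// Sum the last-run durations of a group's tests; unknown tests fall back
// to the average duration of all tests that do have data.
function groupCost(testIds: string[], durations: { [testId: string]: number }): number {
  const known = Object.values(durations);
  const average = known.length ? known.reduce((a, b) => a + b, 0) / known.length : 1;
  return testIds.reduce((sum, id) => sum + (durations[id] ?? average), 0);
}

The greedy assignment then works as in round-robin, except shards are compared by accumulated cost instead of test count.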

Other changes

  • Add testDurations?: { [testId: string]: number } to .last-run.json (example below)
  • Add a builtin lastrun reporter, which allows merge-reports to generate a .last-run.json
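
For illustration, the extended .last-run.json might look roughly like this (test ids are made up, durations assumed to be in milliseconds; the exact set of fields may differ):

{
  "status": "passed",
  "failedTests": [],
  "testDurations": {
    "3f8a2c1e0b-d41d8cd98f": 1043,
    "3f8a2c1e0b-a3f5b2c7d9": 61877
  }
}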

Appendix

Below are some runtime stats from a project I've been working on, which show the potential benefit of this change.

Each test run had to complete 161 tests. Individual test durations range from a few seconds to over 2 minutes.

[Image: shard runtime chart for the partition run]

The partition run gives the baseline performance and illustrates the problem quite well. One shard takes almost 16 minutes while another completes in under 5 minutes.


[Image: shard runtime chart for the round-robin run]

The round-robin algorithm gives somewhat better performance, but one shard still takes twice as long as another.


[Image: shard runtime chart for the duration-round-robin run]

The duration-round-robin run used the duration info from a previous run and achieves the best result by far. All shards complete in 10-11 minutes. 🏆 🎉

@muhqu (Contributor, Author) commented May 22, 2024

Maybe it's better to make this an option to allow restoring the old behaviour. ¯\_(ツ)_/¯

And… there should be unit tests, no? Found them…


@pavelfeldman (Member) commented:

Do you think you can achieve the same better behavior with your sharding seed? Or are you looking for additional bias against subsequent tests being put into the same group?

@muhqu (Contributor, Author) commented May 23, 2024

Do you think you can achieve the same better behavior with your sharding seed?

Not sure yet. But I will test this new sharding logic in our test setup to gather some results.

Or are you looking for additional bias against subsequent tests being put into the same group?

The seeded shuffle is basically just a quick and easy way to influence the test-group-to-shard assignment… it's random, so its results may vary.

However, this change aims to improve the sharding logic to generally yield better results, which still needs to be proven. 😅

Currently this sharding algorithm uses the number of tests per test group as a cost metric. It would be great if we could use the test duration of a previous run (when available) to even better distribute the tests among the shards. But the algorithm would be quite similar.

@pavelfeldman (Member) commented:

I think your seed change allows users to experiment with the seeds and arrive at a better state than they are in today. Any other changes without timing feedback are going to yield similar results; no need to experiment with biases.

It would be great if we could use the test duration of a previous run (when available) to even better distribute the tests among the shards.

This requires a feedback loop with the test time stats, which we don't have today. We recently started storing the last-run stats in .last-run.json; I think it is OK to store the test times there and use them on subsequent runs for better sharding, when available. Would you be interested in working on it?

@muhqu (Contributor, Author) commented May 24, 2024

Yes, I would like to work on that.

I was not yet aware of .last-run.json. Is that something that is also written by the merge-reports command? We need the stats combined from all shard runs.

I was thinking about adding a separate reporter for that purpose, but if those last-run stats are already there… then there might be no need to create a separate reporter.

@pavelfeldman (Member) commented:

I was thinking about adding a separate reporter for that purpose, but if those last-run stats are already there… then there might be no need to create a separate reporter.

Shaping this code as a reporter sounds good, but Playwright core would need to consume the output of that reporter, so it needs to be baked in. Merging those should not be a hard problem, reporter or not. Unfortunately, merging mangles test ids today, so we'd need to figure that out. Maybe we could avoid the ids altogether and fall back to file names and test titles. There are also some tricky edge cases, such as tests that are fast on Chromium but slow on Firefox...


@muhqu (Contributor, Author) commented May 27, 2024

I've added a lastrun reporter that can be used with merge-reports to generate .last-run.json.
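
Hypothetical invocation (the lastrun reporter name is from this PR; the blob report directory is just an example):

npx playwright merge-reports --reporter lastrun ./all-blob-reports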

Surprisingly, when merging the reports, the test ids just had a 1-character suffix that I was able to strip off… but it doesn't feel like the right way to do this.

What's the reason to modify test ids when merging blobs? Couldn't this be done in a way that only modifies a test id when there is a collision?


Comment on lines +165 to +169
// Sum up the durations of all results for each test, keyed by test id.
const testDurations = testRun.rootSuite?.allTests().reduce((map, t) => {
  if (t.results.length)
    map[t.id] = t.results.reduce((a, b) => a + b.duration, 0);
  return map;
}, {} as { [testId: string]: number });
@muhqu (Contributor, Author) commented:

I'm actually not sure summing all the durations is the right way to do it… maybe it makes more sense to calculate the average? Or only include durations from successful test runs… 🤔
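
For comparison, a variant that averages only the durations of passing results might look like this (a sketch; names are illustrative):

// Average the durations of passing results only; returns undefined when the
// test has no passing results.
function averagePassedDuration(results: { status: string; duration: number }[]): number | undefined {
  const passed = results.filter(r => r.status === 'passed');
  if (!passed.length)
    return undefined;
  return passed.reduce((sum, r) => sum + r.duration, 0) / passed.length;
}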

@muhqu (Contributor, Author) commented May 28, 2024

@pavelfeldman .last-run.json is a little tricky to work with at the moment… it constantly gets overwritten even if you just list tests, and VS Code seems to do that from time to time to refresh the tests panel... I think it should only be written when tests are actually run.

@pavelfeldman (Member) commented:

I think it should only get written when it's actually running tests.

Agreed. Note that for the PGO-like behavior, we would probably explicitly point to the file and commit it to the repo. It would be the same format as last-run, but the user would copy it over into some playwright/ folder and point to it explicitly.


@muhqu (Contributor, Author) commented May 30, 2024

@pavelfeldman […] we would probably explicitly point to the file […]

Do you have a recommendation for how to name the CLI parameter / configuration option?

Something like --sharding-read-last-run-info path/to/merged-last-run.json is probably too much of a mouthful? However, I think we should somehow make it obvious when data is read from or written to a file…

dgozman pushed a commit that referenced this pull request May 30, 2024
When merging blob reports, test ids are patched to make sure there is no
collision when merging reports that might have overlapping test ids.
However, even when merging reports that have no overlapping ids,
all test ids get modified, which is an undesirable side effect.

This PR only modifies test ids when the same test id has already been used
in a previous blob report.

----

This change is also part of
#30962

Test results for "tests 1"

27501 passed, 608 skipped
✔️✔️✔️

Merge workflow run.

pavelfeldman added a commit to pavelfeldman/playwright that referenced this pull request Jun 11, 2024
…icrosoft#30817)"

This reverts commit 825e0e4.

API review notes: sounds like this change did not solve the problem
for the contributor, there is a new approach under development in
microsoft#30962