Parallelize tests on test block level vs. file level #3962
This is currently not possible with Mocha and Jasmine; we need to look into a new test runner that might support this. |
Would be awesome if we find some option @wswebcreation 👍 |
Yeah, this is functionality you need at the framework level, but the concept itself might also not be compatible with WDIO. The WDIO test runner is built around running spec files each in their own process. If you wanted `it` blocks to run in parallel from the framework inside the runner, each instance would then spawn its own child processes. |
I'm pretty new to wdio and I like what it's doing 👍. It also means I don't understand a lot. I come from a C# and Java background, and in those languages you can parallelize at the method level, so you avoid this problem entirely. You also avoid having a single driver session run multiple tests, which is harder to debug because every session becomes a conglomeration of, say, 10 individual tests.

The only argument I've heard for file-level parallelization so far is that you save on driver initialization time, because you only need to start a driver per file. But that math doesn't really work out:

Current: 10 files x 10 tests each x 10s per test. You can only parallelize 10 files max, so the total suite run time = 10 tests run back to back per file = 100s.

With method-level parallelization: 10 files x 10 tests each x 20s per test (assuming an extra 10s per test to start a browser). 100 tests in parallel = 20s.

We can keep playing with the numbers, but you can see how unlikely it is that wdio suites will ever run faster than a suite that can parallelize on every method. Parallelization is the most powerful way to scale suites, not optimizing test case run time (although that helps a little). Scale horizontally, not vertically, at least until we really hit our hardware capacities; from working with clients, I'd say that's the exception rather than the rule.

Also, in the last 2 weeks I've had about 5 clients complain to me about this limitation. I'll keep sending them here. Maybe if we get enough support, is this feature viable? Or is it out of the question? Please let me know if I'm not getting something :) As I said, I'm still a neophyte. |
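The arithmetic above can be sketched as a quick calculation. All numbers here are the hypothetical ones from the comment (10 s per test, an assumed 10 s of browser startup per session), not measurements:

```javascript
// Back-of-the-envelope numbers from the comment above; the timings
// are hypothetical, not measured.
const files = 10;
const testsPerFile = 10;
const secondsPerTest = 10;
const browserStartupSeconds = 10;

// Total sessions needed to run everything at once.
const totalTests = files * testsPerFile;

// File-level parallelism: all 10 files run at once, but the 10 tests
// inside each file run back to back in one browser session.
const fileLevelSeconds = testsPerFile * secondsPerTest;

// Method-level parallelism: all 100 tests run at once, each paying
// its own (assumed) browser startup cost.
const methodLevelSeconds = secondsPerTest + browserStartupSeconds;

console.log(`file-level: ${fileLevelSeconds}s, method-level: ${methodLevelSeconds}s`);
// → file-level: 100s, method-level: 20s
```

Even if the per-test startup penalty were doubled, the method-level run would still finish in under a third of the file-level time, which is the point the comment is making.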
I totally agree it would be faster. What I'm trying to say is that WDIO is effectively a framework on top of a framework, as if in C# you put something on top of NUnit, or in Java you put something on top of JUnit. WDIO supports different runners underneath as the test runner portion, like Mocha, Jasmine and Cucumber. Most of these frameworks don't execute `it` blocks in parallel. It is totally possible; it's just a matter of finding a framework that can run `it` blocks in parallel and then creating an adapter for it that works with @wdio/runner. During the creation of the adapter we may find that the concept of running `it` blocks in parallel doesn't translate well, and there is probably an architectural incompatibility within WDIO that would block us. But we won't know until we try. |
If there is some JS test runner that supports block-level parallelization, wdio would be able to support it (I hope, or we'll force it, lol). If you are aware of such a runner, let us know. |
@naddisson So you’re saying there’s hope haha :) |
@CrispusDH do you have plans to add AVA as a test framework to wdio? |
@mgrybyk I was thinking about it several times but currently I don't have enough skill for that. I think it would be really nice. |
@mgrybyk and @CrispusDH Ava was also the library @christian-bromann was talking about a few weeks ago to do this kind of work. |
Hmm, I just looked into this for Jest, and it looks like Jest might be able to run tests inside a test file concurrently, with a limit of X tests per test file; see also this thread. Maybe worth investigating |
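For reference, the Jest feature meant here is `test.concurrent` combined with its `maxConcurrency` setting. The scheduling idea behind it, running N tasks with at most `limit` in flight at once, can be sketched in plain Node; the fake "tests" and the limit of 2 below are made up for illustration:

```javascript
// Minimal sketch of concurrency-limited scheduling, the idea behind
// Jest's `test.concurrent` + `maxConcurrency`: run async tasks with
// at most `limit` in flight at any moment.
async function runConcurrent(tasks, limit) {
  const results = [];
  let inFlight = 0;
  let peak = 0; // highest number of tasks seen running at once
  let next = 0;

  return new Promise((resolve, reject) => {
    const launch = () => {
      if (next >= tasks.length && inFlight === 0) {
        return resolve({ results, peak });
      }
      // Start tasks until we hit the concurrency limit.
      while (inFlight < limit && next < tasks.length) {
        const index = next++;
        inFlight++;
        peak = Math.max(peak, inFlight);
        tasks[index]()
          .then((value) => { results[index] = value; })
          .catch(reject)
          .finally(() => { inFlight--; launch(); });
      }
    };
    launch();
  });
}

// Five fake "tests" that each resolve with their index after a delay.
const tasks = Array.from({ length: 5 }, (_, i) =>
  () => new Promise((res) => setTimeout(() => res(i), 10)));

runConcurrent(tasks, 2).then(({ results, peak }) => {
  console.log(results, 'peak in-flight:', peak);
});
```

In Jest itself you would just mark tests with `test.concurrent('name', async () => { ... })` and tune `maxConcurrency`; the sketch only shows why a per-file cap falls out naturally from that design.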
FYI: this landed on the roadmap (see #4057) |
Awesome, this is going to be a good feature |
Hey guys, what about Cucumber? Isn't it doing this already? I have worked with a Protractor framework that uses Cucumber, and we had everything running in parallel at the feature/test level using the maxInstances and shardTestFiles capabilities |
Every framework supported by wdio, including Cucumber, runs test files in parallel, same as Protractor. With this PR we want to run spec test blocks in parallel, like it is done in AVA |
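For context, this is roughly what that looks like in AVA, where the tests within a single file run concurrently by default, which is exactly the block-level parallelism discussed here. The file name and test bodies are made up for illustration:

```javascript
// hypothetical-login.spec.js — AVA runs these three tests
// concurrently within this one file, with no extra configuration.
const test = require('ava');

test('user can log in', async (t) => {
  // ...drive the browser here...
  t.pass();
});

test('user sees an error on a bad password', async (t) => {
  t.pass();
});

test('user can log out', async (t) => {
  t.pass();
});
```

AVA also offers `test.serial` for the cases where tests in a file must not overlap, which is the inverse of the opt-in concurrency other runners provide.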
Is it possible already to run features in parallel using a few mobile devices (i.e. Android emulators)? Could you please tell me how? |
Yes, it's possible; depending on your needs the configuration may differ. Please use Gitter for such questions, this is the wrong place to discuss it. You might need to set up a grid with multiple Appium nodes attached to it, and then run your tests in parallel. |
I've been investigating moving from a C# framework to wdio, and just found this bug as I couldn't work out how to run scenarios from within the same file in parallel. I see the Roadmap Item tag was removed; how come? Is there a workaround for this bug?

Can I verify something? Our regression set is 3000 atomic tests and we have a grid server with 100 slots. These 3000 scenarios are split across a number of files, with the largest file containing 500 scenarios. The regression suite runs, and once all the other files have finished, the file with 500 scenarios keeps running one test at a time, despite there being 99 free slots on the grid server. That sounds bonkers! I must be misunderstanding something? (Not blaming wdio here, as it seems that wdio just calls Jasmine/Mocha etc., who should do the parallelisation.) |
you can use |
Because we changed the way WebdriverIO manages its roadmap.
I don't think this is a "bug". It is more a limitation/requirement that you have (and some others).
That is bonkers. However, WebdriverIO is not able to allocate other slots in your grid server just like that; it depends on how Cucumber runs the feature files. There are ways to mitigate this and provide a test session on a per-scenario or per-spec basis. This is a very desirable feature that we would love to implement, and if you want to get involved, we appreciate any contribution in that direction. Right now WebdriverIO creates an instance per spec/feature file. I could see a scenario where we would introduce a new option (e.g. |
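To illustrate the shape such an option could take, here is a sketch of a wdio config; note that `sessionPerScenario` is an invented name for this sketch, not an existing WebdriverIO setting:

```javascript
// wdio.conf.js (sketch) — `sessionPerScenario` is a hypothetical
// option and does not exist in WebdriverIO today.
exports.config = {
    runner: 'local',
    specs: ['./test/features/**/*.feature'],
    maxInstances: 100,
    framework: 'cucumber',
    // Invented option: spawn a fresh worker (and browser session)
    // per scenario instead of per feature file, so a single large
    // feature file no longer serializes onto one grid slot.
    sessionPerScenario: true,
};
```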
I implemented an initial POC here: https://github.com/webdriverio/webdriverio/tree/cb-sharding The problem is that frameworks like Mocha, Jasmine and Cucumber run one spec/feature at a time. Even with add-ons like |
It is a bit out of the box, but it is possible to achieve the same outcome with Mocha and Jasmine by using tooling to split each spec into multiple specs. (Hat tip to Eric Elliot for the idea.) I've managed to set that up in WebdriverIO. The code to do the code splitting is pretty short; here's the gist of it, without writing new files. You can likely improve the performance a bit too. https://gist.github.com/seanpoulter/33299b0d38bc81dfeb75c6370dbdcc56

Unfortunately I dropped this exploration because the majority of my day-to-day work has been using watch mode to write page objects. Since watch mode does not handle added or removed files very well, I shelved this. |
Thanks for the gist, it was really useful. I took it a step further and changed my config to this. It's really buggy code, but it was just for an initial POC:

```js
const {ConfigParser} = require('@wdio/config');
const {ensureFileSync, readFileSync, removeSync, writeFileSync} = require('fs-extra');
const {parseSync, transformFromAstSync} = require('@babel/core');
const {cloneDeep} = require('lodash');

exports.config = {
    // ====================
    // Runner Configuration
    // ====================
    runner: 'local',

    // ==================
    // Specify Test Files
    // ==================
    specs: [
        './test/specs/**/*.js',
        // './test/specs/login.spec.js'
    ],

    // ============
    // Capabilities
    // ============
    maxInstances: 100,
    // capabilities can be found in the `wdio.local.chrome.conf.js` or `wdio.sauce.conf.js`

    // ===================
    // Test Configurations
    // ===================
    logLevel: 'silent',
    bail: 0,
    baseUrl: 'https://www.saucedemo.com/',
    waitforTimeout: 10000,
    connectionRetryTimeout: 90000,
    connectionRetryCount: 3,
    framework: 'jasmine',
    reporters: ['spec'],
    jasmineNodeOpts: {
        defaultTimeoutInterval: 60000,
        helpers: [require.resolve('@babel/register')],
    },
    services: [],

    onPrepare: (config) => {
        const configParser = new ConfigParser();

        // Resolve the spec file paths
        const specs = config.specs;
        const exclude = config.exclude;
        const currentSpecs = configParser.getSpecs(specs, exclude);

        // Keep a copy of the original specs and empty the current list
        config.originalSpecs = config.specs;
        config.specs = [];

        // Now iterate over each spec and split it into a single `it` per file
        currentSpecs.forEach((spec) => {
            const file = readFileSync(spec, 'utf8');
            // @TODO: this is crappy, but enough for an initial POC
            const singleDescribe = file.match(/(describe\()/g).length === 1;

            if (singleDescribe) {
                const ast = parseSync(file);
                // `findDescribeIndexes` returns an array; since there is
                // only one describe we take the first entry
                const [describeIndex] = findDescribeIndexes(ast);
                const itIndexes = findItIndexes(ast.program.body[describeIndex]);

                // Now do the magic
                createSingleItFiles(ast, describeIndex, itIndexes, spec);

                // Push the new specs into the config
                itIndexes.forEach((currentItIndex) => config.specs.push(`${spec}.${currentItIndex}.js`));
            } else {
                console.log(`  WARNING, THIS SPEC FILE: ${spec}
  CONTAINS MULTIPLE DESCRIBES AND CAN NOT BE SPLIT!`);
                config.specs.push(spec);
            }
        });
    },

    onComplete: (exitCode, config) => {
        // When done, remove the generated files and restore config.specs
        config.specs.forEach((spec) => removeSync(spec));
        config.specs = config.originalSpecs;
        delete config.originalSpecs;
    },
};

/**
 * For the describes
 */
const isCallToDescribe = (node) =>
    node.type === 'ExpressionStatement'
    && node.expression.type === 'CallExpression'
    && node.expression.callee.type === 'Identifier'
    && node.expression.callee.name.toLowerCase() === 'describe';
const findDescribeIndexes = (ast) =>
    ast.program.body.reduce((array, node, index) => isCallToDescribe(node) ? [...array, index] : array, []);

/**
 * For the `it`s
 */
const isCallToIt = (node) =>
    node.type === 'ExpressionStatement'
    && node.expression.type === 'CallExpression'
    && node.expression.callee.type === 'Identifier'
    && node.expression.callee.name.toLowerCase() === 'it';
const findItIndexes = (body) =>
    body.expression.arguments[1].body.body.reduce((array, node, index) => isCallToIt(node) ? [...array, index] : array, []);

const createSingleItFiles = (ast, describeIndex, itIndexes, spec) => {
    itIndexes.forEach((currentItIndex) => {
        const newAst = cloneDeep(ast);
        // Get the describe
        const describe = newAst.program.body[describeIndex];
        // The first argument is always the StringLiteral, the second
        // the (Arrow)FunctionExpression
        const describeArgs = describe.expression.arguments[1].body.body;
        // Filter out all other `it`s so we keep exactly one `it`
        // together with the before/after-All/Each hooks
        describe.expression.arguments[1].body.body = describeArgs.filter((arg, index) => {
            return index === currentItIndex || !itIndexes.includes(index);
        });
        const newCode = transformFromAstSync(newAst).code;
        ensureFileSync(`${spec}.${currentItIndex}.js`);
        writeFileSync(`${spec}.${currentItIndex}.js`, newCode);
    });
};
```

The outcome: I had 24 tests divided over 8 test files. When I ran that (with the old config), NOT running the tests in parallel per `it`, I got this log
After implementing the new config, with single `it`s per spec file:
|
In the end:
I also ran this on the following config:

```js
config.capabilities = [
    {
        browserName: 'googlechrome',
        platformName: 'Windows 10',
        browserVersion: 'latest',
        'sauce:options': {
            ...defaultBrowserSauceOptions,
        },
        ...chromeOptions,
    },
    {
        browserName: 'firefox',
        platformName: 'Windows 10',
        browserVersion: 'latest',
        'sauce:options': {
            ...defaultBrowserSauceOptions,
        },
    },
    {
        browserName: 'internet explorer',
        platformName: 'Windows 8.1',
        browserVersion: 'latest',
        'sauce:options': {
            ...defaultBrowserSauceOptions,
            iedriverVersion: '3.141.59',
        },
    },
    {
        browserName: 'MicrosoftEdge',
        platformName: 'Windows 10',
        browserVersion: '18.17763',
        'sauce:options': {
            ...defaultBrowserSauceOptions,
        },
    },
    {
        browserName: 'MicrosoftEdge',
        platformName: 'Windows 10',
        browserVersion: 'latest',
        'sauce:options': {
            ...defaultBrowserSauceOptions,
        },
    },
    // Safari 11 is not W3C compliant,
    // see https://developer.apple.com/documentation/webkit/macos_webdriver_commands_for_safari_11_1_and_earlier
    {
        browserName: 'safari',
        platform: 'macOS 10.13',
        version: '11.1',
        ...defaultBrowserSauceOptions,
    },
    {
        browserName: 'safari',
        platformName: 'macOS 10.14',
        browserVersion: 'latest',
        'sauce:options': {
            ...defaultBrowserSauceOptions,
        },
    },
];
```

and this was the end result
against this data when I don't split the files
As you can see, almost double the time due to higher memory/CPU consumption |
I've released a new service that will be the first step in helping with this challenge. It can be found here. I'll close this issue here and maybe we can continue the conversation there. Thanks for all ideas and help! |
Well done and congratulations Wim! It isn't too surprising that running things fully in parallel can take longer. Have you tried to find the number of describes per spec that may make it run faster? I'd reach for something like _.chunk. 😁 |
Hi @seanpoulter, thanks! Well, it's not the splitting that makes it take longer; it's spinning up 100 workers at the same time, which consumes a lot of CPU and memory. That's something for phase 2 😉 |
I think we're saying the same thing. Starting all those workers takes a lot of resources. I'm looking forward to seeing if we can get results faster than if you split the files by distributing |
Is your feature request related to a problem? Please describe.
As our automation suites grow, we need to be able to run our tests in parallel, since that's the most powerful mechanism to cope with the constant growth of tests. The challenge that I and other clients face is that we can only parallelize at the .js file level.
So for someone who wants to run massively in parallel, that's a really annoying problem to have. For example, if I would like to run 100 tests in parallel, I need 100 JS files. This problem is magnified for larger clients that like to run several hundred tests in parallel. Although doable, it's not practical, as you are forced to create JS files for the sake of achieving parallelization, and you end up doing weird things like breaking up your feature files into multiple .js files.
Describe the solution you'd like
I would like to be able to parallelize on every test method, as opposed to per test file. Of course, parallelization at the file level can stay too. This way I can have something like 10 files with 10 tests each and run all 100 in parallel.