local runner spawns one transpilation process for each feature file with cucumber framework #4949
There is nothing that can be done on our side; transpile your code before running tests instead of using in-memory transpilers. It's not about webdriverio but about how Node.js works: memory cannot be shared between the main and forked Node.js processes, so every package is loaded for every Node.js process, including the in-memory transpilers which are required to load TypeScript files.
That means that I have a disconnect from the code that I am developing on, i.e. there is an indirection between the files I watch and the ones I edit. This has potential implications for sourcemaps/errors, breakpoints, etc.
I am not sure this is 100% correct. wdio 4 works fine with this: I have one transpilation process, and all features are run against its output. The problem is the runner in wdio 5, which creates child processes. I understand that the current setup of the runner can't work this way, but it basically means that wdio 5 is unusable for non-native JS on a big codebase. I'd appreciate some help in solving this; wdyt about any of the possible solutions I outlined above (own runner, changing the OOTB local runner to be able to use one process, a service)?
Yes, it's correct, wdio v5 works differently.
Mhm, I wonder how v4 is different from v5 then. In both v4 and v5 we fork a child process, and in both we transpile code in the forked process. I can see the following solutions:
@joscha thanks for reporting this. I think this is an interesting use case which we haven't run into yet. I would love to provide a built-in solution for this. What do you think would work best?
I think ideally we'd transpile in memory once and then use the result for all feature runs. Watch mode should ideally be based on the actual files loaded, so my preferred solution would be to modify the local runner so it can be configured to use one child process and then run each feature file sequentially against it. I played around with it a bit; the optimum would be a configuration option for the OOTB local runner to support this, but if you think it is an edge case and don't want to support it, the second best option would be to write our own runner and use it with a custom config.
To clarify, your scenario is running tests in watch mode and recompiling them when a feature file is changed, right?
What is the OOTB local runner?
Not yet. I can't even get a normal test run through, because even in non-watch mode there is a transpile step per feature file (see the sample repository).
Sorry, OOTB = out of the box, i.e. the local runner.
e.g. `webdriverio/packages/wdio-local-runner/src/worker.js`, lines 67 to 72 at `5cceaa8`
I tried a few things now:

```js
const suite = options.cid.split('-')[0];
let worker;
if (!this.workerPool[suite]) {
    log.info(`New worker for ${suite}`)
    worker = new _worker.default(this.config, options, this.stdout, this.stderr);
    this.workerPool[suite] = worker;
} else {
    log.info(`Reusing worker for ${suite}`)
    worker = this.workerPool[suite];
}
```

but unfortunately this doesn't work properly, as workers build up state when they are first created: `webdriverio/packages/wdio-local-runner/src/worker.js`, lines 38 to 48 at `5cceaa8`
Thoughts: ideally there would be one event covering both feature files, e.g.:

```js
{ cid: '0-0',
  command: 'run',
  configFile: '/Users/joscha/work/wdio-ts-no-recompile/wdio.conf.js',
  argv: { ... },
  caps: { maxInstances: 5, browserName: 'chrome' },
  specs: [
    '/Users/joscha/work/wdio-ts-no-recompile/test/features/1.feature',
    '/Users/joscha/work/wdio-ts-no-recompile/test/features/2.feature'
  ],
  server: { ... },
  execArgv: [],
  retries: undefined
}
```

but two separate events like this are emitted:

```js
{ cid: '0-0',
  command: 'run',
  configFile: '/Users/joscha/work/wdio-ts-no-recompile/wdio.conf.js',
  argv: { ... },
  caps: { maxInstances: 5, browserName: 'chrome' },
  specs: [ '/Users/joscha/work/wdio-ts-no-recompile/test/features/1.feature' ],
  server: { ... },
  execArgv: [],
  retries: undefined
}
```

```js
{ cid: '0-1',
  command: 'run',
  configFile: '/Users/joscha/work/wdio-ts-no-recompile/wdio.conf.js',
  argv: { ... },
  caps: { maxInstances: 5, browserName: 'chrome' },
  specs: [ '/Users/joscha/work/wdio-ts-no-recompile/test/features/2.feature' ],
  server: { ... },
  execArgv: [],
  retries: undefined
}
```
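The merging idea can be sketched as a small standalone helper (hypothetical, not wdio API): collapse run events that share the same runner-id prefix (`'0-0'`, `'0-1'` → `'0'`) into one event carrying all spec files.

```javascript
// Hypothetical helper illustrating the batching discussed above: group
// per-spec run events by the suite part of their cid and merge their specs.
function batchRunEvents(events) {
    const batches = new Map();
    for (const ev of events) {
        const suite = ev.cid.split('-')[0];
        if (!batches.has(suite)) {
            // First event for this suite: keep it and start collecting specs.
            batches.set(suite, { ...ev, cid: `${suite}-0`, specs: [...ev.specs] });
        } else {
            batches.get(suite).specs.push(...ev.specs);
        }
    }
    return [...batches.values()];
}

const merged = batchRunEvents([
    { cid: '0-0', command: 'run', specs: ['test/features/1.feature'] },
    { cid: '0-1', command: 'run', specs: ['test/features/2.feature'] }
]);
console.log(merged.length);   // 1
console.log(merged[0].specs); // both feature files
```

One worker per batch would then transpile only once for all specs in that batch.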
Seems as if I can fix the performance problem by combining spec files into one suite, e.g. if I change this part of the launcher (`webdriverio/packages/wdio-cli/src/launcher.js`, lines 158 to 165 at `5cceaa8`) from:

```js
specs: this.configParser.getSpecs(caps.specs, caps.exclude).map(s => ({
    files: [s],
    retries: specFileRetries
})),
```

to:

```js
specs: [{
    files: this.configParser.getSpecs(capabilities.specs, capabilities.exclude),
    retries: specFileRetries
}],
```

instead of the correct:
@christian-bromann the attempt at merging the spec files into one suite seems the most promising. Is there any downside you can see? We could make this configurable via a configuration option.
Yes, these spec files are being run consecutively, which slows down your test execution time.
Sounds like an acceptable solution to me. I would love to get the reporting fixed as part of it.

> Yes, these spec files are being run consecutively, which slows down your test execution time.

Yes, that's expected, but if we make it a configuration option then everyone can decide what the trade-off is between the number of specs per suite and transpilation/processing time.

> Sounds like an acceptable solution to me. I would love to get the reporting fixed as part of it.

Agreed, will take a look!
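If such a configuration option existed, the config fragment could look like the sketch below. Note that `specsPerSuite` is an invented name used purely for illustration here; it is not a real WebdriverIO option.

```javascript
// wdio.conf.js, hypothetical fragment: `specsPerSuite` is a made-up name
// for the batching option discussed in this thread, not a real wdio setting.
const config = {
    runner: 'local',
    framework: 'cucumber',
    specs: ['./test/features/**/*.feature'],
    // Batch this many feature files into one worker so the in-memory
    // TypeScript transpilation happens once per batch instead of per file:
    specsPerSuite: 50,
    maxInstances: 5
};
```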
I have a fix + tests in https://github.com/webdriverio/webdriverio/compare/master...joscha:joscha/specs-per-suite?expand=1 but for the reporting output it seems it's not actually the spec reporter that is the problem, but the data that's being passed on to the reporters. It seems incorrect: some data seems to be jumbled up somehow. I think it might be the test
Tried to find out where the reporting goes wrong. I think it is incorrect because currently the launcher assumes that there is one spec per run. I added d78f22f to fix this, but it turns out that the run specs are not reported properly either. @christian-bromann is there any way I can pair with someone on this? I feel like maybe there are other options to solve this problem that I am missing.
Trying to make specs batchable seems to be a rabbit hole. There are multiple places that have code like this:

```js
if (passed) {
    this.result.passed++
    this.onSpecPass(cid, job, retryAttempts)
} else if (retry) {
    this.totalWorkerCnt++
    this.result.retries++
    this.onSpecRetry(cid, job, retryAttempts)
} else {
    this.result.failed++
    this.onSpecFailure(cid, job, retryAttempts)
}
```

(see the passed, retried and failed state being just incremented?) This code would at least need to become:

```js
if (passed) {
    this.result.passed += job.specs.length
    this.onSpecPass(cid, job, retryAttempts)
} else if (retry) {
    this.totalWorkerCnt++
    this.result.retries += job.specs.length
    this.onSpecRetry(cid, job, retryAttempts)
} else {
    this.result.failed += job.specs.length
    this.onSpecFailure(cid, job, retryAttempts)
}
```

but even that is not entirely correct, because each spec of a job has its own status, while webdriverio currently assumes that there is only ever one spec per job, so a lot of the conditions are simplified to just counting up or down by 1. In order to fix these places, the way we store state about specs inside suites needs to be changed, which pretty much affects the whole codebase, from launcher over runner to reporter.
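What truly per-spec accounting might look like, as a sketch over a hypothetical job shape (the `{ specs: [{ status }] }` structure is assumed for illustration and is not the actual wdio internal model):

```javascript
// Sketch: tally each spec's own status instead of bumping a counter by 1
// per job, which is what batched jobs with several specs would require.
function tallySpecResults(jobs) {
    const result = { passed: 0, failed: 0, retries: 0 };
    for (const job of jobs) {
        for (const spec of job.specs) {
            if (spec.status === 'passed') result.passed++;
            else if (spec.status === 'retry') result.retries++;
            else result.failed++;
        }
    }
    return result;
}

// A batched job can mix outcomes, which the per-job counters cannot express:
const result = tallySpecResults([
    { specs: [{ status: 'passed' }, { status: 'failed' }] },
    { specs: [{ status: 'retry' }] }
]);
console.log(result); // { passed: 1, failed: 1, retries: 1 }
```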
I unsuccessfully tried the following:
Things that I haven't tried, yet:
How about giving another shot for manual transpilation? I had it in a big project and didn't experience any serious issues. |
What do you mean by manual transpilation? As in generating the transpiled code ahead of time, putting it somewhere, and running webdriverio against it?
If yes, did that work for you with file watching, breakpoints and stack traces?
It also doesn't make a difference for the configuration file as far as I can tell, unless we transpile the wdio.conf.ts ahead of time as well.
Cheers,
J
Yes, I transpile code in the onPrepare hook, so it happens only once, and I do some magic for debugging when running a single file in VSCode, because it's required to change ts to js. I had some minor issues, but nothing critical. I ended up using ts-node when I need to debug a single file, and transpiling files in advance for other cases. Consider using source maps for your ts files.
I think ideally the debug case and the normal case would be the same. I am really keen to make this work out of the box so all docs, etc. hold true for this case as well, and I don't have to have 200+ developers relearn how to debug just because we do a wdio upgrade. I also think that this case comes up a lot more often than you know; if we had a small codebase that transpiles fast, I wouldn't have noticed either, even though some seconds are lost in each test run. The reason I noticed is that the transpilation literally takes 40s to 1m, and wdio 5+ becomes unusable for us. If we do the calculation for a smaller company and assume 100 feature files and a small transpilation overhead of 5s, then we are still looking at an overall test slowdown of ~8 min (99 * 5 / 60), which is bearable but something I'd still call substantial.
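The arithmetic above, spelled out (the numbers are the comment's own example, not measurements):

```javascript
// Overhead model from the comment above: with n feature files, one
// transpilation is always needed, so the extra cost is (n - 1) repeats.
function extraTranspileMinutes(featureFiles, secondsPerTranspile) {
    return ((featureFiles - 1) * secondsPerTranspile) / 60;
}

console.log(extraTranspileMinutes(100, 5)); // 8.25 minutes of pure overhead
```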
Absolutely agree, feel free to help!
I opened a draft pull request here: #4975 for discussion. It solves the problem of multiple transpilations whilst keeping reporting intact. I was hoping to contain the changes to the
@joscha I am trying to pick up this conversation, and I am wondering if it would be possible to extend the example repo so that we can see the increased transpilation time there (not up to a minute, but something like 10s). Something that is weird is that we have been using child processes in v4 as well, and I don't see what has changed in v5 that would make this a problem.
Okay, will try to extend the example. Did you check out my first fix, where I batch tests together? It solves the problem, and you can even see the speed increase in the simple repository I created.
Yes, I commented on it.
It runs faster because it doesn't spin up a new session for every small feature file. However, this would tremendously slow down tests with larger test files and more capabilities, as they are not run in parallel.
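That trade-off can be put into a rough wall-clock model. Everything below, the formula, the numbers, and the simplifying assumptions (uniform per-file run time, transpilation paid once per worker), is illustrative, not measured wdio behavior:

```javascript
// Rough model: batching saves repeated transpilation but serializes runs,
// while per-file workers parallelize runs but repay transpilation each time.
function wallClockSeconds({ files, transpileSec, runSec, maxInstances, batched }) {
    if (batched) {
        // One worker transpiles once, then runs every file consecutively.
        return transpileSec + files * runSec;
    }
    // One worker per file, limited by maxInstances; each wave of workers
    // pays the transpilation cost again.
    const waves = Math.ceil(files / maxInstances);
    return waves * (transpileSec + runSec);
}

const params = { files: 100, transpileSec: 5, runSec: 30, maxInstances: 5 };
const serialTime = wallClockSeconds({ ...params, batched: true });
const parallelTime = wallClockSeconds({ ...params, batched: false });
console.log(serialTime);   // 3005 seconds
console.log(parallelTime); // 700 seconds
```

With cheap transpilation and long-running specs, parallel per-file workers win; with a 40s-to-1m transpile step as described above, the balance tips the other way, which is why a configurable knob makes sense.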
Here is the PR joscha/wdio-ts-no-recompile#2 that adds some ts files to increase the compile time enough to make it noticeable. @christian-bromann please take a look.
@christian-bromann did you have time to check out the sample yet?
We will tackle running multiple specs in a single worker in #6469 |
Environment (please complete the following information):
- WebdriverIO version: 5.18.4
- Mode: local runner
- Sync/async: sync
- Node.js version: v10.16.0
- npm version: not using npm; yarn 1.21.1
- Browser: Chrome (but does not matter for this problem)
- Platform: OSX Mojave 10.14.6 (18G87)

Config of WebdriverIO
https://github.com/joscha/wdio-ts-no-recompile/blob/master/wdio.conf.ts
EDIT: https://github.com/joscha/wdio-ts-no-recompile/blob/master/wdio.conf.js (removed one layer of Typescript to show the problem without noise)
Describe the bug
When using the cucumber framework with the `local` runner and Typescript, it turns out that, because the `local` runner spawns one subprocess per feature file, the project gets transpiled `n` times for `n` feature files. Our (mono-)repository has ~250 feature files and tens of thousands of Typescript files, which means that the overhead is immense (about 1 minute for each transpilation step, and that's without type-checking enabled).
We are trying to upgrade from wdio 4 (`4.14.4`) to wdio 5, and with the current `local` runner it doesn't seem possible to make this work. Current thought is to do one of the following:
- change the `local` runner in this repository to teach it to run all feature files in one process if so desired (something something `maxWorkers=1`)
To Reproduce
Steps to reproduce the behavior:
- use the cucumber framework with the `local` runner
- use `TS_NODE_DEBUG=true wdio` to run wdio to see transpilation output (you should see one for each feature file being run)

Example repo is here: https://github.com/joscha/wdio-ts-no-recompile - you can run `yarn test` in it to see log output from `ts-node` three times (once for the wdio config and once for each feature file)

Expected behavior
Transpilation should only happen once for all feature files.
cc @olebedev @gustavohenke