[Tests] Yeoman tests are flaky (specifically generator-test and generator-android-test) #2974
Comments
I also noticed the Podfile test failed on my local machine when running the Jest tests in-band. I believe we should look into mocking "fs" if that's possible.
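For illustration, a minimal sketch of the idea behind mocking `fs`: route all file operations through an in-memory map so parallel test processes never touch the same directory on disk. The function names here are just the common `fs` calls; in practice one would wire this up through Jest's manual-mock mechanism rather than a hand-rolled object.

```javascript
// Illustrative sketch of an in-memory fs stand-in (not the Jest
// mocking API itself). All "files" live in a Map, so nothing on
// disk is shared between test processes.
const files = new Map();

const mockFs = {
  writeFileSync(filePath, contents) {
    files.set(filePath, String(contents));
  },
  readFileSync(filePath) {
    if (!files.has(filePath)) {
      throw new Error('ENOENT: no such file ' + filePath);
    }
    return files.get(filePath);
  },
  existsSync(filePath) {
    return files.has(filePath);
  },
};

// Tests that only go through mockFs can no longer clash over a
// shared temp directory, however they are scheduled.
mockFs.writeFileSync('/tmp/Podfile', "pod 'React'");
console.log(mockFs.existsSync('/tmp/Podfile')); // true
```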
Thanks for reporting!
Sure thing. A couple of other things I observed were: […]
Example of a failed test run: https://travis-ci.org/facebook/react-native/jobs/81832471
Looks like we're using the Yeoman test helpers correctly: http://yeoman.io/authoring/testing.html
The tests not being isolated properly sounds like a reasonable explanation as well. Does […]
It looks like the tests run sequentially when I run them locally.
One way to debug this would be to add logging to the tests and check the output on Travis.
Jest will spawn child processes for each test file, but I believe the specs within each file run sequentially.
I looked into that and saw beforeEach -> spec 1 -> beforeEach -> spec 2 -> etc., so it seemed like the specs run sequentially. The fact that the tests are flaky made me think it had to do with scheduling, though.
I think […]
Note: after the tests are more reliable we should delete […]
👍 Thanks for the "fix"! :) Asked whether Yeoman tests can be run in parallel here: http://stackoverflow.com/questions/32751180/can-yeoman-tests-be-run-in-parallel
Turns out the Yeoman tests rely on changing the current working directory, so they can't be run in parallel: http://stackoverflow.com/questions/32751180/can-yeoman-tests-be-run-in-parallel/32751527
Nice find :) We could manually mock out […]
Yes, just saw that line too and was thinking about how to avoid relying on the CWD. Since […]
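A sketch of what virtualizing the working directory could look like, assuming the code under test only reaches the CWD through `process.cwd()`/`process.chdir()` (an assumption; the real fix would live in a Jest setup file, and anything that resolves relative paths inside Node's native bindings would bypass this):

```javascript
// Sketch: replace process.cwd/process.chdir with a virtual CWD so
// parallel test processes never fight over the real one.
const realCwd = process.cwd;
const realChdir = process.chdir;

let virtualCwd = '/virtual/test-root';
process.cwd = () => virtualCwd;
process.chdir = (dir) => {
  virtualCwd = dir; // no disk access, so no global state changes
};

process.chdir('/virtual/generator-test');
console.log(process.cwd()); // '/virtual/generator-test'

// Restore the real implementations after each test.
process.cwd = realCwd;
process.chdir = realChdir;
```

The limitation noted in the lead-in is part of why an in-memory fs was also on the table: stubbing `process.chdir` alone only helps code that asks Node for the CWD explicitly.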
Or we could just tell Jest to run only the CLI tests sequentially and remove […]
I'd prefer we try to get the mocked fs + process.chdir working, and if that fails then run the CLI tests sequentially, because there is a measurable perf improvement from the parallelism on my machine (2-3x, I think). For an in-memory fs, webpack has one that it has used for quite a while: https://www.npmjs.com/package/memory-fs
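If the mocking route fails, the sequential fallback is essentially a one-flag change using Jest's `--runInBand` option (sketch only; the exact test-path pattern for this repo's CLI tests is an assumption):

```shell
# Run only the CLI/generator tests in a single worker so they
# cannot race on the shared temp directory.
jest --runInBand local-cli
```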
Cool, I can give it a try tomorrow unless you're faster while I'm asleep :)
OK. I probably won't get around to it, but I'm happy to review if you'd like an extra pair of eyes on it.
kk |
Yeah, we should aim at mocking the things that are clashing and make sure each test can run safely in isolation. Let me know if you need any help @mkonicek.
Hmm, how has this only now started to happen? Did something change in Jest? Did the tests use to run sequentially? It's weird because these tests were stable for a long time, and the tests themselves haven't changed. Perhaps an easier fix than mocking fs would be to simply change these tests to use different […]
Talked to @foghina; the right solution is to submit a PR to Yeoman. Mocking […]
What about doing what @foghina said and using a different […]
#3000 (different names) seems to be working well. If we see more failures in the future let's revisit this issue then. |
These tests sometimes fail on Travis and on my local machine as well. My suspicion is that it has to do with how the tests are scheduled: the generator tests write to a temporary directory on disk, and the Yeoman APIs are global, so maybe there isn't good isolation between tests (there is one `assert.file` function that somehow knows what the current temp dir is). Perhaps we should disable these tests for now, or run the local-cli tests separately from the JS tests and mark them as "allowed to fail" on Travis.