
Run individual tests in separate processes #421

Closed
jamestalmage opened this issue Jan 13, 2016 · 29 comments

@jamestalmage
Contributor

Right now we fork per test file, but not per individual test method. We have discussed the latter a few times in other threads, and finally decided to open an issue to discuss.

Pros:

  • Truly isolated tests. No chance they interfere with each other unless they are interacting with outside resources (servers / databases / file system).
  • Fresh global state for each test. This would be a big help for polyfill developers.

Cons:

  • It will almost certainly introduce a performance penalty.
  • Complete isolation could mask some errors. (e.g. what if your function always blows up the second time it is run? Complete isolation might mean your test suite never runs it twice.)
  • It will be really hard to replicate when it comes to implementing browser support.
@sindresorhus sindresorhus added this to the Future milestone Jan 13, 2016
@sindresorhus
Member

Fresh global state for each test. This would be a big help for polyfill developers.

This would be huge, not only for polyfill dev, but anything that modifies global state.

It will almost certainly introduce a performance penalty.

Sometimes it's worth it. For example in https://github.com/sindresorhus/get-stdin I had to split my tests into two files as they somehow conflicted with each other.

It will be really hard to replicate when it comes to implementing browser support.

That won't be a goal either. We'll just run it normally there. Running files in parallel will also be gone when run in the browser.

@jamestalmage
Contributor Author

Running files in parallel will also be gone when run in the browser.

Not sure why that would need to be true. Just open a tab per test-file (setting this flag).

@sindresorhus
Member

Oh, didn't realize that was possible. Make sure to mention that in #24 ;)

@ariporad
Contributor

@sindresorhus: You changed your profile picture!

As for the actual topic at hand, I'm not really sure either way. On one hand, it would be nice, but on the other, there would be a huge performance penalty. And some unexpected behavior (read: it should be behind a flag). It kind of starts to remind me of Google Apps Script, which isn't a good thing.

Also, then before === beforeEach? That's something else we'd have to think about.

@sindresorhus
Member

@ariporad Test hooks would still work as we would execute the file as normal, but only run a specific test (with hooks) instead of all.

@jamestalmage
Contributor Author

Yep, the only issue would be:

before(t => setupDatabase());

@ariporad
Contributor

@sindresorhus: I know, but then the before hook would have to be run for each test (because it would have to be run once per process), therefore making it equal to beforeEach.

@sindresorhus
Member

Ah, that's true.

@jamestalmage
Contributor Author

Maybe add setup/teardown methods that maintain the before/after semantics regardless. They wouldn't be able to participate in t.context, or t.context would need to be serializable.

@sindresorhus
Member

I'd rather look into solutions that don't require adding two new methods with very similar semantics.

@ariporad
Contributor

@jamestalmage: Yeah, didn't you hear? @sindresorhus has decreed that there shall be only one possible way to do anything! 😄

@sindresorhus
Member

@ariporad Not at all. I just want us to explore other options before adding more API surface. Adding new methods and options is easy. Keeping things minimal while powerful is very hard.

[image: horrible-ui]

@ariporad
Contributor

@sindresorhus: that was like 98% joke, sorry.


@sindresorhus
Member

@ariporad I know, but when rereading my previous comment I sounded a bit dictatorial, so just wanted to clarify ;)

@jamestalmage
Contributor Author

That picture is awesome

@vadimdemedes
Contributor

Have no idea how I missed this thread, just stumbled upon it.

I think it's a bad idea. Not only will it dramatically slow down tests, it will also introduce a lot of mess into the AVA core. A quick example that would require "custom trickery": serial tests. They won't work after this change. In my opinion, isolation per file is more than enough. A new Node process plus Babel witchcraft per test is a huge overhead.

@jamestalmage
Contributor Author

I certainly don't think it should be the default behavior. But for certain situations (polyfills / etc) I think the benefits are so huge, they outweigh the downside.

@sindresorhus
Member

I see both up and downsides. This issue is just to explore it. It's very likely we'll never do this.

@novemberborn
Member

For nyc, @jamestalmage wrote forking-tap, which splits a single test file into a bunch of files, one per individual test. Then tap can run the tests in isolation. This was critical since nyc modifies the process it's run in.

A forking-ava pretest step could do the same for us.

@vadimdemedes
Contributor

I'm pretty sure it is an insane idea at the moment, so we should not even try to do it now :)

@jamestalmage
Contributor Author

I don't see why it should be considered insane. There are real downsides to performance, so I don't think it can ever be the default, but there are also real benefits.

Perhaps the solution might be to use a fork method sugar:

function installPolyfill() {

}

function setup() {
  // other setup??
}

test.before.fork(setup); // run as before in this process, and every forked one.

test.before(installPolyfill); // only run in this process, not in forked ones.

test.fork('pre/post install check 1', t => {
  // isolated process
  // setup has already run, but installPolyfill has not
  t.notOk(check1());
  installPolyfill(); 
  t.ok(check1());
});

test.fork('pre/post install check 2', t => {
  // isolated process
  // setup has already run, but installPolyfill has not
  t.notOk(check2());
  installPolyfill(); 
  t.ok(check2());
});

test('foo', t => {
  // shared process with 'bar'
  // setup and installPolyfill have both run
});

test('bar', t => {
  // shared process with 'foo'
  // setup and installPolyfill have both run
});

This gets around performance issues by only creating additional forks for the few tests where complete isolation is necessary (I think pre/post polyfill checks is a good example) - the majority of your tests can stay fast.

I think the API also helps make sense of the "single before" vs "before per process" ambiguity discussed above.

I still think we have higher priorities, but I would like to see this implemented eventually.

@sindresorhus
Member

Sure, but I don't see this being something we prioritize for 1.0.0. It's a nice to have, but not essential at all.

@novemberborn
Member

Perhaps the solution might be to use a fork method sugar:

👍

Of course at that point you could also make separate test files for the one or two forks you need.

@jamestalmage
Contributor Author

Of course at that point you could also make separate test files for the one or two forks you need.

True. Though if that number is 6 or 7, this has a lot more appeal. Especially if there is some amount of code sharing (helper functions, etc.)

@kristianmandrup
Contributor

@jamestalmage I like your proposal. I think it should be implemented as an extension to ava.
You could even add some conventions, like *.test.fork.js for test files where all tests should be forked, or even a /*-fork folder?

@sindresorhus sindresorhus changed the title Fork per test. Run individual tests in separate processes Nov 3, 2016
@novemberborn novemberborn removed this from the Future milestone Aug 13, 2017
@novemberborn novemberborn added the enhancement new functionality label Aug 13, 2017
@novemberborn
Member

This just seems far too niche and therefore destined to be a low priority issue that we'll never land. I'm closing this issue.

If you're reading this and are thinking "but this would be perfect for my use case" please chime in.

@lavie

lavie commented Feb 8, 2018

@novemberborn chiming in. :)

So, our use case for this (wonderful idea, btw) is testing various hooking of Node.js standard streams (stdout/err). We do that kind of stuff a lot because we run arbitrary user code in a sort of sandbox, and we need to hook the streams in order to process them and present them to the user. Needless to say, such hooking is really problematic if you try doing it more than once in the same process. So a fork per test would be wonderful. I admit it's not a mainstream use-case, but if any of you guys could think of a way of doing this without actually building it into Ava, then I would be very thankful. Right now we have to resort to "docker container per test case" level hacks, and it's not fun. :(

@novemberborn
Member

@lavie thanks for your comment.

I'd put each test in its own test file. Or, alternatively, write a fixture that is run in a child process. Then you can have all the tests in one file but still control the streams inside the fixture. (You'd probably want to use test.serial to avoid spinning up too many processes all at once.)

@StuAtGit

StuAtGit commented May 3, 2019

If you're reading this and are thinking "but this would be perfect for my use case" please chime in.

Chiming in. Albeit with caveats. There are some other (social) processes that are triggering my situation.

Was told "We're using AVA for tests, not anything else." Why? "It's better, it runs everything in parallel, so speed." Hmm, but every test I see is marked with serial(). "Well, those tests are bad; good tests can be run in parallel."

Why are they all marked serial? "Because we're using sinon, not anything else." And:
#1066

🤷‍♀
