
--fail-fast keeps running other test files #1158

Closed
novemberborn opened this issue Dec 19, 2016 · 12 comments
Labels
bug (current functionality does not work as desired) · help wanted · scope:scheduling

Comments

@novemberborn (Member)

The test process that encounters the error simply exits if --fail-fast is enabled. Since each file runs in its own process, no other tests from that file are reported after the first failure is encountered.

However, other test processes run to completion. I'm not sure whether to:

  • terminate them outright, which prevents any cleanup, even if there are no failures in those processes
  • silence their output, which at least means the test output isn't cluttered with passing tests, but makes it harder to see why ava won't exit
  • shrug and say that this is expected

When running with limited concurrency we'll also run into this scenario, so we should implement the same behavior. Additionally we mustn't launch new test processes.

Any thoughts on what to do with processes that are already running?
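For illustration, the three options above could be sketched as scheduler logic. This is a hypothetical sketch, not AVA's implementation; the `workers` shape (a `failed` flag and a `kill()` method, e.g. wrapping `ChildProcess#kill`) is assumed:

```js
const FailFastMode = {
  TERMINATE: 'terminate', // kill remaining workers outright; their cleanup never runs
  SILENCE: 'silence',     // let them run to completion, but drop their output
  IGNORE: 'ignore'        // current behaviour: everything runs and is reported
};

// React to the first failure across all test processes.
function onFirstFailure(workers, mode) {
  for (const worker of workers) {
    if (worker.failed) continue; // the failing worker exits on its own
    if (mode === FailFastMode.TERMINATE) {
      worker.kill();
    } else if (mode === FailFastMode.SILENCE) {
      worker.silenced = true; // the reporter would skip events from silenced workers
    }
    // IGNORE: leave the worker alone
  }
}
```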

@novemberborn added the bug (current functionality does not work as desired) and question labels Dec 19, 2016
@novemberborn (Member, Author)

You can reproduce this issue by creating one.test.js:

```js
import test from 'ava'

test(t => t.fail())
```

and two.test.js:

```js
import test from 'ava'

test(t => t.pass())
```

Then run ava --verbose --fail-fast.

@ThomasBem (Contributor)

I also noticed this while working on issue #1134 today. Here is a screenshot of me trying to run two different files both with a failing test somewhere in the middle.

[Screenshot, 2016-12-21: output of two test files, each with a failing test mid-file]

When things are as small as these examples it does not seem like much of a problem. But if you are running a huge test suite for a complex application, then allowing AVA to keep running tests kind of defeats the purpose of --fail-fast, in my opinion.

So to keep the --fail-fast option working as intended, I would say shut down any running processes. But I don't know what problems that might cause, since you talk about preventing cleanup:

> terminate them outright, which prevents any cleanup, even if there are no failures in those processes

@mightyiam (Contributor)

My thoughts:

  1. do not run any more test files
  2. do not run any more tests
  3. allow tests to complete, including their afterEach
  4. allow test files to complete, including their after

Only then, exit.
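The graceful shutdown proposed above could be sketched as follows. This is a minimal illustration under assumed names (`queue` of not-yet-started files, `running` promises that resolve once a test and its afterEach/after hooks finish), not AVA's scheduler:

```js
// Graceful fail-fast: stop starting new work (steps 1 and 2), but let
// in-flight tests and their cleanup hooks finish (steps 3 and 4).
async function gracefulFailFast(queue, running) {
  queue.length = 0;            // drop all queued test files/tests
  await Promise.all(running);  // wait for in-flight work, hooks included
  // Only then would the runner exit.
}
```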

@novemberborn (Member, Author)

> So to keep the --fail-fast option working as intended, I would say shut down any running processes. But I don't know what problems that might cause, since you talk about preventing cleanup

Test files with failing tests are terminated quickly (but not immediately, in a synchronous, blocking sense). So --fail-fast already implies that any cleanup code you have in your test file may not run if there's an error. I'm fine with extending this behavior to other test files, too.

I'm leaning towards forcefully stopping all other test processes and not logging any further output once a test has failed.

@mightyiam (Contributor)

> Test files with failing tests are terminated quickly (but not immediately, in a synchronous, blocking sense). So --fail-fast already implies that any cleanup code you have in your test file may not run if there's an error. I'm fine with extending this behavior to other test files, too.

> I'm leaning towards forcefully stopping all other test processes and not logging any further output once a test has failed.

How is this reasonable? This could perhaps be a feature, if you find it useful, but not --fail-fast. It should be called --fail-very-fast or follow some other naming scheme.

There should be a feature that fails on the first failing test yet lets the run finish gracefully, like I described. And it should be the more approachable, default, recommended option.

That's how I see it.

@novemberborn (Member, Author)

That seems unnecessarily complicated. The way I see it you'd use --fail-fast to quickly find errors so you can fix them. It helps if any state created by your tests is preserved (by virtue of not being cleaned up).

@mightyiam (Contributor)

It will probably not help that state created by your tests is preserved because that would be state created by tests other than the one which actually failed. Confusing, I think.

@novemberborn (Member, Author)

The state of the failing test would be preserved amongst potentially that of others.

@mightyiam (Contributor)

The state of the failing test is the only one we'd be interested in. The others' could be confusing and a hassle to manually clean up. That's why I propose: let them finish and clean up after themselves. No new tests will start. Just the ones already running.

@novemberborn (Member, Author)

Ah, so finish the currently running, non-failing tests, including their afterEach callbacks. That way most state would be cleared up, though you may see more than one failure in the test output.

We wouldn't be able to run the after callbacks though — they may assume the failing tests have completed and therefore crash.

@novemberborn (Member, Author)

#1693 ensures --fail-fast works with limited concurrency mode, preventing new test files from being run. It also fails fast after a timeout or if a test file crashes.

As part of #1684 I'll make sure we stop reporting subsequent test results once a test fails, while still letting the currently active tests run to completion.

novemberborn added a commit that referenced this issue Feb 10, 2018
Don't run new test files after:

* A timeout has occurred
* A test has failed
* Running the test file itself caused an error

Refs #1158.
novemberborn added a commit that referenced this issue Feb 10, 2018
Fixes #1684. Fixes #1416. Refs #1158.

Properly support serial hooks. Hooks are divided into the following
categories:

* before
* beforeEach
* afterEach
* afterEach.always
* after
* after.always

For each category all hooks are run in the order they're declared in.
This is different from tests, where serial tests are run before
concurrent ones.

By default hooks run concurrently. However a serial hook is not run
before all preceding concurrent hooks have completed. Serial hooks are
never run concurrently.

Always hooks are now always run, even if --fail-fast is enabled.

Internally, TestCollection, Sequence and Concurrent have been removed.
This has led to a major refactor of Runner, and some smaller breaking
changes and bug fixes:

* Unnecessary validations have been removed
* Macros can be used with hooks
* A macro is recognized even if no additional arguments are given, so it
can modify the test (or hook) title
* --match now also matches todo tests
* Skipped and todo tests are shown first in the output
* --fail-fast prevents subsequent tests from running as long as the
failure occurs in a serial test
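The serial-hook rule in the commit message above can be sketched as a small scheduler. This is a hypothetical illustration of the described ordering, not AVA's actual Runner code; the `hooks` array shape (a `serial` flag and a `run()` returning a promise) is assumed:

```js
// Hooks run in declaration order. Concurrent hooks may overlap, but a
// serial hook waits for all preceding concurrent hooks to complete and
// is never run concurrently with anything else.
async function runHooks(hooks) {
  let pending = [];
  for (const hook of hooks) {
    if (hook.serial) {
      await Promise.all(pending); // wait for preceding concurrent hooks
      pending = [];
      await hook.run();           // serial hooks never overlap
    } else {
      pending.push(hook.run());
    }
  }
  await Promise.all(pending);
}
```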
@novemberborn (Member, Author)

As part of #1684 I'll make sure we stop reporting subsequent test results once a test fails, while still letting the currently active tests run to completion.

OK, so what I ended up doing is that workers won't run other serial tests, or (as long as the error occurs early enough) concurrent tests. Tests that have started are run to completion, including hooks, and shown in the test output. If a test fails in one worker, AVA will try to interrupt the other workers.
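The worker-side timing described here ("as long as the error occurs early enough") could be sketched like this. A hypothetical illustration, not AVA's worker code; `makeWorker` and its methods are invented names:

```js
// An interrupt only takes effect if it arrives before the worker has
// started running its concurrent tests.
function makeWorker() {
  let interrupted = false;
  let startedConcurrent = false;
  return {
    interrupt() {
      // Too late once concurrent tests have started.
      if (!startedConcurrent) interrupted = true;
    },
    runConcurrent(tests) {
      if (interrupted) return []; // interrupted in time: run nothing
      startedConcurrent = true;
      return tests.map(test => test());
    }
  };
}
```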

novemberborn added a commit that referenced this issue Feb 10, 2018
If a failure occurs in one worker, attempt to interrupt
other workers. This only works as long as the other worker has not yet
started running concurrent tests.

Fixes #1158.