Implement static analysis of test files (Blocker for #78). #696

Closed
jamestalmage opened this issue Apr 1, 2016 · 11 comments

@jamestalmage
Contributor

Blocks #78.

We need a way to do static analysis of test files, specifically to discover if any use exclusive tests (i.e. test.only).

There are plenty of complications:

  1. Naming the AVA import something other than test:

    import ava from 'ava';
  2. Storing a reference to a test.only for reuse:

    const shortcut = test.serial.only;
    
    shortcut(t => t.is(...)); // will be an exclusive, serial test

    The same can also be achieved with import statements:

    import test, {skip, todo, only} from 'ava';

    Or require statements:

    var skip = require('ava').skip;
  3. Dynamically generating tests (see [Idea]: test macros #695 for a possible solution).

    function testAddition(a, b, expected) {
     test(`${a} + ${b} === ${expected}`, t => t.is(a + b, expected));
    }
    
    testAddition(2, 2, 4);
    testAddition(3, 4, 7);
  4. Non-constant variables (variables assigned only after their declaration; this is not about const):

    var test;
    test = require('ava');

It seems possible to support cases 1 and 2 without a lot of pain. Case 3 quickly becomes impossible. I think #695 provides a solution that would cover most use cases for dynamically generating tests while still giving us the ability to do the static analysis we need. Case 4 just seems silly and probably not very likely in practice.
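
For illustration, a conservative detection pass could be built on Babel's parser and traversal APIs. This is only a sketch of the approach, not AVA's implementation: the package names are today's @babel/* packages, and resolving the root identifier back to the ava import is just one way cases 1 and 2 might be handled.

    const {parse} = require('@babel/parser');
    const traverse = require('@babel/traverse').default;

    // Sketch: returns true if `code` appears to contain an exclusive test.
    function hasExclusiveTests(code) {
      const ast = parse(code, {sourceType: 'module'});
      let found = false;

      traverse(ast, {
        // Case 2 (import form): `import test, {only} from 'ava'`.
        ImportSpecifier(path) {
          if (path.parent.source.value === 'ava' && path.node.imported.name === 'only') {
            found = true;
          }
        },

        // Catches `test.only(...)`, `test.serial.only`, and references stored in
        // variables, as long as the member chain starts at the AVA default import.
        MemberExpression(path) {
          if (path.node.property.name !== 'only') {
            return;
          }

          // Walk `test.serial.only` back to its root identifier.
          let object = path.node.object;
          while (object.type === 'MemberExpression') {
            object = object.object;
          }
          if (object.type !== 'Identifier') {
            return;
          }

          // Case 1: accept any local name bound to the default import of 'ava'.
          const binding = path.scope.getBinding(object.name);
          if (binding && binding.path.isImportDefaultSpecifier() &&
              binding.path.parent.source.value === 'ava') {
            found = true;
          }
        }
      });

      return found;
    }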

@jamestalmage
Contributor Author

Whatever we decide to support via static analysis, we need well-defined behavior for when it fails (either a false positive for .only usage, or a false negative).

In the case of false positives, I think the solution is non-trivial but pretty straightforward (a rough sketch follows after the lists below):

  1. Run the files we believe contain .only tests (based on static analysis).
  2. If a single file turns out to be a false positive, and there are more files where we believe .only tests exist, do not run any tests from that file; just mark it as a false positive and kill it.
  3. If we get through the complete list of files we believe have exclusive tests but find that everything was a false positive, go back and re-run the files marked as false positives.
  4. In watch mode, use the dependency change algorithm to store and invalidate the false-positive markers.

False negatives, on the other hand, present quite a few problems:

  • If all files appear (via static analysis) to not use .only, but we discover one that does, then just stop executing any non-.only tests from that point on.
  • The big problem is when we have a mix of true-positives and false-negatives. If we only launch the true-positive files, then we never get to discover the false-negatives. This is problematic. If a user specified the .only extension however, I think it's really problematic to make the user wait while we launch additional files (possibly hundreds) to verify they have no .only calls.
  • In watch mode we could proceed to launch all the files we believe do not have .only calls, and verify while waiting for file changes to be reported. We could then store that verification and use the dependency change algorithm to determine which files need to be rechecked.
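
For illustration, a rough sketch of the false-positive flow described above; runFile and markFalsePositive are hypothetical helpers, not AVA's API:

    // Run the files static analysis flagged as containing `.only` tests.
    async function runSuspectedExclusiveFiles(files, runFile, markFalsePositive) {
      const falsePositives = [];

      for (const file of files) {
        // Ask the worker to run only exclusive tests; it reports back whether
        // any `.only` test actually existed at runtime.
        const result = await runFile(file, {runOnlyExclusive: true});

        if (!result.hasExclusive) {
          // Static analysis was wrong about this file: mark it, kill it, and
          // move on to the remaining candidates.
          markFalsePositive(file);
          falsePositives.push(file);
        }
      }

      if (falsePositives.length === files.length) {
        // Every candidate was a false positive, so no exclusive tests exist:
        // go back and re-run the marked files normally.
        for (const file of falsePositives) {
          await runFile(file, {runOnlyExclusive: false});
        }
      }
    }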

@novemberborn
Member

We could do very conservative analysis and rewrite the .only to a private reference. Then, if at runtime there are remaining test.only calls, we can log a warning but ignore the exclusivity.

@jamestalmage
Contributor Author

Just making sure I understand. Rewrite .only to .__only__ using a custom Babel transform. Then during runtime __only__ will function as an exclusive test, while .only works as a normal one?

I am tempted to agree with it because it seems easier than static analysis. At the same time, I think this cripples some pretty key use cases (watch mode and helper-generated tests).
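
For concreteness, the rewrite could be a small Babel plugin along these lines. This is only a sketch that handles the literal test.only form; aliased references (cases 1 and 2 in the original list) would need the binding analysis sketched earlier:

    // Sketch of a conservative rewrite: statically recognized `.only` becomes
    // `.__only__`, so only those calls create exclusive tests at runtime.
    module.exports = function ({types: t}) {
      return {
        visitor: {
          MemberExpression(path) {
            if (!t.isIdentifier(path.node.property, {name: 'only'})) {
              return;
            }

            // Only rewrite chains whose root identifier is literally `test`,
            // e.g. `test.only(...)` or `test.serial.only(...)`.
            let object = path.node.object;
            while (t.isMemberExpression(object)) {
              object = object.object;
            }
            if (!t.isIdentifier(object, {name: 'test'})) {
              return;
            }

            path.node.property = t.identifier('__only__');
          }
        }
      };
    };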

@novemberborn
Member

Yea, only statically recognized .only calls create exclusive tests. We can then warn for dynamic .only calls. That should still work for watch mode (it'll be better, because the watcher can predict exclusivity). Not sure what to do about helper-generated tests.

Your suggestions in #696 (comment) seem on point. The only remaining issue seems to be "If a user specified the .only extension however, I think it's really problematic to make the user wait while we launch additional files (possibly hundreds) to verify they have no .only calls."

We could modify the progress spinner to say "looking for exclusive tests in [filename]". Hitting Ctrl+C would abort and show the test results so far. That might strike a proper balance?

@jamestalmage
Contributor Author

I think we cycle through all the tests once in watch mode, guaranteeing all the .only tests are found. In a single run there is a chance dynamic .only calls will be missed. If you are using dynamic .only, you really need to use watch mode.

We could use is-ci to trigger the full scan, but people should not be checking in .only tests, so supporting that use case seems counterproductive.
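
For reference, is-ci exports a boolean, so gating a full scan on it would be trivial; the fullScan option name below is made up for illustration:

    // `is-ci` is true when running on a recognized CI service.
    const isCI = require('is-ci');

    const options = {
      // Hypothetical flag: in CI, verify every file for stray `.only` usage.
      fullScan: isCI
    };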

@jamestalmage
Contributor Author

I have a start on static analysis here: avajs/babel-plugin-ava-test-data#1

@novemberborn
Member

If you are using dynamic .only, you really need to use watch mode.

Yea happy to shift some of these advanced use cases into watch mode. It's AVA on steroids!

We could use is-ci to trigger the full scan, but people should not be checking in .only tests, so supporting that use case seems counterproductive.

Indeed.

Do you think we should alert users when we encounter a dynamic .only call?

@jamestalmage
Contributor Author

I am not sure what to do with dynamic .only. There are no solutions I love.

Should dynamic .only throw for is-ci?

We should definitely print a warning.

@novemberborn
Member

Should dynamic .only throw for is-ci?

If it fails CI you'll notice.

@jamestalmage
Contributor Author

That's what I was thinking.

@novemberborn
Member

In lieu of this we're going for #1472.
