
Integrate other test runners next to Jasmine #3155

Closed
cpojer opened this issue Mar 16, 2017 · 10 comments

Comments

@cpojer
Member

cpojer commented Mar 16, 2017

Jest is completely test-runner agnostic: it can wrap other test runners entirely, and for a while we shipped with both jasmine1 and jasmine2. However, we have never capitalized on the distinction between Jest, the test-framework runner, and the test runner itself (currently Jasmine).
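
For context, the framework adapter is already swappable through the testRunner config option. Something like this sketch (jest-jasmine2 is the current default, and any module implementing the same contract could go here):

```js
// jest.config.js — sketch: point Jest at a framework adapter via config.
// 'jest-jasmine2' is the default; the option accepts a module name or path.
module.exports = {
  testRunner: 'jest-jasmine2',
};
```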

TODO:

  • Fix up runTest.js and make it more generic; this piece is re-used in jest-runtime and should be easier to reuse.
  • Solidify the API between things like jest-jasmine2 and runTest.js/TestRunner.js. Currently the API is jestJasmine2(…).then(results => …), and the result looks like this: https://github.com/cpojer/jest/blob/master/types/TestResult.js#L126, which comes from Jasmine. We should rename these things to make more sense and normalize the results we get from Jasmine (see the sketch after this list).
  • Make modules in jest-jasmine2 generic so that the setup pieces can be shared across test-runner adapters.
  • Figure out which test runners we want to wrap officially.
  • Build a complete replacement for Jasmine.
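
Roughly, the contract I have in mind for adapters — argument names are illustrative and both helpers are hypothetical; this is a sketch, not the current code:

```js
// Sketch of a jest-jasmine2-style adapter: run one test file and resolve
// with a normalized TestResult (the shape linked above).
async function frameworkAdapter(globalConfig, projectConfig, environment, runtime, testPath) {
  // runFrameworkSuite is a hypothetical stand-in for the framework-specific part.
  const rawResults = await runFrameworkSuite(environment, runtime, testPath);
  // toTestResult is a hypothetical normalizer; this mapping is the
  // "normalize the results we get from Jasmine" step from the list above.
  return toTestResult(rawResults, testPath);
}
```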

cc @MarcoWorms

@MarcoWorms

Nice! @derekstavis

@rogeliog
Contributor

rogeliog commented Jun 8, 2017

This should be easier now that #3668 is merged

@cpojer
Member Author

cpojer commented Aug 24, 2017

@rogeliog wanna try making an ava runner for Jest? :)

@rogeliog
Contributor

I'll give it a try!

@dyst5422

I've been working on Python and Perl test runners based on your pyjest-runner example, @cpojer. One of the issues I've run into is what to do when the test-matching pattern filters out every test for a particular project.

Not all runners one would want to interface with report skipped tests, so knowing how many tests exist and how many were skipped may not be possible. We then get the "Your test suite must contain at least one test." failure. I'm not sure what makes the most sense for handling this case.

I haven't seen anyone discuss this yet.

@cpojer
Member Author

cpojer commented Oct 27, 2017

@dyst5422 thanks for weighing in. You should only be getting this error message if the results object doesn't contain a single "test". If you are skipping tests, you should return the list of tests, or a dummy list of tests, so you don't hit the error. However, you are right that this error message should possibly be moved into jest-runner itself. Does either of these seem reasonable?
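
The dummy result could look something like this (a sketch — field names are approximate; see the TestResult type linked in the issue description for the real shape):

```js
// Sketch: when the filter skips every test in a file, report a single
// pending entry so the result isn't treated as an empty suite.
function allSkippedResult(testPath) {
  return {
    testFilePath: testPath,
    numPassingTests: 0,
    numFailingTests: 0,
    numPendingTests: 1,
    testResults: [
      {
        ancestorTitles: [],
        title: 'all tests in this file were skipped by the test filter',
        status: 'pending',
        failureMessages: [],
      },
    ],
  };
}
```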

@rogeliog made runners for mocha and ava (jest-runner-mocha, jest-runner-ava), so I'm closing this issue, as most of what's written above has been implemented.
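
For anyone finding this later, runners like these are wired in through the runner option in Jest config — roughly like this sketch (check each package's README for the exact setup):

```js
// jest.config.js — sketch: the runner option delegates test execution
// to a third-party runner package such as jest-runner-mocha.
module.exports = {
  runner: 'jest-runner-mocha',
};
```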

@cpojer cpojer closed this as completed Oct 27, 2017
@dyst5422

@cpojer Yes, this case where ALL tests are skipped is the one I'm interested in.

For now, I can put in a dummy list of tests (really, I'm not overly offended by the test failure when it notes that it's because of no tests, so I might skip actually doing that).

I do think this error message might better belong in jest-runner itself: declaring an empty test suite a failed test seems more like a "feature" of jest-runner than a statement about ALL POSSIBLE test runners.

@cpojer
Member Author

cpojer commented Oct 31, 2017

The reason this check exists is that we had product developers putting product code into test files, so we added the error to prevent that. I think the dummy result makes sense for now.

@dyst5422

I have no idea how product code would end up in test files, but I'm probably just not being imaginative enough. In any case, the dummy result is fine if this error is serving other needs.

@github-actions

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Please note this issue tracker is not a help forum. We recommend using StackOverflow or our discord channel for questions.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 13, 2021