
🚀 Feature: Continue large test suite where I left off ("rerun failing") #1690

Closed
SystemParadox opened this issue May 7, 2015 · 12 comments
Labels: status: wontfix (typically a feature which won't be added, or a "bug" which is actually intended behavior), type: feature (enhancement proposal)

Comments

@SystemParadox

When running through a large (and slow) test suite, fixing errors one by one, it becomes tedious to run through all the passing tests each time to find the next error.

It would be great if mocha were able to keep a log of tests that have passed, so that it can quickly skip them when running again after fixing an error.

Does anyone have any thoughts about how this could be implemented?

Thanks.

@boneskull added the type: feature and status: waiting for author labels on May 12, 2015
@boneskull (Contributor)

This isn't a bad idea, but it's not entirely trivial.

```sh
$ mocha --rerun-failing
```

The first time you execute this, we could dump a .mocha-failing file into /tmp or something. The next time you run it, mocha would read .mocha-failing and skip the tests which passed. In the browser, similar behavior could perhaps be achieved via localStorage.

If it's to use the skip functionality in Mocha, then this file should be a list of regular expressions matching test names:

```
^should do the damn thing$
^should turn on the funk motor$
^should get up offa that thing$
```

But if two tests have the same name, you're looking at the potential for false positives.
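For illustration, the recording half could be a custom reporter along these lines (a sketch only; nothing Mocha ships today, and the file name is made up):

```js
// rerun-failing-reporter.js - a sketch, not an existing Mocha feature.
// Records an anchored, escaped regex per failing test in .mocha-failing.
'use strict';
const fs = require('fs');

module.exports = function RerunFailingReporter(runner) {
  const failing = [];
  runner.on('fail', function (test) {
    failing.push(test.fullTitle());
  });
  runner.on('end', function () {
    const lines = failing.map(function (title) {
      // Escape regex metacharacters so titles match literally.
      return '^' + title.replace(/[.*+?^${}()|[\]\\]/g, '\\$&') + '$';
    });
    fs.writeFileSync('.mocha-failing', lines.join('\n') + '\n');
  });
};
```

You'd run it once with `mocha --reporter ./rerun-failing-reporter.js`, and the follow-up run would read the file back as a skip/grep filter.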

I'd like some opinions from @mochajs/mocha. This is going to cater to users with slow integration tests, and I feel like Mocha could be stronger with features geared towards non-unit-test use cases.

@danielstjules (Contributor)

I really like this idea. :) But rather than --rerun-failing, how about --failures-first? That way you don't skip tests that may have passed on the initial run, but started failing and went unnoticed as mocha kept running. We could just prioritize specs that failed, which I think offers the same benefits.

@chenchaoyi

I am actually doing this by just parsing the mocha JSON report.

@jbnicolai added the status: accepting prs label on Jul 5, 2015
@jbnicolai

Sounds interesting :)

@noorus

noorus commented Aug 5, 2015

Our company is using mocha for REST API testing, and the test load and execution time will only increase in the future. A "rerun failed" option is already becoming a high priority for us, even with only 75 tests at hand.
In API testing this feature is important in a special way: after a test fails in CI, the first thing you want to make sure of is that it's not a one-off problem due to network latency, bad backend state, or whatever. And you do that by rerunning failures automatically, at least once :)
The Robot Framework, which we employ for browser testing, has already had this feature for two years.

@chenchaoyi

A dup of #1773?

@SystemParadox (Author)

@chenchaoyi no, absolutely not.

I don't want mocha to retry tests while running them. I want it to fail, then I fix the failing test and try again, skipping all the tests that already passed.

@chenchaoyi

Then this could be done by parsing the JSON report and rerunning with the -g option, if you have a unique identifier for each test.
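For example (a sketch assuming the previous run was saved with `mocha --reporter json > report.json`; the file and script names are arbitrary):

```js
// rerun-failed.js - sketch: rerun only the tests that failed last time.
// Assumes `mocha --reporter json > report.json` ran beforehand and that
// mocha is on the PATH.
'use strict';
const { execFileSync } = require('child_process');
const report = require('./report.json');

// The JSON reporter lists failed tests under `failures`, each with a
// `fullTitle` we can feed back to mocha via -g.
const pattern = report.failures
  .map((t) => '^' + t.fullTitle.replace(/[.*+?^${}()|[\]\\]/g, '\\$&') + '$')
  .join('|');

if (pattern) {
  execFileSync('mocha', ['-g', pattern], { stdio: 'inherit' });
} else {
  console.log('No failures recorded - nothing to rerun.');
}
```

As noted above, duplicate test titles would still cause false positives.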

What @noorus mentioned might be #1773 then.

@noorus

noorus commented Aug 6, 2015

Kind of, yes, but unfortunately our test suites tend to be somewhat stateful, much like browser tests. For example, the beginning of a suite creates a temporary database in the backend from a fixture, and suite end tears it down.
If I had to choose, I would value this feature over #1773, although #1773 is also useful in its own right.

As for parsing a report: since mocha has numerous reporters with wildly varying output, and multi-reporting isn't even in the core yet (?), I'd much prefer a standard internal way of rerunning tests.

Unique identification of tests, which would be a prerequisite for this feature, is also an issue that I believe should be resolved in the core, rather than each user rolling their own solution by way of test naming schemes. Somewhat related: #1445

@boneskull added the semver-minor label and removed the status: waiting for author label on Oct 18, 2017
@ryust-ax

ryust-ax commented Dec 21, 2022

It's not a direct solution, but if you can collect a list of the previously executed, passing tests, you could perhaps implement a dynamic skip or selective execution. I'm here on the mocha GitHub because I've had to do something similar in my use of CodeceptJS, which depends on mocha, and I was researching details for improvements in my implementation.
Here are the links to the discussions that helped me with the skip approach and now my selective approach:
- skip: codeceptjs/CodeceptJS#661
- selective: codeceptjs/CodeceptJS#3544
I haven't figured out how to programmatically or dynamically hook into mocha's .only() functionality, and these are specific to the CodeceptJS framework, but maybe they can help as inspiration; see the sketch below.
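In plain Mocha, the dynamic-skip half could look something like this (a sketch; the .mocha-passed file name and format are invented here, and producing that file, e.g. from a reporter or the JSON output, is left out):

```js
// skip-passed.js - a sketch; load it with `mocha --file skip-passed.js`.
// Assumes a .mocha-passed file (one full test title per line) from a
// prior run; the file name and format are invented for this example.
'use strict';
const fs = require('fs');

let passed = new Set();
try {
  passed = new Set(
    fs.readFileSync('.mocha-passed', 'utf8').split('\n').filter(Boolean)
  );
} catch (err) {
  // No record yet (first run): skip nothing.
}

// Root-level hook: runs before every test in every spec file.
beforeEach(function () {
  if (passed.has(this.currentTest.fullTitle())) {
    this.skip(); // already passed last time, so skip it this run
  }
});
```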

@JoshuaKGoldberg changed the title from "Continue large test suite where I left off" to "🚀 Feature: Continue large test suite where I left off ("rerun failing")" on Dec 27, 2023
@JoshuaKGoldberg removed the status: accepting prs and semver-minor labels on Dec 27, 2023
@JoshuaKGoldberg (Member)

Coming back to this: I think we'll want to think holistically about what niceties Mocha is missing compared to other test frameworks. Rerunning failing tests is a reasonable thing that others already offer - what else?

@JoshuaKGoldberg (Member)

Talking with @voxpelli: doing this all-up would require a lot of state management inside Mocha. Per #5027 we're trying to avoid large additions/rewrites. So, while this would be a nice and useful feature, for now we'd rather have Mocha be able to support other tools to do this.

For example, one could write an editor extension / Node.js library / shell script / etc. that keeps track of Mocha's outputs and re-runs tests for you. #1457 might make this easier too.

Closing as out of scope. But I'd encourage anybody interested in this to build community packages to do the task. If they get super popular we can always take that as evidence to reopen this issue. Cheers, thanks all! 🤎

@JoshuaKGoldberg closed this as not planned on Feb 27, 2024
@JoshuaKGoldberg added the status: wontfix label and removed the status: in discussion label on Feb 27, 2024