
Feature Request: Retry Failed Tests #101

Closed

BlacktoBlue opened this issue Oct 16, 2017 · 8 comments

BlacktoBlue commented Oct 16, 2017

Can we add a parameter to the Protractor config file for the number of retries? i.e. run through all the tests, then retry all the failed tests, iterating up to the number of retries. I believe Serenity BDD has something like this already?

BlacktoBlue changed the title from "Retry Failed Tests" to "Feature Request: Retry Failed Tests" on Oct 17, 2017

Tom-Hudson commented Oct 18, 2017

@BlacktoBlue - Not sure if this works with Serenity/JS; I don't see why not, but you could try Protractor Flake?


jan-molak commented Oct 19, 2017

@Tom-Hudson's suggestion should work; Serenity/JS relies on the Protractor test runner, so if the Protractor runner invokes the tests several times (for example, when using Protractor Flake), the tests will be executed several times. I haven't used Protractor Flake myself, but I don't see any obvious reason why it should not work.
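
For reference, a minimal invocation based on the protractor-flake README would look something like the line below (the attempt count and config path are placeholders):

protractor-flake --max-attempts=3 -- ./protractor.conf.js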

I don't think you'll see any record of the number of attempts made, though. If you run a given scenario several times, the scenario report (.json) will overwrite the previous result, but that might actually be what you're after?

Please let me know if Protractor Flake works for you; maybe we should add it to the Serenity/JS docs for future reference?

Hope this helps!
Jan

PS: Having said all the above, I'd suggest trying to remove the "flakiness" from the tests, if possible, before resorting to something like Protractor Flake... There's a risk that you might be unintentionally masking a problem in the app.


BlacktoBlue commented Oct 30, 2017

So I have tried using Protractor-Flake and it does re-run when tests have failed. However, I get the following message:

Using standard to parse output
Re-running tests: test attempt 2

Tests failed but no specs were found. All specs will be run again.

And all tests are re-run, not just the failed ones.

@jan-molak: I think you are right about removing flakiness. I have 1 or 2 tests that are flaky because of a loading overlay (the overlay waits for a response from the server, which can vary in length of time).


jan-molak commented Oct 30, 2017

Gotcha. Have you considered using Wait interactions to wait for the application to be ready for testing?
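
For example, such a Wait interaction could look something like the sketch below. This assumes the Serenity/JS 2.x Screenplay modules (newer than this thread); the LoadingOverlay target, its selector, and the actor's name are illustrative, not from this project:

import { actorCalled, Duration, Wait } from '@serenity-js/core';
import { isVisible, Target } from '@serenity-js/protractor';
import { not } from '@serenity-js/assertions';
import { by } from 'protractor';

// the overlay shown while the app waits for the server; selector assumed
const LoadingOverlay = Target.the('loading overlay')
    .located(by.css('.loading-overlay'));

// e.g. invoked from a Cucumber step definition
export const waitForAppToBeReady = () =>
    actorCalled('Tess').attemptsTo(
        // wait up to 10 seconds for the server-dependent overlay to disappear
        Wait.upTo(Duration.ofSeconds(10)).until(LoadingOverlay, not(isVisible())),
    );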


BlacktoBlue commented Oct 31, 2017

Yeah, I think I have fixed the flakiness. I was waiting for the overlay to become visible and then invisible (with a try/catch around it), but in some cases it disappeared before the test had checked for it appearing, and even with the try/catch it still caused issues. Now I am waiting for an element on the following page to be visible.
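
That approach might look something like this (same assumed 2.x Screenplay APIs as the earlier sketch; the NextPageHeading target and selector are illustrative):

import { actorCalled, Wait } from '@serenity-js/core';
import { isVisible, Target } from '@serenity-js/protractor';
import { by } from 'protractor';

// rather than watching the overlay appear and then disappear,
// wait for an element that only exists on the following page
const NextPageHeading = Target.the('next page heading')
    .located(by.css('h1.page-title'));  // assumed selector

export const waitForNextPage = () =>
    actorCalled('Tess').attemptsTo(
        Wait.until(NextPageHeading, isVisible()),
    );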

As for Protractor-Flake, it would be nice to get this working, as sometimes this flakiness just cannot be avoided.


emosGambler commented Oct 2, 2019

Hey guys,
Just for the record, I've managed to re-run only the failed scenarios using protractor-flake. There are a couple of points to remember when using it. After installing protractor-flake:

  • run tests using the command protractor-flake --parser cucumber --node-bin node --max-attempts=<number_of_attempts> -- ./<path_to_config>.js
  • in the Protractor config file, add the properties below to capabilities:
capabilities: {
    shardTestFiles: true,
    maxInstances: <number_of_instances_of_browser_to_be_launched>,
    // the rest of the capabilities
}

Thanks to the above, Protractor will run every feature file in a new browser instance and collect the results it needs, so that the package knows which tests to re-run (see the consolidated config sketch below).

I've tested this by launching 3 scenarios and failing one of them on purpose. The failed one got re-launched and was green afterwards. The Serenity report showed only green tests :).
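
Putting the above together, the relevant part of the config might look like the sketch below (the spec path, browser, and instance count are placeholders, not taken from this thread):

// protractor.conf.ts (a sketch)
import { Config } from 'protractor';

export const config: Config = {
    specs: ['features/**/*.feature'],
    capabilities: {
        browserName: 'chrome',
        shardTestFiles: true,  // run each feature file in its own browser instance
        maxInstances: 2,       // how many browser instances run in parallel
        // the rest of the capabilities
    },
};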


abhinaba-ghosh commented Mar 26, 2020

@emosGambler @jan-molak

I think this has difficulty working with Serenity v2. It identifies how many tests failed, but not which tests failed, so it re-runs all the scenarios. Do you have a workaround?

My command:

protractor-flake --parser cucumber --max-attempts=2 -- protractor.conf.js --cucumberOpts.tags @smoke

The output I am getting:

Using cucumber to parse output
Re-running tests: test attempt 2

Tests failed but no specs were found. All specs will be run again.

jan-molak moved this from Ideas to In progress in Serenity/JS Board on Jun 19, 2020
Serenity/JS Board automation moved this from In progress to Done Jun 20, 2020

jan-molak commented Aug 27, 2021

This should help: #973
