
Possibility to run single feature iteration #1026

Vampire opened this issue Sep 4, 2019 · 3 comments



commented Sep 4, 2019

As far as I can see, there is currently no way to run only specific iterations of a feature.
So if you run a feature with multiple iterations and only some fail, you may want to execute only those iterations until you have fixed the problem, and only then run all iterations again to see whether you broke something else.

This is also important for IDE support.
If, for example, you use "Rerun failed tests" in IntelliJ, or right-click a failed iteration and try to run it, in both cases you are blocked with "No tests found".


Contributor Author

commented Sep 4, 2019

Since the data providers could be one-off iterators, you of course cannot construct all the unrolled iteration names up front to take part in the normal JUnit filtering.

I think we need something like: if the test name in the filter *could* match the unrolled feature name, include the feature; then, when actually arriving at an iteration, first check its definitive name against the filter and exclude it at that point if it does not match.

The first part can probably be done by replacing all parts that match UnrollNameProvider.EXPRESSION_PATTERN with .* and then matching the requested name against the resulting regex.
Maybe, before that, replace #featureName with the concrete feature name if it is not followed by a ., so that you do not include all features for something like @Unroll('#featureName #iterationCount'), which would otherwise degenerate to .* .*.
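A rough sketch in Java of that relaxation (all names here, like toMatchableRegex, are made up for illustration, and the EXPRESSION_PATTERN below is a simplified stand-in for Spock's real UnrollNameProvider.EXPRESSION_PATTERN):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UnrollFilterSketch {
    // Simplified stand-in for UnrollNameProvider.EXPRESSION_PATTERN;
    // the real pattern also handles more complex property paths and calls.
    private static final Pattern EXPRESSION_PATTERN =
            Pattern.compile("#[a-zA-Z_$][\\w$.()]*");

    // Turn an unroll pattern like "maximum of #a and #b is #c" into a regex
    // in which every data-variable expression is relaxed to ".*".
    static Pattern toMatchableRegex(String unrollPattern, String featureName) {
        // Substitute the concrete feature name first (when #featureName is not
        // followed by a word character or '.'), so that a pattern like
        // "#featureName #iterationCount" does not degenerate to ".* .*".
        String withFeatureName = unrollPattern.replaceAll(
                "#featureName(?![\\w$.])", Matcher.quoteReplacement(featureName));
        StringBuilder regex = new StringBuilder();
        Matcher m = EXPRESSION_PATTERN.matcher(withFeatureName);
        int last = 0;
        while (m.find()) {
            // Quote the literal text between expressions, relax the expression.
            regex.append(Pattern.quote(withFeatureName.substring(last, m.start())));
            regex.append(".*");
            last = m.end();
        }
        regex.append(Pattern.quote(withFeatureName.substring(last)));
        return Pattern.compile(regex.toString());
    }

    // "Could the requested name possibly be an iteration of this feature?"
    static boolean couldMatch(String requestedName, String unrollPattern, String featureName) {
        return toMatchableRegex(unrollPattern, featureName).matcher(requestedName).matches();
    }
}
```

With this, the feature would be included when couldMatch is true, and the definitive per-iteration name check would then happen once the iteration's actual name is known.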



commented Sep 4, 2019

That would be nice to see in core Spock. Just sharing my experience of how I deal with it now:

  1. When debugging a failing iteration, one can manually add an assumption on the iteration's data variables at the beginning of the test:

     def "my test"() {
         assumeTrue(iterationNumber == 2)

         expect:
         // the actual test code

         where:
         iterationNumber << [1, 2, 3]
     }
  2. Regarding your pattern suggestion: this is currently 100% achievable with custom extensions (but it would still be a nice feature for core). I have some examples of including/excluding certain iterations based on the generated iteration name. See how we 'tag' certain iterations by their name, which afterwards allows us to include/exclude them from the test run.

Contributor Author

commented Sep 4, 2019

I'm aware of both options, thanks.
Actually, I tend to just modify the where block, commenting out the iterations I don't want, if the iterations take too much time to simply run them all or if I want to debug into a specific iteration.

And exclusion by extension would of course also work, e.g. as I described at

But both approaches have significant drawbacks, for example:

  • the skipped iterations are still reported as skipped instead of simply being suppressed
  • it is not the standard way to run specific tests
  • the standard way is to tell JUnit "I want to run the test named X"

Especially the last point is the most important and tragic one.
You can neither copy the name of a failed test out of the test report and do gradlew test --tests <paste here>,
nor right-click an iteration in the IDE and run it,
nor use "Rerun failed tests",
as all those options depend on the standard JUnit filtering mechanism working as expected.
