Being Able to Continue Next Steps on @Then Failures #79

Closed
slaout opened this issue Sep 23, 2016 · 53 comments
Labels
⚡ enhancement Request for new functionality

Comments

@slaout

slaout commented Sep 23, 2016

Summary

Add the ability to mark some "Then" step definitions so that, when they fail, they still mark the result as failed, but execution continues with the next steps of the scenario.

Current Behavior

Sometimes you have a Cucumber scenario like this:

When I do some extensive computation like loading a heavy web page
Then I check thing 1
And I check thing 2
And I check thing 3
And so on

In this particular situation:

  • all four "Then" steps have no impact on the rest of the scenario's execution
  • they are all independent
  • if "I check thing 1" fails, the other 3 "Then"s are in a Schrödinger state: they may be OK, or they may fail, but we will know for sure only after we fix and re-execute the failed step, possibly many times if there are many errors, until all is fine. These retry/fix round trips can be costly if the number of scenarios and steps is large and if execution is slow, as when testing a heavy website (a pretty common use case for Cucumber).

Expected Behavior

The step-definition developer should be able to declare that one particular step is an assertion that has no impact on the following steps of the scenario it's used in.
He/she could then add an annotation on that step definition to let Cucumber continue executing the next steps if that one fails.

See the discussion on Google Groups.

Possible Solution

I've made this Cucumber fork for our own needs:
github.com/slaout/cucumber-jvm

Please read the README.md for an explanation and our feedback from experience on this subject.

It currently works only for the Java backend.

See the discussion on Google Groups for people proposing to port it to Ruby, for instance.

Please tell me if the fork is a good starting point for you and whether you would like me to turn it into a Pull Request.

@aslakhellesoy aslakhellesoy added the ⚡ enhancement Request for new functionality label Sep 23, 2016
@aslakhellesoy
Contributor

If we implement this, it should be done in all implementations (Java, Ruby, JavaScript, others...).

Annotations are a Java-only thing. Not sure how this would be specified in annotation-less languages like Ruby. Maybe this?

Given "some step", {error: :continue} do
end

What do people think?

@paoloambrosio
Member

I read in the thread that the core Cucumber team considers this a useful feature. Is it because you want to expand the user base at the cost of promoting best practices? Or am I missing why this is not a hack?

I do understand why teams would find this a simpler approach than fixing the test code, but in my experience shortcuts cause problems down the line.

@slaout
Author

slaout commented Sep 23, 2016

@aslakhellesoy
If you come up with a simple flag for Ruby (e.g. {error: :continue} instead of something like {exception1.class: :continue, exception2.class: :continue}), then I'm totally fine with the Java implementation becoming just @ContinueOnFailure (without any Exception.class array as a parameter) instead of @ContinueNextStepsFor({AssertionError.class, NoSuchElementException.class}).

In practice, we always use this same set of two exception classes, for consistency, simplicity of development, and a less error-prone development workflow.
And sometimes, developers resorted to some crazy workarounds to transform a NullPointerException into an AssertionError.

So we can keep it simple with only @ContinueOnFailure and nothing else.
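
For reference, a rough sketch of what a step definition looks like with the fork's annotation, based on the signature quoted above. The step text and the page check are invented for illustration, and the annotation's import is omitted since it exists only in the fork:

import cucumber.api.java.en.Then;
import org.openqa.selenium.NoSuchElementException;
import static org.junit.Assert.assertTrue;

public class ProductPageStepdefs {

    // @ContinueNextStepsFor is the fork's annotation (import omitted here).
    // On AssertionError or NoSuchElementException, the step is marked failed,
    // but the scenario keeps executing the remaining steps.
    @ContinueNextStepsFor({AssertionError.class, NoSuchElementException.class})
    @Then("^I check thing (\\d+)$")
    public void i_check_thing(int index) {
        assertTrue("thing " + index + " is displayed", isThingDisplayed(index));
    }

    private boolean isThingDisplayed(int index) {
        return true; // stand-in for a real Selenium lookup
    }
}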

@brasmusson
Contributor

Another option is to use a mechanism similar to the one for pending steps: that is, not require anything to be specified on the step definitions, but rather have a specific exception type built into each Cucumber implementation that means "step failed, but continue". That should at least be straightforward to implement in all the Cucumber implementations (including languages without annotations).
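
A minimal sketch of how that could look from the step author's side, assuming a hypothetical built-in ContinueNextStepsException (the name, and the runner treating it as "failed, but continue", are illustrative, by analogy with the existing PendingException):

import cucumber.api.java.en.Then;
import static org.junit.Assert.assertTrue;

public class InboxStepdefs {

    // Hypothetical built-in, analogous to PendingException: the runner would
    // record the step as failed but still execute the remaining steps.
    public static class ContinueNextStepsException extends RuntimeException {
        public ContinueNextStepsException(Throwable cause) {
            super(cause);
        }
    }

    @Then("^I should see my inbox$")
    public void i_should_see_my_inbox() {
        try {
            assertTrue("inbox is displayed", isInboxDisplayed());
        } catch (AssertionError e) {
            throw new ContinueNextStepsException(e); // "step failed, but continue"
        }
    }

    private boolean isInboxDisplayed() {
        return true; // stand-in for a real UI check
    }
}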

@aslakhellesoy
Contributor

I like @brasmusson's suggestion, because it's more portable. For example, for the Java 8 lambda-style step definitions there are no annotations - what do we do there?

@paoloambrosio I did say in the mailing list thread that I'm open to adding this, but I'm still very much on the fence. Thanks for challenging this. You're making a good point that it's better to fix the code, and in the past I've held a strong position to keep Cucumber an opinionated tool. I would never use this feature myself - it's at odds with my ethics ;-)

@mattwynne what do you think? It's a bit of a slippery slope this one. If we add it aren't we essentially saying that flickering tests are ok? (They most certainly are not in my book).

@paoloambrosio
Member

I think that @brasmusson's idea is elegant as an implementation in Cucumber, but not so much for the user in all languages. Users with flickering tests probably want to use the same assertion library but trigger a different behaviour. In this case they would have to surround everything in a try/catch/rethrow block. Sure, some languages might allow that to be defined as a higher-order function (i.e. magicFix { standard assertions }), but in others, like Java 6, the syntax would be horrible.

I still haven't heard why we are trying to promote bad practices with this feature instead of convincing users that it will just mask their broken pipeline and process, so I would be more in favour of introducing official extension points so that external plugins could decide to make Cucumber continue on failure in certain cases (like if a specific exception is raised... that would work well, since the plugin doesn't need to support all languages and implementations). Is this worth investigating?

@muhqu
Member

muhqu commented Sep 30, 2016

...trying to promote bad practices with this feature...

@paoloambrosio I would not say it is bad practice. It allows you to see everything that went wrong in a given scenario instead of just the first thing that went wrong. This type of error reporting in tests is quite common in golang, for example.

@brasmusson
Contributor

The Google Test framework has both ASSERT_THAT and EXPECT_THAT. If checking a person data object with:

  ASSERT_THAT(<name is correct>);
  ASSERT_THAT(<address is correct>);
  ASSERT_THAT(<email is correct>);

then if the address is not correct, the test will not continue and I will not know whether the email was correct or not. If using:

  EXPECT_THAT(<name is correct>);
  EXPECT_THAT(<address is correct>);
  EXPECT_THAT(<email is correct>);

then if the address is not correct, the email will also be checked and I will know if only the address is the problem, or if both the address and the email are wrong. That is valuable information when trying to fix the issue. In JUnit the same thing can be achieved with @Rule annotations and ErrorCollectors (IIRC there is a request to support this in Cucumber-JVM).
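
For readers unfamiliar with it, the JUnit 4 mechanism referenced here looks roughly like this (the Person fixture is invented for illustration):

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;
import static org.hamcrest.CoreMatchers.equalTo;

public class PersonTest {

    static class Person { // invented fixture
        String name = "Ada";
        String address = "1 Main St";
        String email = "ada@example.com";
    }

    // Collects failures and reports them all when the test finishes,
    // instead of aborting on the first failed check.
    @Rule
    public final ErrorCollector collector = new ErrorCollector();

    @Test
    public void person_fields_are_correct() {
        Person person = new Person();
        collector.checkThat("name", person.name, equalTo("Ada"));
        collector.checkThat("address", person.address, equalTo("1 Main St"));
        collector.checkThat("email", person.email, equalTo("ada@example.com"));
    }
}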

Users with flickering tests

@paoloambrosio I see this feature as also supporting "EXPECT_THAT" checks in Cucumber, not as something that can be activated for flickering tests.

@paoloambrosio
Member

@muhqu can you help me with an example of when it's useful? I'm struggling to see the value without examples, and no one has given any valid ones in the threads I've read.

I can understand how this could be useful at the unit level when asserting multiple aspects of a domain object. This, in Cucumber's terms, would be using a table argument (that was indeed the first suggestion from George in the thread linked above).

Packing multiple unrelated assertions into a unit test is IMO as bad as unrelated Thens in Cucumber, and should be avoided.

If the assertions are related, does it really help if you see all failures or only the first one?

@muhqu
Member

muhqu commented Sep 30, 2016

@paoloambrosio here is an example showing what I mean when I said this type of error reporting in tests is quite common in golang.

I think that especially in Cucumber/Gherkin it makes total sense to bail out on the first error when running a scenario's Given, When and their And steps, as these are intended to set the ground for the scenario. However, with Then steps we're starting to check expectations, and there is a clear benefit in letting the user know all the expectations that have failed.

Here's a simple example:

Scenario: User Login
   Given I am a registered user of this site
     And I am on the homepage
    When I enter my credentials in the nav-bar login form
     And I submit the form by hitting return
    Then I should see my inbox
     And I should see my name in the navbar
     And I should see ...

Assuming these tests run on some Selenium cluster to test the behavior on various browsers and devices… the time to run the tests will be significant. Therefore the tester has an interest in knowing as much as possible about which expectations have not been met. Don't you agree?

@slaout
Author

slaout commented Oct 14, 2016

Here is more feedback after several months of using the patch.

I just made another pass over all our @Then steps.
The conclusion: almost all of them have the @ContinueNextSteps annotation (with the same declared exceptions, for simplicity's sake and to be less error-prone).
The only sentences that do not have the annotation are ones like:

  • System redirects to the Xxxx page
  • On block Xxxx, an error message "Yyyy" appears
  • The frame Xxxx is unfolded

With such sentences, a failure means the remainder of the scenario will fail.

Besides the "this @ContinueOnFailure is helpful in many projects, tools and languages" vs. "this @ContinueOnFailure enables bad practices" debate, there is another debate: if used, it needs a lot of care to make sure you don't forget to tag the right steps, and that you use the right exceptions.
We solved the "use the right exceptions" part with a dumb rule: we always use the same exception set.
We still have the "it needs a lot of care to make sure not to forget to tag the right steps" part.

To address this second part, considering that we use the annotation on almost all @Then steps, and to take all Cucumber-supported languages into account, why not:

Forget about the annotation, and offer a new runtime option "continue next steps on @Then step failures"? Much like the strict=false option (which, by the way, is also a bad practice to allow).

It would come with a warning, "be careful, this is a bad practice", when enabled.
And for the rare cases where it would not make sense to continue execution, I don't care: the next steps will fail quite quickly too.

@mattwynne
Member

For me the benefit of this feature is for teams where the cost of the Given / When parts of the scenario is high - dependencies on slow external services or a deep workflow process. In this context it's often pragmatic to make a handful of checks while you're in that spot.

What I worry about is how we'd clearly feed back the results. Cucumber implementations are built with the assumption that each scenario has only one result. Capturing and communicating multiple failure results for one scenario seems to me like it would require some deep internal changes. Can you explain more about how this works in your fork?

@slaout
Author

slaout commented Oct 25, 2016

@mattwynne the fork is really simple: if you exclude the groupId change, documentation changes, the Java annotation class declaration, and redirection of the annotation from layer to layer... the "heavy" refactoring comes down to this change (yes, there was even a skipNextStep boolean already in Runtime.java):

core/src/main/java/cucumber/runtime/Runtime.java
-                skipNextStep = true;
+                if (!match.continueNextStepsAnyway(t)) {
+                    skipNextStep = true;
+                }

The whole patch difference is here:
cucumber/cucumber-jvm@master...slaout:continue-next-steps-for-exceptions-1.2.4

Here is a real example from our project; this is a Jenkins report (we did NOT fork the Jenkins plugin for that, it works out of the box):

image

  • The page takes 16 seconds to load fully (with async analytics JS, heavy images and videos... For a real user, most things are loaded in a few seconds, but Selenium waits until everything is loaded before continuing the scenario)
  • There are 7 "Then" assertions, roughly 0.1 second each
  • It takes 17 seconds to run, instead of the 2 minutes it would take if each "Then" were in its own scenario
  • Each failed step has its own error message (here, I unfolded one, and my mouse is on the other one): thankfully, errors are attached to the failed step instead of the whole scenario, and all report tools already support this fork without knowing it :-)
  • The JUnit views in Eclipse and IntelliJ are also capable of displaying the same sort of results, where each step (viewed as a single JUnit test) can fail: see the screen capture at https://github.com/slaout/cucumber-jvm
  • Attachments to the scenario are still at the end of the scenario (here: logs, screenshot and video link)

@muhqu
Member

muhqu commented Oct 26, 2016

@mattwynne wrote: [...]
Cucumber implementations are built with the assumption that each scenario has only one result. […]

That is still true. A scenario fails when at least one step fails, not when the last executed step fails.

However, I'm wondering what status a scenario would have when it contains both failing and pending steps... @slaout, can you let us know?

@slaout
Author

slaout commented Oct 26, 2016

@muhqu That's a good question, as I always run in strict mode.

In such cases, the result is FAILED in strict mode.

But in non-strict mode, you have a good point: it is inconsistent - SKIPPED for the scenario, and FAILED for the feature file.

In Strict Mode

On JUnit:
image

As well as in the HTML reports:
image

In Non-Strict Mode:

On JUnit:
image

In HTML reports: exactly the same result as in strict mode, but only because the feature is not highlighted with any color, I think.

@jenisys
Contributor

jenisys commented Oct 26, 2016

I had a similar request a while ago in behave. Initially, I suggested the EXPECT_THAT() approach that was suggested by @brasmusson above. But when I tried that approach, it turned out to be simpler to provide the basic functionality to the user (and let the user apply it when needed).

SEE ALSO:

@cbaldan

cbaldan commented Nov 15, 2016

I'm going to propose an approach to the issue of continuing steps even when they fail. Many will condemn it, but it works quite well. It's just easier than going into endless debates.

From time to time I have met the need for a soft assert, just like TestNG has, but the JUnit folks can't stand it. Sounds like the same dispute going on here.

So, do this: implement a soft assert class in your framework. You might need to accept the burden of creating a step to turn the soft assert on and off, and a step to check that assert class and fail if necessary.
Then, use the @Before hook to reset the soft assertion to off when each test starts.
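
A minimal sketch of that approach in Java, assuming a hand-rolled SoftAssert helper; all names and step texts here are illustrative, not an existing API:

import cucumber.api.java.Before;
import cucumber.api.java.en.Then;
import java.util.ArrayList;
import java.util.List;

public class SoftAssertSteps {

    // Hand-rolled soft assert: collects failures instead of throwing immediately.
    static class SoftAssert {
        private final List<AssertionError> failures = new ArrayList<>();
        private boolean enabled;

        void enable() { enabled = true; }

        void reset() {
            failures.clear();
            enabled = false;
        }

        void check(boolean condition, String message) {
            if (condition) return;
            AssertionError failure = new AssertionError(message);
            if (enabled) failures.add(failure); else throw failure;
        }

        void verify() {
            if (!failures.isEmpty()) {
                throw new AssertionError(failures.size() + " soft assertion(s) failed: " + failures);
            }
        }
    }

    static final SoftAssert softly = new SoftAssert();

    @Before
    public void resetSoftAssertions() {
        softly.reset(); // start each scenario with soft assertions off, as suggested above
    }

    @Then("^I turn on soft assertions$")
    public void turnOnSoftAssertions() { softly.enable(); }

    @Then("^all soft assertions passed$")
    public void allSoftAssertionsPassed() { softly.verify(); }
}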

@elarbi

elarbi commented Feb 1, 2017

Hi,

We are missing this feature in our test projects; could you please tell us if it's planned for the next release?

Thanks

@aslakhellesoy
Contributor

@elarbi no, this isn't planned for any upcoming releases.

@djwgit

djwgit commented Feb 23, 2017

hope this will be supported soon,
just FYI, Xcode's XCTest supports letting a test case continue after a failure:
https://developer.apple.com/reference/xctest/xctestcase/1496260-continueafterfailure

@ylavoie

ylavoie commented Jun 30, 2017

It would help reduce developer time if all failing steps could be gathered in one run, instead of slowly getting them one by one.

@pjlsergeant

I've been pointed to this by @ylavoie, because I maintain the non-official Perl port of Cucumber. Being non-official gives me some flexibility for doing random stuff. My MSc dissertation actually digs into this behaviour -- Ruby's and Python's xUnit heritage push them in one direction, which is noticeably different from how tests generally work in Perl: in Perl almost all test failures are treated as soft failures that allow test execution to continue.

Getting Test::BDD::Cucumber to work like the official version actually took quite some work :-P

The big question for me is whether to put the flag inside the step-definition code block, à la:

When qr/I've added "(.+)" to the object/, sub {
    S->softfail();
    S->{'object'}->add( C->matches->[0] );
};

or as an annotation like @aslakhellesoy has it in his example above:

When qr/I've added "(.+)" to the object/, { soft => 1 }, sub {
    S->{'object'}->add( C->matches->[0] );
};

and I think it probably needs to be the latter, as it forms part of the metadata for a test case rather than of the testing assertions.

Anywho, I think it's a very important potential feature, and I'll help @ylavoie get it implemented in our Wild West Fake Cucumber clone :D

@stale

stale bot commented Nov 8, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs.

@stale stale bot added the ⌛ stale Will soon be closed by stalebot unless there is activity label Nov 8, 2017
@stale

stale bot commented Nov 15, 2017

This issue has been automatically closed because of inactivity. You can support the Cucumber core team on opencollective.

@mattwynne
Member

how could you close an issue, just due to inactivity? does not make sense, right?

We use stalebot to help us keep the number of issues from growing out of hand and becoming unmanageable. Stalebot helpfully closes any issues where the conversation and activity around the issue has died down. If we want stalebot to leave an issue alone, just make a comment.

@mattwynne
Member

FWIW I can see the value in this feature and would support it being added. Although we'd want to end up with it working consistently in every implementation, I think it would be pragmatic to do each implementation separately.

@mrmattrains

But isn't this already handled by breaking the tests down into smaller units?

Ignoring the issue could hide a blocker; used sparingly, though, I guess?

I suppose there might be a case. Is this strictly for data entry?

Syntactically, though, I think I would prefer a single character identifying it as a continue step.

Like a minus at the front.

-And some statement

@slaout
Author

slaout commented Jul 19, 2018

@mrmattrains:

Yes, unit tests need to have only one assert. But some integration tests, like Selenium or mobile-app ones, are too slow for this to be practical.

Oh, brilliant! The character would make the implementation work in all languages.

It could be a character that better indicates an optional state than just "-And". Like "And?" or, better, "(And)"...

Definitely a tough one to think about.

I see three downsides (but minor compared to its advantages):

  • An optional step has to be marked as such EACH time it is used (but then, it's more obvious that it's optional just by reading the feature file)

  • IDE plugins need to be updated

  • As it could be an incompatible breaking change, it would need to be released in a major version of Cucumber?

@mrmattrains

mrmattrains commented Jul 19, 2018 via email

@abardallis

abardallis commented Sep 6, 2018

Any updates on this issue?

I would find this useful in the following mobile app UI automation context (which many have alluded to previously):

Scenario: verify that product name, image, and price are displayed on product cards  
  Given I am on the welcome page  
  When I sign in as user_01  
  And I tap on browse  
  Then I can see the product_card_name  
  And I can see the product_card_image  
  And I can see the product_card_price  

If the product_card_name was not present, the scenario fails and I get no information on whether or not the product_card_image or product_card_price was present. This limits the value I get from this scenario.

I understand the principle of each scenario having a single thing that it asserts; however, there are times where I believe this needs to be weighed against the cost of splitting the scenario. In the example above, it makes more sense to check all three product card elements in a single scenario rather than launching the app, signing in, and navigating to browse to check one item in three separate scenarios.

@mpkorstanje
Contributor

Fail fast is such a common and sensible behaviour that I see no value in the cost of the added complexity of making it optional. Not for Cucumber, its developers, the ecosystem or its users.

Looking at this use case, I would strongly recommend rewriting the scenario to contain only a single Then step that checks for a list/table of elements, leveraging the power of Hamcrest and page objects.

I.e. assertThat(page, hasElements(elements));

And then make sure that the matcher is written in such a way that it reports all elements present and missing, similar to how Hamcrest matchers for collections work.

This keeps the semantics of Cucumber simple while placing the cost of the non-standard behavior on those who require it.

@abardallis

Thanks for the reply @mpkorstanje. In practice I actually have this written as a single step that passes in a table of elements. As you mentioned, I’ll just have to rewrite the matcher so it reports all elements present and not present. I’m not familiar with Hamcrest but will look into it. Thanks again.

@mpkorstanje
Contributor

Try extending http://hamcrest.org/JavaHamcrest/javadoc/1.3/org/hamcrest/TypeSafeDiagnosingMatcher.html and look at the known subclasses for patterns.
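
A rough sketch of such a matcher, assuming a hypothetical Page abstraction with a hasElement(String) lookup; the point is that the mismatch description lists every missing element, not just the first:

import org.hamcrest.Description;
import org.hamcrest.TypeSafeDiagnosingMatcher;
import java.util.List;

interface Page { // hypothetical page object
    boolean hasElement(String name);
}

public class HasElements extends TypeSafeDiagnosingMatcher<Page> {

    private final List<String> expected;

    public HasElements(List<String> expected) {
        this.expected = expected;
    }

    @Override
    protected boolean matchesSafely(Page page, Description mismatch) {
        boolean matched = true;
        for (String element : expected) {
            if (!page.hasElement(element)) {
                if (!matched) mismatch.appendText(", ");
                mismatch.appendText("missing ").appendValue(element);
                matched = false;
            }
        }
        return matched;
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("a page containing ").appendValue(expected);
    }
}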

@djwgit

djwgit commented Oct 9, 2018

hope this can be considered in the next release.
Compared to unit tests, Cucumber BDD is for workflow testing, usually with multiple checkpoints in a workflow...
It really makes sense to be able to finish the test workflow after minor issues are found in the middle.

@aslakhellesoy
Contributor

how could you close an issue, just due to inactivity? does not make sense, right?

and:

This needs to be done. Not closed. The community has spoken.

As a project maintainer closing inactive issues makes a lot of sense. It means nobody cares enough to provide a fix.

@mpkorstanje
Contributor

mpkorstanje commented Oct 9, 2018

I am closing this issue. No one has been willing to implement this in the last two years. If you are willing to implement this feature, feel free to reopen it.

That said, I do expect that most use cases for this are symptomatic of an underlying problem in the software or the organization that develops it.

So far all the examples feature slow, unstable or complex tests. In my opinion you should fix that root cause.

@techanon

I find this feature request valuable. My reasoning is that I am doing AQA for a website that imitates a sequential application and has a lot of e2e tests, with a number of Then checks throughout. If I did not have those there, the execution time would be exponentially greater, because if I had to end a scenario on every Then check, I would have to start at the beginning every time. I do not have control over the code of the app I'm testing, nor over the requirements the app is derived from.

Being able to soft-fail steps would be very beneficial to me as a way to handle the nature of the app I'm testing.

I like the custom exception/error idea from @brasmusson (maybe something like PassiveFailureException). That way it also requires an explicit declaration that a specific step definition will allow continuation after a failure.

Here's an example of the idea in Java

@When("^I do a thing$")
public static void iDoAThing() { new SomeClass().doSomething(); }

@Then("^I check for certain values$")
public static void iCheckForCertainValues() throws PassiveFailureException { 
    try {
        new SomeClass().possibleFailureWhenCheckingValues();
    } catch (Exception e) {
        throw new PassiveFailureException(e);
    }
}

@Then("^I check the visual layout$")
public static void iCheckTheVisualLayout() throws PassiveFailureException {
    try {
        new SomeClass().possibleFailureWhenCheckoutLayout();
    } catch (Exception e) {
        throw new PassiveFailureException(e);
    }
}

I'm not familiar with the source code/structure, but if someone points me in the right direction, I can take a stab at implementing the above idea. Additionally, which languages would all need to be considered?

@mpkorstanje
Contributor

I suspect that for your use case you may be better served by something akin to JUnit's ErrorCollector, as mentioned by @brasmusson earlier.

You can set this up in about 5 minutes:

public class ErrorCollector extends org.junit.rules.ErrorCollector {
    @After
    public void reportErrors() throws Throwable {
        this.verify();
    }
}

Using cucumber-pico you can use dependency injection to share the error collector between steps and step definitions.

public class RpnCalculatorStepdefs {

    private final ErrorCollector errorCollector;

    private RpnCalculator calc;

    public RpnCalculatorStepdefs(ErrorCollector errorCollector) {
        this.errorCollector = errorCollector;
    }

    @Given("a calculator I just turned on")
    public void a_calculator_I_just_turned_on() {
        calc = new RpnCalculator();
    }

    @When("I add {int} and {int}")
    public void adding(int arg1, int arg2) {
        errorCollector.checkThat("First argument is small", arg1, lessThan(10));
        errorCollector.checkThat("Second argument is small", arg2, lessThan(10));

        calc.push(arg1);
        calc.push(arg2);
        calc.push("+");
    }

    @Then("the result is {int}")
    public void the_result_is(double expected) {
        errorCollector.checkThat("Expected value is too high", expected, lessThan(100));
        assertEquals(expected, calc.value());
    }
}

Which will report:

image

This setup has the added advantage of not interfering with the execution flow. This makes multiple soft failures work in any step -- this is important: being able to check only a single assertion in a Then step would create poorly understood features.

Now, you probably don't want to extend JUnit's ErrorCollector, but rather implement your own. Fortunately the ErrorCollector implementation is trivial.

Implementing your own error collector will also allow you to log the errors to the step instead, using Scenario.write. If you utilize Cucumber's dependency injection you can also inject a reference to the web driver and use it to attach screenshots through Scenario.embed. The only thing left to do in the @After hook is then to fail the test by throwing an exception.
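
A sketch of what such a custom collector might look like, using the cucumber.api.Scenario hook API of that era; the class and method names are illustrative:

import cucumber.api.Scenario;
import cucumber.api.java.After;
import java.util.ArrayList;
import java.util.List;

// Shared per scenario via dependency injection (e.g. cucumber-picocontainer).
public class SoftErrorCollector {

    private final List<Throwable> errors = new ArrayList<>();

    // Run a check, recording the failure instead of propagating it.
    public void check(Runnable assertion) {
        try {
            assertion.run();
        } catch (AssertionError e) {
            errors.add(e);
        }
    }

    @After
    public void verify(Scenario scenario) {
        for (Throwable error : errors) {
            scenario.write("Soft failure: " + error.getMessage()); // appears in the report
        }
        if (!errors.isEmpty()) {
            throw new AssertionError(errors.size() + " soft assertion(s) failed");
        }
    }
}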

I do not have control over the code of the app I'm testing, nor over the requirements the app is derived from.

I'm sorry.

@UltimateGeek

UltimateGeek commented May 31, 2019

Perhaps a way to approach having multiple Thens all evaluated is to combine them into a single Then:

When the stress test completes
Then the following expectations are met:
  | Expectation                                       |
  | And the maximum memory usage was no more than 90% |
  | And the maximum cpu load was no more than 1.8     |

Step definitions are provided for each of the expectations in the table, and the step definition for the single Then dynamically runs each of the steps in the table, catching their exceptions, continuing through all of them, and asserting that they all passed.
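
In cucumber-jvm terms, a sketch of this pattern could look like the following; the expectation texts and the checks behind them are invented, and the one-column data table arrives as a List<String>:

import cucumber.api.java.en.Then;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ExpectationTableSteps {

    // Each known expectation text maps to a runnable check (illustrative only).
    private final Map<String, Runnable> checks = new HashMap<String, Runnable>() {{
        put("the maximum memory usage was no more than 90%", () -> { /* assert memory */ });
        put("the maximum cpu load was no more than 1.8", () -> { /* assert cpu */ });
    }};

    @Then("^the following expectations are met:$")
    public void the_following_expectations_are_met(List<String> expectations) {
        List<String> failures = new ArrayList<>();
        for (String expectation : expectations) {
            try {
                checks.get(expectation).run(); // run every check, even after a failure
            } catch (AssertionError e) {
                failures.add(expectation + ": " + e.getMessage());
            }
        }
        if (!failures.isEmpty()) {
            throw new AssertionError("Failed expectations:\n" + String.join("\n", failures));
        }
    }
}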

@switch201

switch201 commented Oct 1, 2019

This would be a very useful feature. In theory you should only have one Then for any given test scenario, which would solve this issue, but in practice that can end up increasing run time enormously. The app I work on is also tested end to end: the tests fill out forms and check data values on the form. It takes about 3-5 minutes to get to a portion of the form where displayed data needs to be asserted, and right now there might be about 20 values that need to be asserted. If it takes 3-5 minutes to get the app to a state where the displayed data can be asserted, that increases the run time from maybe 10 minutes up to 100 minutes - 10 times as long. Yes, that is technically the way Cucumber wants you to do it, but in practice the apps that need testing don't all work that way. Many other testing frameworks have this feature. I don't understand the reluctance to implement it here.

@vrudhay

vrudhay commented Mar 13, 2020

I am waiting eagerly for this... Is it implemented somewhere? Please show some sympathy!!!!

@UltimateGeek

Here is an example of a working implementation in cucumber-ruby, using a data table to evaluate Thens dynamically as a workaround for this limitation. Code is here: https://github.com/UltimateGeek/cuke-continue

Feature: Test all thens
  Scenario: First then fails
    When I run `echo testing continues`
    Then the following expectations are met:
      | the exit status should be 1                 |
      | the output should match /testing continues/ |

example output

@mpkorstanje
Contributor

mpkorstanje commented Apr 11, 2020 via email

@twometresteve

twometresteve commented Apr 11, 2020 via email

@djwgit

djwgit commented May 6, 2020

Sorry, I haven't found the "soft assertion" solution MP mentioned (or I missed it...).
I saw that issue #2 on "soft assertion" has been closed too...

The disagreement here might come from how we each use Cucumber for testing:

  • some think a test should just stop if it fails a checkpoint
  • some think a Cucumber test is not exactly like a unit test: a workflow may have several checkpoints, each of them possibly independent, and it is OK to let the test continue even if one checkpoint failed (though it still shows red in the report, for sure...)

I still hope we can get a nice solution for this case, and that this issue can be reopened.
Cucumber users really want this; otherwise this thread would not be this long... just my 2c.

@UltimateGeek

UltimateGeek commented May 7, 2020

@djwgit Could you help me understand how my solution of dynamic Thens using a data table doesn't address the need?

The only downsides I see are that the output isn't as clear, and the reported total number of steps performed is reduced.

@alfonso-presa

We are in need of something similar to what is requested in this issue, and I'll try to explain why, and what our current solution is.
Why
We use Cucumber for E2E testing with browsers and devices. Some of our applications are really slow, and the devices are too (install the app, start it, ...), and we also have limited testing data. This means that we need to drastically reduce the number of scenarios to the minimum, and also make as many assertions as possible (in the places requested by our business people) during the testing flows.

To give an example, let's say we have a process that spans 5 different screens. We need to perform some assertions (ones that, if failed, don't block the test flow execution but need to be highlighted if they don't succeed, for example missing labels or tooltips) on each of these 5 different screens. We write scenarios that go through these 5 different screens with a sort of When-Then-When-Then-When-Then pattern (which I know is wrong, but given our context it is the only fit for our resources and app characteristics). If the test fails in the first Then with a non-blocking assertion (like a missing literal), then we miss the feedback from the next 4 screens.

A possible solution
I have implemented a little library (https://github.com/alfonso-presa/soft-assert) for JavaScript that helps wrap assertion libraries so that soft assertions can be performed during the test execution where needed, and at the end of the test, in a hook or in an ending step, they can all be "flushed".

Drawbacks
The solution works, but it is lacking when it comes to understanding the reason for a failure. We would like the report (both console and JSON) to show each assertion failure on its corresponding step instead of on the failing step or hook. It would be nice to have a "warning" state along with a captured error, so that assertion libraries can in some way populate that information for the reporter to pick up.

@mpkorstanje
Contributor

Cucumber JS is adding step hooks. These could provide the information you need.

@vrudhay

vrudhay commented Nov 13, 2020 via email

@dennisl68-castra

@Jaykul @renehernandez
I do recognize that Gherkin should exit a scenario when the steps would interfere with each other.

But I really need an option to continue with the next step, to highlight all the issues found in a test scenario, so that I can address all the issues found and not just one at a time...
It is very time-consuming.
So I need a version of Gherkin that handles "slacked" test scenarios.

Can you please point me to where I can alter the code in Pester v4 to remove the "SkipNextStep" behavior, as a workaround for my own benefit?

@dennisl68-castra

dennisl68-castra commented Mar 10, 2021


Ok, I found the place.

Module: Pester v4.10.1 File: Gherkin.ps1, Line: 734

# Iterate over the test results of the previous steps
for ($i = $TestResultIndexStart; $i -lt ($Pester.TestResult.Count); $i++) {
    $previousTestResult = $Pester.TestResult[$i].Result
    if ($previousTestResult -eq "Failed" -or $previousTestResult -eq "Inconclusive") {
        $previousStepsNotSuccessful = $true
        break
    }
}

And a crude workaround would be:

# Iterate over the test results of the previous steps
for ($i = $TestResultIndexStart; $i -lt ($Pester.TestResult.Count); $i++) {
    $previousTestResult = $Pester.TestResult[$i].Result
    # if ($previousTestResult -eq "Failed" -or $previousTestResult -eq "Inconclusive") {
    #     $previousStepsNotSuccessful = $true
    #     break
    # }
}

A better solution would be using a switch for slacked runs.
I'll probably provide a PR on this later on for Pester v4...
