Being Able to Continue Next Steps on @Then Failures #79
If we implement this, it should be done in all implementations (Java, Ruby, JavaScript, others...). Annotations are a Java-only thing. Not sure how this would be specified in annotation-less languages like Ruby. Maybe this? Given "some step", {error: :continue} do
end What do people think? |
I read in the thread that the core Cucumber team considers this a useful feature. Is it because you want to expand the user base at the cost of promoting best practices? Or am I missing why this is not a hack? I do understand why teams would find this a simpler approach than fixing the test code, but in my experience shortcuts cause problems down the line. |
@aslakhellesoy In practice, we always use this same set of two Exception classes for consistency, simplicity of development, and a not-error-prone development workflow. So we can keep it simple with only @ContinueOnFailure and nothing else. |
Another option is to use a similar mechanism as for pending steps, that is, not requiring that something be specified on the step definitions, but rather a specific exception type built into each Cucumber implementation that means "step failed, but continue". That should at least be straightforward to implement in all the Cucumber implementations (also in languages without Annotations). |
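This suggestion can be sketched in a few lines of plain Ruby (all names here -- `ContinuableError`, `run_steps` -- are hypothetical illustrations, not actual Cucumber internals): an ordinary failure skips the remaining steps, while the special exception type records the failure and keeps going.

```ruby
# Hypothetical "step failed, but continue" exception type.
class ContinuableError < StandardError; end

# Minimal stand-in for a scenario runner: ordinary errors mark the step
# failed and skip the rest; ContinuableError marks it failed but continues.
def run_steps(steps)
  results = []
  skip_rest = false
  steps.each do |name, body|
    if skip_rest
      results << [name, :skipped]
      next
    end
    begin
      body.call
      results << [name, :passed]
    rescue ContinuableError
      results << [name, :failed]   # failed, but keep executing
    rescue StandardError
      results << [name, :failed]   # failed: skip everything after this
      skip_rest = true
    end
  end
  results
end

RESULTS = run_steps([
  ["Then the address is correct", -> { raise ContinuableError, "wrong address" }],
  ["And the email is correct",    -> { raise "wrong email" }],
  ["And the page has a footer",   -> { true }],
])
```

The scenario as a whole would still be reported as failed; the only difference is how much of it gets executed.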
I like @brasmusson's suggestion, because it's more portable. For example, for the Java 8 lambda style there are no annotations - what do we do there? @paoloambrosio I did say in the mailing list thread that I'm open to adding this, but I'm still very much on the fence. Thanks for challenging this. You're making a good point that it's better to fix the code, and in the past I've held a strong position to keep Cucumber an opinionated tool. I would never use this feature myself - it's at odds with my ethics ;-) @mattwynne what do you think? It's a bit of a slippery slope this one. If we add it aren't we essentially saying that flickering tests are ok? (They most certainly are not in my book). |
I think that @brasmusson 's idea is elegant as an implementation in Cucumber, but not so much for the user in all languages. Users with flickering tests probably want to use the same assertion library but trigger a different behaviour. In this case they would have to surround everything in a try/catch/rethrow block. Sure, some languages might allow that to be defined as a higher-order function (i.e. magicFix { standard assertions } ), but in others like Java 6 the syntax would be horrible. I still haven't heard why we are trying to promote bad practices with this feature instead of convincing users that it will just mask their broken pipeline and process, so I would be more in favour of introducing official extension points so that external plugins could decide to make Cucumber continue on failure in certain cases (like if a specific exception is raised... that would work well since the plugin doesn't need to support all languages and implementations). Is this worth investigating? |
@paoloambrosio I would not say it is bad practice. It allows you to see everything that went wrong in a given scenario instead of just the first thing that went wrong. This way of reporting errors in tests is quite common in Go, for example. |
The Google Test framework has both fatal (ASSERT_*) and non-fatal (EXPECT_*) assertions. If using a fatal assertion to check the address,
then if the address is not correct, the test will not continue and I will not know whether the email was correct or not. If using a non-fatal assertion,
then if the address is not correct, the email will also be checked and I will know if only the address is the problem, or if both the address and the email are wrong. That is valuable information when trying to fix the issue. In JUnit the same thing can be achieved with the ErrorCollector rule.
@paoloambrosio I see this feature as also supporting " |
@muhqu can you help me with an example of when it's useful? I'm struggling to see the value without examples and no one has given any valid ones in the threads I've read. I can understand how this could be useful at the unit level when asserting multiple aspects of a domain object. This in Cucumber's terms would be using a table argument (that was indeed the first suggestion from George in the thread linked above). Packing multiple unrelated assertions into a unit test is IMO as bad as unrelated Thens in Cucumber, and should be avoided. If the assertions are related, does it really help if you see all failures or only the first one? |
@paoloambrosio here is an example showing what I mean when I said this type of error reporting in tests is quite common in Go. I think especially in Cucumber/Gherkin it makes total sense to bail out on the first error when running a scenario's Given, When and their And steps, as these are intended to set the ground for the scenario. However, with Then steps we're starting to check for expectations and there is a clear benefit in letting the user know all the expectations that have failed. Here's a simple example: Scenario: User Login
Given I am a registered user of this site
And I am on the homepage
When I enter my credentials in the nav-bar login form
And I submit the form by hitting return
Then I should see my inbox
And I should see my name in the navbar
And I should see ... Assuming these tests run on some selenium cluster to test the behavior on various different browsers and devices… the time to run the tests will be significant. Therefore the tester has an interest in knowing as much as possible about which expectations have not been met. Don't you agree? |
Here is more feedback after several months of using the patch. I just made another pass on all our @Then steps.
With such sentences, a failure means the remainder of the scenario will fail. Besides the "This @ContinueOnFailure is helpful in many projects, tools and languages" vs. "This @ContinueOnFailure can be used as a bad practice" debate, there is another debate: if used, it needs a lot of care to make sure not to forget to tag the right steps, and to use the right exceptions. To answer this second part, considering we use the annotation in almost all @Then steps, and to take into account all Cucumber-supported languages, why not: forget about the annotation, and offer a new runtime option "Continue next steps on @Then step failures"? Much like the strict=false option (which is also a bad practice to allow, by the way). It would come with a warning "be careful, this is a bad practice" when enabled. |
For me the benefit of this feature is for teams where the cost of the Given / When parts of the scenario is high - dependencies on slow external services or a deep workflow process. In this context it's often pragmatic to make a handful of checks while you're in that spot. What I worry about is how we'd clearly feed back the results. Cucumber implementations are built with the assumption that each scenario has only one result. Capturing and communicating multiple failure results for one scenario seems to me like it would require some deep internal changes. Can you explain more about how this works in your fork? |
@mattwynne the fork is really simple: if you exclude groupId change, documentation change, the Java annotation class declaration, redirection of the annotation from layer to layer... the "heavy" refactoring only comes down to that change (yes, there even was a skipNextStep boolean already in Runtime.java): core/src/main/java/cucumber/runtime/Runtime.java
- skipNextStep = true;
+ if (!match.continueNextStepsAnyway(t)) {
+ skipNextStep = true;
+ } The whole patch difference is here: Here is a real example from our project; this is a Jenkins report (we did NOT fork the Jenkins plugin for that, it works out of the box):
|
That is still true. A scenario fails when at least one step failed. Not when the last executed step failed. However, I'm wondering what status a scenario would have when it contains failing and pending steps... @slaout, can you let us know? |
@muhqu That's a good question, as I always run in strict mode. In such cases, the result is FAILED in strict mode. But in non-strict mode, you made a good point, because it is inconsistent: SKIPPED for the scenario, and FAILED for the feature file. In strict mode (and likewise in the HTML reports), the scenario is FAILED. In non-strict mode, the HTML reports show the exact same result as in strict mode, but only because the feature is not highlighted with any color, I think. |
I had a similar request a while ago in SEE ALSO: |
I'm going to propose an approach to the issue of continuing steps even when they fail. Many will condemn it, but it works quite well. It's just easier than going into endless debates. From time to time I have met the need to have a soft assert, just like TestNG has, but the jUnit folks can't stand it. Sounds like the same dispute going on here. So, do this: implement a soft assert class in your framework. You might need to add the burden of creating a step to turn the soft assert on and off, and a step to check that assert class and fail, if necessary. |
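A minimal sketch of such a soft-assert class, in Ruby for illustration (`SoftAssert` and its method names are made up, not an existing library): checks record failures instead of raising, and a final `verify!` raises them all at once.

```ruby
class SoftAssert
  attr_reader :failures

  def initialize
    @failures = []
  end

  # Record a failed check instead of aborting the scenario immediately.
  def check(message)
    @failures << message unless yield
  end

  # Call from a final step (or an After hook) to fail the scenario,
  # reporting every collected failure together.
  def verify!
    return if @failures.empty?
    raise "#{@failures.size} soft assertion(s) failed:\n#{@failures.join("\n")}"
  end
end

SOFT = SoftAssert.new
SOFT.check("address is correct") { "42 Acacia Ave" == "13 Elm St" }
SOFT.check("email is correct")   { "a@example.com" == "a@example.com" }
```

The "turn on" and "check and fail" steps mentioned above would simply create the collector and call `verify!`, respectively.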
Hi, we are missing this feature in our test projects. Could you please tell us if it's planned for the next release? Thanks |
@elarbi no, this isn't planned for any upcoming releases. |
Hope this will be supported soon. |
It would help lower development time if all failing steps could be gathered in one run instead of slowly getting them one by one. |
I've been pointed to this by @ylavoie, because I maintain the non-official Perl port of Cucumber. Being non-official gives me some flexibility for doing random stuff. My MSc dissertation actually digs into this behaviour -- Ruby and Python's xUnit heritage push them in one direction, which is noticeably different from how tests generally work with Perl - in Perl almost all test failures are seen as soft failures that allow test execution to continue. The big question for me is whether to put the flag inside the step-definition code block, à la: When qr/I've added "(.+)" to the object/, sub {
S->softfail();
S->{'object'}->add( C->matches->[0] );
}; or as an annotation like @aslakhellesoy has it in his example above: When qr/I've added "(.+)" to the object/, { soft => 1 }, sub {
S->{'object'}->add( C->matches->[0] );
}; and I think it probably needs to be the latter, as it forms part of the meta-data for a test case, rather than the testing assertions. Anywho, I think it's a very important potential feature, and I'll help @ylavoie get it implemented in our Wild West Fake Cucumber clone :D |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. |
This issue has been automatically closed because of inactivity. You can support the Cucumber core team on opencollective. |
We use stalebot to help us keep the number of issues from growing out of hand and becoming unmanageable. Stalebot helpfully closes any issues where the conversation and activity around the issue has died down. If we want stalebot to leave an issue alone, just make a comment. |
FWIW I can see the value in this feature and would support it being added. Although we'd want to end up with it working consistently in every implementation, I think it would be pragmatic to do each implementation separately. |
But isn't this already handled by breaking the tests down into smaller units? Ignoring the issue could hide a blocker; used sparingly, though, I guess? I suppose there might be a case. Is this strictly for data entry? Syntactically, though, I think I would prefer a single character identifying it as a continue step. Like a minus at the front. -And some statement |
Yes, unit tests need to have only one assert. But some integration tests, like Selenium or mobile-app ones, are too slow for this to be practical. Oh, brilliant! The character would make the implementation work on all languages. It could be a character that better indicates an optional state than just "-And". Like "And?" or better: "(And)"... Definitely a tough one to think about. I see a few downsides (but minor compared to its advantages):
- An optional step has to be marked as such EACH time it is used (but then, it's more obvious it's optional by just reading the feature file)
- IDE plugins need to be updated
|
I agree with the downsides. But as we said before, it shouldn't happen all that often; it should be kind of a rare case, if your tests are broken up, I mean.
|
Any updates on this issue? I would find this useful in the following mobile app UI automation context (which many have alluded to previously):
If the product_card_name was not present, the scenario fails and I get no information on whether or not the product_card_image or product_card_price was present. This limits the value I get from this scenario. I understand the principle of each scenario having a single thing that it asserts; however, there are times where I believe this needs to be weighed against the cost of splitting the scenario. In the example above, it makes more sense to check all three product card elements in a single scenario rather than launching the app, signing in, and navigating to browse to check one item in three separate scenarios. |
Fail fast is such a common and sensible behaviour that I see no value in the cost of the added complexity of making it optional. Not for Cucumber, its developers, the ecosystem or its users. Looking at this use case I would strongly recommend rewriting the scenario to contain only a single Then step to check for a list/table of elements, leveraging the power of Hamcrest and PageObjects. I.e. And then make sure that the matcher is written in such a way that it reports all elements present and missing. Similar to how Hamcrest matchers for collections work. This keeps the semantics of Cucumber simple while placing the cost of the non-standard behavior on those who require it. |
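The shape of such a matcher can be sketched generically (Ruby for illustration; `assert_all_present` is a made-up helper, not Hamcrest): a single assertion over the whole list of expected elements whose failure message reports every missing one.

```ruby
# One assertion for a whole list of expected UI elements; the failure
# message lists every missing element instead of stopping at the first.
def assert_all_present(expected, actual)
  missing = expected - actual
  return if missing.empty?
  present = expected & actual
  raise "Missing: #{missing.join(', ')}. Present: #{present.join(', ')}."
end

REPORT = begin
  assert_all_present(
    ["product_card_name", "product_card_image", "product_card_price"],
    ["product_card_image"],
  )
  "all present"
rescue RuntimeError => e
  e.message
end
```

A real Hamcrest matcher (e.g. one extending TypeSafeDiagnosingMatcher, as suggested below in the thread) would report in the same spirit.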
Thanks for the reply @mpkorstanje. In practice I actually have this written as a single step that passes in a table of elements. As you mentioned, I’ll just have to rewrite the matcher so it reports all elements present and not present. I’m not familiar with Hamcrest but will look into it. Thanks again. |
Try extending http://hamcrest.org/JavaHamcrest/javadoc/1.3/org/hamcrest/TypeSafeDiagnosingMatcher.html and look at the known subclasses for patterns. |
Hope this can be considered in the next release. |
and:
As a project maintainer closing inactive issues makes a lot of sense. It means nobody cares enough to provide a fix. |
I am closing this issue. No one has been willing to implement this in the last two years. If you are willing to implement this feature feel free to reopen it again. That said I do expect that most use cases for this will be symptomatic of an underlying problem in the software or organization that develops it. So far all examples feature slow, unstable or complex tests. In my opinion you should fix the root cause of this. |
I find this feature request valuable. My reasoning is that I am doing AQA for a website that imitates a sequential application and has a lot of e2e tests, with a number of Then checks throughout. If I did not have those there, the execution time would be exponentially greater, because if I had to end a scenario on every Then check, I would have to start at the beginning every time. I do not have control over the code of the app I'm testing, nor over the requirements the app is derived from. Being able to soft-fail steps would be very beneficial to me as a way to handle the nature of the app I'm testing. I like the custom exception/error idea from @brasmusson (maybe something like Here's an example of the idea in Java @When("^I do a thing$")
public static void iDoAThing() { new SomeClass().doSomething(); }
@Then("^I check for certain values$")
public static void iCheckForCertainValues() throws PassiveFailureException {
try {
new SomeClass().possibleFailureWhenCheckingValues();
} catch (Exception e) {
throw new PassiveFailureException(e);
}
}
@Then("^I check the visual layout$")
public static void iCheckTheVisualLayout() throws PassiveFailureException {
try {
new SomeClass().possibleFailureWhenCheckoutLayout();
} catch (Exception e) {
throw new PassiveFailureException(e);
}
} I'm not familiar with the source code/structure, but if someone helps point me in the right direction, I can take a stab at implementing the above idea. Additionally, which languages need to be considered? |
I suspect that for your use case you may be better served using something akin to JUnit's ErrorCollector as mentioned by @brasmusson earlier. You can set this up in about 5 minutes: public class ErrorCollector extends org.junit.rules.ErrorCollector {
@After
public void reportErrors() throws Throwable {
this.verify();
}
} Using public class RpnCalculatorStepdefs {
private final ErrorCollector errorCollector;
private RpnCalculator calc;
public RpnCalculatorStepdefs(ErrorCollector errorCollector) {
this.errorCollector = errorCollector;
}
@Given("a calculator I just turned on")
public void a_calculator_I_just_turned_on() {
calc = new RpnCalculator();
}
@When("I add {int} and {int}")
public void adding(int arg1, int arg2) {
errorCollector.checkThat("First argument is small", arg1, lessThan(10));
errorCollector.checkThat("Second argument is small", arg2, lessThan(10));
calc.push(arg1);
calc.push(arg2);
calc.push("+");
}
@Then("the result is {int}")
public void the_result_is(double expected) {
errorCollector.checkThat("Expected value is too high", expected, lessThan(100));
assertEquals(expected, calc.value());
}
} Which will report: This setup has the added advantage of not interfering with the execution flow. This makes multiple soft-failures work in any step -- this is important, as being able to check only a single assertion per step would be limiting. Now you probably don't want to extend JUnit's ErrorCollector. Implementing your own error collector will also allow you to log the errors to the step instead.
I'm sorry. |
Perhaps a way to approach having multiple Thens all evaluated is to combine them into a single Then:
Steps definitions are provided for each of the expectations in the table, and the step definition for the single then dynamically runs each of the steps in the table, catching their exceptions, continuing through all of them, and asserts that they all passed. |
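This approach can be sketched outside the Cucumber runtime (Ruby, with illustrative names; in a real step definition each table row would be executed via Cucumber's `step` helper rather than a lambda):

```ruby
# Run every expectation, collect the failures, and raise one aggregate
# error at the end so all results are reported together.
def run_expectations(expectations)
  failures = []
  expectations.each do |name, check|
    begin
      check.call
    rescue StandardError => e
      failures << "#{name}: #{e.message}"
    end
  end
  raise "Failed expectations:\n#{failures.join("\n")}" unless failures.empty?
end

AGGREGATE = begin
  run_expectations([
    ["the exit status should be 1",                 -> { raise "exit status was 0" }],
    ["the output should match /testing continues/", -> { true }],
  ])
  "all passed"
rescue RuntimeError => e
  e.message
end
```

The trade-off, as noted later in the thread, is that the report shows one combined step rather than one result per expectation.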
This would be a very useful feature. In theory you should only have 1 |
I am waiting eagerly for this... Is it implemented somewhere. Please show some sympathy!!!! |
Here is an example of a working implementation in cucumber-ruby using a data table to evaluate Thens dynamically to workaround this limitation. Code is here: https://github.com/UltimateGeek/cuke-continue Feature: Test all thens
Scenario: First then fails
When I run `echo testing continues`
Then the following expectations are met:
| the exit status should be 1 |
| the output should match /testing continues/ | |
Hey Steve,
for some reason your message only showed up in my email. I'm not prepared to give this another look. Given that a reasonable alternative exists in the form of implementing soft assertions as a library, you may want to direct your attention there.
Cheers,
MP
…On Thu, 2 Apr 2020 at 09:45, Steve Kirkland wrote:
Having read most of the comments here, it seems clear to me that there is large and sustained demand for this feature from the community. It's a real shame that no one has been willing to implement it to date, and it is especially disappointing that some seem so grounded in software theory that they are willing to overlook the practical benefits to many users of introducing the feature.
@mpkorstanje - back in 2018 you suggested that this issue would be reopened if someone came forward willing to implement it, but if someone did present such a PR, would it actually be examined, considered with an open mind and merged if appropriately engineered? Or would it simply be rejected as unwanted?
|
Hi MP,
Many thanks for your reply. Yeah, I deleted the comment almost immediately as I was concerned it might come across as a flame.
Again, I think it's a shame, as the addition of something like a simple switch to alter the behaviour for all scenarios in a given run would be far easier to use and would make a lot of people happy. Even if it logged "You're stupid and ugly for using this option" 😂. But I can see your mind is made up on this one. 🙂
Take care,
Steve.
|
Sorry, I have not found the "soft assertion" solution MP mentioned (or I missed it...). The disagreement here might come from our usage of Cucumber testing.
I still hope we could have a nice solution for this case, and that this issue can be re-opened. |
@djwgit Could you help me understand how my solution of dynamic Thens using a data table doesn't address the need? The only downsides I see are that the output isn't as clear, and the reported total number of steps performed is reduced. |
We are in need of something similar to what is requested in this issue and I'll try to explain why, and what our current solution is. To give an example, let's say we have a process that has 5 different screens. We need to perform some assertions (that if failed don't block the test flow execution, but need to be highlighted if they don't succeed - for example, missing labels or tooltips) on each of these 5 different screens. We write scenarios that go through these 5 different screens with a sort of When-Then-When-Then-When-Then pattern (which I know is wrong, but due to our context is the only fit for our resources and app characteristics). If the test fails in the first Then with a non-blocking assertion (like a missing literal), then we miss the feedback from the next 4 screens. A possible solution
Drawbacks |
Cucumber JS is adding step hooks. These could provide the information you need. |
But how will the output report (e.g. the Cucumber JSON) display this? Even that needs to be customized, right?
|
@Jaykul @renehernandez But I really need an option to continue with the next step, to highlight all issues found in a test scenario and be able to address them all instead of only one at a time... Can you please point me in the direction of where I alter the code in Pester v4 to remove the "SkipNextStep" behavior as a workaround for my own benefit? |
Ok, I found the place. Module: Pester v4.10.1, File: Gherkin.ps1, Line: 734
And a crude workaround would be
A better solution would be using a switch for slacked runs. |
Summary
Add the possibility to mark some "Then" step definitions so that when they fail, they still mark the result as failed, but continue executing the next steps of the scenario.
Current Behavior
Sometimes you have a Cucumber scenario like this:
In this particular situation:
Expected Behavior
The step-definitions developer should be able to tell that one particular step is an assertion that has no impact on the following steps of the scenario it's used in.
He or she could then add an annotation on that step definition to allow Cucumber to continue executing the next steps if that one fails.
See the discussion on Google Groups.
Possible Solution
I've made this Cucumber fork for our own needs:
github.com/slaout/cucumber-jvm
Please read the README.md for an explanation and our feedback from experience on this subject.
It's only working on the Java backend.
See the discussion on Google Groups for people proposing to port it to Ruby, for instance.
Please tell me if the fork is a good starting point for you and you would like me to transform this fork into a Pull Request.