
When trying to run tests using tags for filtering, the results are inconsistent when using Scenario Outline #3447

Open
dragos-panzaru-md opened this issue Apr 22, 2024 · 3 comments


dragos-panzaru-md commented Apr 22, 2024

What happened?

I have three tests. Two of them run on both iOS and Android (tagged @ios and @android; one uses Scenario and one uses Scenario Outline), and one is Android-only (tagged @android only, using Scenario Outline).
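For illustration, a minimal sketch of that layout (the feature name, scenario names, and steps are hypothetical; only the tags and the Scenario vs. Scenario Outline split match my setup):

```gherkin
Feature: Illustration of the three tests

  @ios @android
  Scenario: Cross-platform check using Scenario
    Given the app is open
    Then the home screen is shown

  @ios @android
  Scenario Outline: Cross-platform check using Scenario Outline
    Given the app is open
    When I open the "<screen>" screen
    Then the "<screen>" screen is shown

    Examples:
      | screen   |
      | Login    |
      | Settings |

  @android
  Scenario Outline: Android-only check using Scenario Outline
    Given the app is open
    When I open the "<screen>" screen
    Then the "<screen>" screen is shown

    Examples:
      | screen        |
      | Notifications |
```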
I have a runner class like this:

```java
package runners;

import org.junit.platform.suite.api.ConfigurationParameter;
import org.junit.platform.suite.api.IncludeEngines;
import org.junit.platform.suite.api.SelectClasspathResource;
import org.junit.platform.suite.api.Suite;

import static io.cucumber.junit.platform.engine.Constants.GLUE_PROPERTY_NAME;

@Suite
@IncludeEngines("cucumber")
@SelectClasspathResource("/features")
@ConfigurationParameter(key = GLUE_PROPERTY_NAME, value = "/stepDefinitions")
public class RunSuite {

}
```

In junit-platform.properties I have the following configs:
```properties
cucumber.execution.parallel.enabled=true
cucumber.execution.parallel.config.strategy=fixed
cucumber.execution.parallel.config.fixed.parallelism=3
cucumber.execution.parallel.config.fixed.max-pool-size=3
cucumber.plugin=io.cucumber.core.plugin.SerenityReporterParallel,pretty
```

In pom.xml I have the following library versions:
```xml
<cucumber-junit.version>7.13.0</cucumber-junit.version>
<appium-java-client.version>8.5.1</appium-java-client.version>
<junit.version>5.9.3</junit.version>
<junit.platform.version>1.9.2</junit.platform.version>
<serenity.version>3.9.8</serenity.version>
```

Also I have a retry mechanism in place in pom.xml:
```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>${maven-failsafe-plugin.version}</version>
    <configuration>
        <includes>
            <include>runners/*Suite.java</include>
        </includes>
        <parallel>classes</parallel>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
            <configuration>
                <rerunFailingTestsCount>1</rerunFailingTestsCount>
            </configuration>
        </execution>
    </executions>
</plugin>
```

And I execute the following mvn command:
```sh
mvn clean verify -D"webdriver.provider.driver"="drivers.SaucelabsIOSDriver" -D"serenity.take.screenshots"=FOR_FAILURES -D"cucumber.filter.tags"="@ios"
```

What did you expect to happen?

On Saucelabs only two tests should be executed, but instead of two there are three: the first two run simultaneously (even though I have set 3 parallel threads in junit-platform.properties), and a third one (I am guessing the one that has only the @android tag) runs after the first two finish. It just opens the app and then the test ends, and on Saucelabs it always has status 'Completed', not failed or passed.
If the Android-only test is changed to use Scenario instead of Scenario Outline, I no longer encounter the issue.
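For reference, the workaround amounts to rewriting the Android-only test from the sketch above as a plain Scenario (again with hypothetical steps):

```gherkin
  @android
  Scenario: Android-only check rewritten as a plain Scenario
    Given the app is open
    When I open the "Notifications" screen
    Then the "Notifications" screen is shown
```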

Serenity BDD version

3.9.8

JDK version

11

Execution environment

I don't believe it is related to the environment.

How to reproduce the bug.

I am attaching the logs here:

```
[WARNING] Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 113.5 s -- in runners.RunSuite
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO]
[INFO] --- serenity:3.9.8:aggregate (serenity-reports) @ tests ---
[INFO] GENERATING REPORTS USING 32 THREADS
[INFO] GENERATING SUMMARY REPORTS...
[INFO] GENERATING REQUIREMENTS REPORTS...
[INFO] GENERATING RESULT REPORTS...
[INFO] GENERATING ERROR REPORTS...
[INFO] Test results for 2 tests generated in 10.4 secs in directory: file:/C:/Users/user/Desktop/Projects/tests/target/site/serenity/
[INFO] ------------------------------------------------
[INFO] | SERENITY TESTS: | SUCCESS
[INFO] ------------------------------------------------
[INFO] | Test scenarios executed | 2
[INFO] | Total Test cases executed | 2
[INFO] | Tests passed | 2
[INFO] | Tests failed | 0
[INFO] | Tests with errors | 0
[INFO] | Tests compromised | 0
[INFO] | Tests aborted | 0
[INFO] | Tests pending | 0
[INFO] | Tests ignored/skipped | 0
[INFO] ------------------------------- | --------------
[INFO] | Total Duration| 3m 12s
[INFO] | Fastest test took| 014ms
[INFO] | Slowest test took| 1m 48s
[INFO] ------------------------------------------------
[INFO]
[INFO] SERENITY REPORTS
[INFO] - Full Report: file:///C:/Users/username/Desktop/Projects/tests/target/site/serenity/index.html
[INFO] - Single Page HTML Summary: file:///C:/Users/user/Desktop/Projects/tests/target/site/serenity/serenity-summary.html
[INFO] - Full Report As React Based Single Page Application: file:///C:/Users/user/Desktop/Projects/tests/target/site/serenity/navigator/index.html
[INFO]
[INFO] --- failsafe:3.1.2:verify (default) @ tests ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
```

So the initial summary reports 3 tests run (1 skipped), while the final results report only 2.

How can we make it happen?

Add it to the Serenity BDD backlog and wait for a volunteer to pick it up

wakaleo (Member) commented Apr 22, 2024

Looks like a project-specific issue. I doubt a volunteer will pick this up, but if it is important to your company, your best bet is to request a commercial support package so we can look directly at your project.

dragos-panzaru-md (Author) commented

How did you come to the conclusion that it is a project-specific issue?

wakaleo (Member) commented Apr 22, 2024

I've never seen this behaviour elsewhere.
