Multiple feature runs not considered in test result #476

Closed

liamharries opened this issue Aug 2, 2017 · 4 comments

Comments

@liamharries
I'm using SpecFlow in my organisation to automate web-based tests. To test across multiple browsers, we run the same feature file against several run targets set in the SpecFlow profile.

In our report, we then get the results for each targeted run.

[screenshot: test report showing a separate result for each run target]

The issue we face is that when a test fails for one target (in the report above, all Firefox tests are failing), the output in Pickles indicates that everything has passed, as shown below.

[screenshot: Pickles output showing the same scenarios as passed]

Would it be possible to support targeted runs by showing the result of highest severity? That is, if one target fails and the rest pass, Pickles should display the failure so that the problem stays visible.
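For what it's worth, one way to express this "least favourable result wins" rule is to rank outcomes by severity and take the maximum across all targeted runs. The sketch below is not Pickles or SpecFlow+ Runner code; the enum values and the `WorstOf` method are hypothetical and only illustrate the aggregation idea:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical severity ranking: a higher value is a less favourable outcome.
public enum Outcome { Passed = 0, Inconclusive = 1, Pending = 2, Ignored = 3, Failed = 4 }

public static class ResultAggregation
{
    // The result reported for a scenario is the worst result across all targets.
    // Assumes at least one targeted run exists (Max throws on an empty sequence).
    public static Outcome WorstOf(IEnumerable<Outcome> perTargetResults)
    {
        return perTargetResults.Max();
    }
}
```

With a ranking like this, a scenario that passes on Chrome but fails on Firefox would be reported as Failed.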

@dirkrombauts
Member

Hi,

Thank you for your interest. This sounds like a potentially interesting thing. It's highly SpecFlow+ Runner-specific, though, so it would need to work in a totally transparent way so that we don't have to treat SpecFlow+ Runner differently from the other test result providers.

The reality of open source projects is that they depend on external contributions. I have a limited time budget for Pickles at the moment, which is enough for managing issues, PRs and releases, but not for developing new features. So if you want this feature sooner rather than later, the best way is to implement it yourself and send me a pull request.

@liamharries
Author

liamharries commented Aug 7, 2017

Hi,

I've managed to construct a workaround for this in the SpecFlow report template.

I'm not sure where this code would go in the Pickles source, but here it is if anyone would like it.

I am a tester more than a coder, so the quality of the code may not be brilliant. I believe it could do with some optimisation, though report generation isn't noticeably slower.

I have commented throughout to improve readability.

As described at http://docs.picklesdoc.com/en/latest/IntegratingTestResultsFromSpecRun/, the code below should be added to the report template in addition to the GetResultForPickles method.

```csharp
@functions{

/// <summary>
/// Sorts through the features to remove duplicates from targeted runs.
/// </summary>
/// <returns>list of string names of the features.</returns>
List<String> GetFeatureNames()
{
    List<String> features = new List<String>();

    //Search each feature and add it to the list if not present
    foreach (var fixtureNode in GetTextFixtures())
    {
        if (!features.Contains(fixtureNode.Title))
        {
            features.Add(fixtureNode.Title);
        }
    }

    return features;
}

/// <summary>
/// Sorts through tests run against multiple targets to find the least favourable outcome.
/// This is needed as otherwise the first result will be used in documentation.
/// </summary>
/// <param name="feature">the name of the feature to return results for.</param>
/// <returns>List of TestNodes, one per test, each holding the least favourable result for that test.</returns>
List<TestNode> GetLeastFavourableTestResults(string feature)
{
    List<TestNode> tests = new List<TestNode>();

    //Compile a list of all tests in the given feature
    List<string> testNames = new List<string>();
    //Search each feature that shares the given title.
    foreach (var fixtureNode in GetTextFixtures().Where(fixtureNode => fixtureNode.Title == feature))
    {
        //Search each test in the feature and add it to the list if not present
        foreach (TestNode testNode in fixtureNode.SubNodes)
        {
            if (!testNames.Contains(testNode.Title))
            {
                testNames.Add(testNode.Title);
            }
        }
    }

    // Iterate each test to find the least favourable
    foreach (string testName in testNames)
    {
        TestNode nodeToReturn = null;

        //Search each feature that shares the given title.
        foreach (var testFeature in GetTextFixtures().Where(fixtureNode => fixtureNode.Title == feature))
        {
            //Search each test in the feature
            foreach (var testNode in testFeature.SubNodes)
            {
                //Ensure the test target is the same as the feature target (ie only tests that could actually run)
                if (testNode.Title == testName && testNode.TestTarget == testFeature.TestTarget)
                {
                    //Keep the least favourable result seen so far
                    nodeToReturn = nodeToReturn == null ? testNode : GetWorstNode(nodeToReturn, testNode);
                }
            }
        }
        tests.Add(nodeToReturn);
    }
    return tests;
}

/// <summary>
/// Compares two tests and returns the least favourable.
/// </summary>
/// <param name="test1">First base test.</param>
/// <param name="test2">Second test to compare.</param>
/// <returns>TestNode with the least favourable outcome.</returns>
TestNode GetWorstNode(TestNode test1, TestNode test2)
{
    // Get the summary of each test
    var summary1 = GetSummary(test1);
    var summary2 = GetSummary(test2);

    // Compare outcomes from least to most favourable, starting with failures.
    // Failed: more failures is worse
    if (summary1.TotalFailure > summary2.TotalFailure)
    {
        return test1;
    }
    else if (summary1.TotalFailure < summary2.TotalFailure)
    {
        return test2;
    }

    // Ignored: more ignored tests is worse
    if (summary1.Ignored > summary2.Ignored)
    {
        return test1;
    }
    else if (summary1.Ignored < summary2.Ignored)
    {
        return test2;
    }

    // Pending: more pending tests is worse
    if (summary1.Pending > summary2.Pending)
    {
        return test1;
    }
    else if (summary1.Pending < summary2.Pending)
    {
        return test2;
    }

    // Inconclusive: fewer successes means more inconclusive results, so the test with fewer successes is worse
    if (summary1.Succeeded > summary2.Succeeded)
    {
        return test2;
    }
    else if (summary1.Succeeded < summary2.Succeeded)
    {
        return test1;
    }

    // Both outcomes are equivalent; return the first.
    return test1;
}

}

```

Then, in place of the output code, the following should be used instead.

```cshtml
<!-- Pickles Begin
&lt;features&gt;
@foreach (var feature in GetFeatureNames())
{
    <text>&lt;feature&gt;</text>
    <text>&lt;title&gt;</text>@feature<text>&lt;/title&gt;</text>
    <text>&lt;scenarios&gt;</text>
    foreach (TestNode testNode in GetLeastFavourableTestResults(feature))
    {
        <text>&lt;scenario&gt;</text>
        <text>&lt;title&gt;</text>@testNode.Title<text>&lt;/title&gt;</text>
        <text>&lt;result&gt;</text>@GetResultForPickles(testNode)<text>&lt;/result&gt;</text>
        <text>&lt;/scenario&gt;</text>
    }
    <text>&lt;/scenarios&gt;</text>
    <text>&lt;/feature&gt;</text>
}
&lt;/features&gt;
Pickles End -->
```
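For reference, the generated report should then contain a comment block along these lines, which Pickles picks up as its test results. The feature and scenario names below are invented for illustration, and the angle brackets may appear HTML-encoded depending on how the template renders:

```xml
<!-- Pickles Begin
<features>
  <feature>
    <title>Login</title>
    <scenarios>
      <scenario>
        <title>Log in with valid credentials</title>
        <result>Failed</result>
      </scenario>
    </scenarios>
  </feature>
</features>
Pickles End -->
```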

@dirkrombauts
Member

Thanks for sharing.

I will not merge this into Pickles: integrating SpecFlow+ Runner results is difficult enough as it is without additional gimmicks. I will leave this issue open for future reference, though.

@dirkrombauts
Member

It's been an amazing time for me to work on Pickles. Now it's finally time for me to lay down the mantle and move on. I am leaving Pickles completely.

I am closing this issue, so that the next maintainer of this repository can start from a clean slate.

Do you want to take over active development and maintenance of Pickles? Contact me directly at dirk dot rombauts at picklesdoc dot com. I will hand over everything Pickles-related to you. This email address will remain active until 11 December 2020.
