Test Dependency Attribute #51

Open
ashes999 opened this Issue Nov 3, 2013 · 59 comments


ashes999 commented Nov 3, 2013

Hi,

I have a web app with extensive automated testing. I have some installation tests (delete the DB tables and reinstall from scratch), upgrade tests (from older to newer schema), and then normal web tests (get this page, click this, etc.)

I switched from NUnit to MbUnit because it allowed me to specify test order via dependencies (a test can depend on a test method or a test fixture). I have since switched back to NUnit, and would still like this feature.

The current work-around (since I only use the NUnit GUI) is to order test names alphabetically, and run them fixture by fixture, with the installation/first ones in their own assembly.

CharliePoole (Member) commented Nov 3, 2013

This bug duplicates and replaces https://bugs.launchpad.net/nunit-3.0/+bug/740539 which has some discussion.

While dependency and ordering are not identical, they are related in that ordering of tests is one way to model dependency. However, other things may impact ordering, such as the level of importance of a test. At any rate, the two problems need to be addressed together.

ashes999 commented Nov 3, 2013

I like the MbUnit model a lot:

  • Dependency on another test suite: annotate a test (or test fixture) with [DependsOn(typeof(AnotherFixtureType))]
  • Dependency on another test: annotate a test with [DependsOn("TestInThisFixture")] (see the sketch below)
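A minimal sketch of the pattern, with invented fixture and test names (MbUnit v3 syntax as quoted in this thread):

using MbUnit.Framework;

[TestFixture]
public class InstallTests
{
    [Test]
    public void CreatesSchema() { /* install from scratch */ }
}

[TestFixture]
[DependsOn(typeof(InstallTests))]   // runs only after InstallTests passes
public class UpgradeTests
{
    [Test]
    public void UpgradeFromV1() { /* ... */ }

    [Test]
    [DependsOn("UpgradeFromV1")]    // runs only after UpgradeFromV1 passes
    public void UpgradeFromV2() { /* ... */ }
}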
CharliePoole (Member) commented Nov 3, 2013

What does MbUnit do if you set up a cyclic "dependency"?


ashes999 commented Nov 4, 2013

@CharliePoole if you create a cycle or specify a non-existent test method as a dependency, MbUnit throws a runtime exception. Since depending on a class requires a Type reference, that case would either fail similarly at runtime (the type is not a test fixture) or be a compile-time error (the type doesn't exist).

candychiu commented May 13, 2014

Any update on this issue? I chose MbUnit because of the ordering. Now that it's on indefinite hiatus, I need to look for an alternative. It would be nice if NUnit could support this feature, which is essential for integration testing.

rprouse (Member) commented May 13, 2014

Sorry, no update yet, although I also came from MbUnit and use this for some of our integration tests. If you just want ordering within a test class, NUnit currently runs tests in alphabetical order within a fixture. This is unsupported and undocumented, but it works until we have an alternative.

faddison commented Jun 13, 2014

I thought this was coming in v3?

CharliePoole (Member) commented Jun 14, 2014

Yes, but as yet it isn't being worked on. After the first 3.0 alpha, we'll add further features.

CharliePoole added this to the 3.0 milestone Jul 29, 2014

CharliePoole (Member) commented Aug 6, 2014

For 3.0, we will implement a test order attribute, rather than an MbUnit-like dependency attribute. See issue #170

CharliePoole modified the milestones: 3.0, Future Aug 6, 2014

ashes999 commented Aug 7, 2014

See my comment in #170. Ordering is very limited and creates a maintenance burden (unless you use multiples of ten so you can insert tests in the middle without reordering everything). MbUnit has arbitrary dependency graphs, which I (or maybe I should say "we", since I'm not the only one) really need.

Depending on an alphabetic order is a crutch, and a pretty weak one considering this could change at any time.

CharliePoole (Member) commented Aug 7, 2014

Hi,

On Thu, Aug 7, 2014 at 6:10 AM, Ashiq A. notifications@github.com wrote:

See my comment in #170. Ordering is very limited and creates a maintenance burden (unless you use multiples of ten so you can insert tests in the middle without reordering everything). MbUnit has arbitrary dependency graphs, which I (or maybe I should say "we", since I'm not the only one) really need.

Yes. The idea to use ordering is based on an unstated assumption: that it will hardly be used at all. Trying to control the runtime order of all your tests is a really bad idea. However, in rare cases it may be desirable to ensure some test runs first. For such limited use, an integer ordering is fine, and the difficulty of inserting new items into the order might well serve as a discouragement against unnecessarily ordering tests.

Note that this issue relates to the ordering of test methods, not test fixtures. Issue #170 applies to test fixtures and not methods as written, since it uses a Type as the dependent item. That said, the examples in #170 seem to imply that ordering of methods is desired.

Basically, we decided that #170 requires too much design work to include in the 3.0 release without further delaying it. We elected - in this and other cases - to limit new features in favor of a quicker release. Assigning #170 to the "Future" milestone doesn't mean it won't happen. Most likely we will address it in a point release.

The use of an OrderAttribute was viewed as a way of quickly giving "something" to those who want to control the order of test method execution. We felt we could get it in quickly. In fact, we may have been wrong. Thinking about it further, I can see that it may introduce a capability that is difficult to maintain in the face of parallel test execution. In fact, a general dependency approach may be what we need. For the moment, I'm moving both issues out of the 3.0 milestone until we can look into them further.

Depending on an alphabetic order is a crutch, and a pretty weak one considering this could change at any time.

Indeed. We have always advised people not to use that for exactly that reason. In fact, it is not guaranteed in NUnit 3.0.

Charlie
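(For reference: the OrderAttribute that NUnit later shipped, in version 3.2, behaves as sketched below. Note that Order only controls start order within a fixture and creates no dependency; an early failure does not skip later tests. Test names are invented.)

using NUnit.Framework;

[TestFixture]
public class InstallationTests
{
    [Test, Order(1)]   // lower Order values start first within the fixture
    public void ResetSchema() { /* ... */ }

    [Test, Order(2)]
    public void SeedData() { /* ... */ }

    [Test]             // tests without an Order run after all ordered tests
    public void QueryWorks() { /* ... */ }
}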

circa1741 commented Nov 27, 2014

I have not used MbUnit in years, but would like to add to this discussion, if my memory serves me correctly.

Assume [Test Z] depends on [Test A], and I run [Test Z]. MbUnit evaluates the result of [Test A] first. If there is no available result, it automatically runs [Test A] before attempting the requested [Test Z], and it only runs [Test Z] if [Test A] passes. Otherwise, [Test Z] is marked as inconclusive, indicating its dependency on the failed [Test A]. (The behavior is sketched in code below.)

These might provide some insight:
https://code.google.com/p/mb-unit/source/browse/trunk/v3/src/MbUnit/MbUnit/Framework/DependsOnAssemblyAttribute.cs?spec=svn3066&r=1570

https://code.google.com/p/mb-unit/source/browse/trunk/v3/src/MbUnit/MbUnit/Framework/DependsOnAssemblyAttribute.cs?r=1570
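A minimal sketch of that resolution behavior, with invented types (this is not MbUnit's actual implementation):

using System;
using System.Collections.Generic;

enum Status { Passed, Failed, Inconclusive }

class DependencyRunner
{
    // test name -> (optional dependency name, test body)
    readonly Dictionary<string, (string DependsOn, Func<Status> Body)> _tests;
    readonly Dictionary<string, Status> _results = new Dictionary<string, Status>();

    public DependencyRunner(Dictionary<string, (string, Func<Status>)> tests)
    {
        _tests = tests;
    }

    public Status Run(string name)
    {
        if (_results.TryGetValue(name, out var cached))
            return cached;                                // reuse a prior result

        var (dependsOn, body) = _tests[name];
        if (dependsOn != null && Run(dependsOn) != Status.Passed)
            return _results[name] = Status.Inconclusive;  // dependency did not pass

        return _results[name] = body();                   // dependency passed: run
    }
}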

CharliePoole (Member) commented Nov 27, 2014

@circa1741: We will work on this in the "future" by which I mean the release after 3.0. Full-on dependency is really a pretty complex beast to implement and we are already taking on a lot in 3.0.

Ordering of tests is a band-aid if what you want is true dependency, but it's pretty easy to implement.

By doing ordering (#170) in 3.0 we do run a risk: some users will treat it as the answer to their dependency problems and come to depend on it in the wrong context. Still, it seems better than doing nothing.

I'd like to find the time to write an article on the various kinds of dependency and ordering and how they differ in usage and implementation... maybe... ;-)

CharliePoole (Member) commented Nov 27, 2014

Correction: after I wrote the preceding comment, I noticed that #170 is also scheduled for 'future'.

We'll continue to discuss whether it's possible to include some ordering feature in 3.0 as opposed to 3.2, but at the moment they are not planned.

circa1741 commented Nov 28, 2014

(Samples taken from http://blog.bits-in-motion.com/search?q=mbunit)

When writing integration tests, it is sometimes useful to chain several tests together that capture a logical sequence of operations.

MbUnit can capture this chaining either with dependencies:

[screenshot: "dependson" code sample]

Also allows [DependsOn(typeof(ClassName.MethodName))]

Or with explicit ordering:

[screenshot: "order" code sample]
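Since the embedded samples were screenshots, here is a hedged reconstruction of the dependency style, using only the [DependsOn] syntax quoted earlier in this thread (test names are invented):

using MbUnit.Framework;

[TestFixture]
public class AccountWorkflowTests
{
    [Test]
    public void CreateAccount() { /* step 1 */ }

    [Test]
    [DependsOn("CreateAccount")]   // runs only if CreateAccount passed
    public void Deposit() { /* step 2 */ }

    [Test]
    [DependsOn("Deposit")]         // runs only if Deposit passed
    public void Withdraw() { /* step 3 */ }
}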

CharliePoole (Member) commented Nov 28, 2014

Thanks for the example code. It gives us something to aim for. Your first example is relevant to this issue. The second is exactly what we are planning for #170.

circa1741 commented Jan 15, 2015

I have an idea that is more of a twist for dependency and ordering.

The discussion so far regarding dependency is "go ahead and run Test H (and possibly 8 other tests) only if Test 8 passes." In other words, there is no point in running Test H, because if Test 8 fails then I know that Test H will also fail.

How about a dependency when a test fails?

Scenario:
I need a Smoke Test that covers a lot of ground. So, I am planning a Test Fixture that is an end-to-end test that has basic coverage of many of the SUT's features. The tests on said test fixture will use Test Ordering and are "not independent." The test order will be Test A then Test B then Test C, etc.

Now, because the tests are "not independent" I know that if Test C fails then all the following tests will also fail. Therefore, I need more tests to run in order to get a bigger picture of the Smoke Test.

I need to be able to configure it to run Test 1 if Test A fails, Test 2 if Test B fails, Test 3 if Test C fails, etc.

My Test 3 is designed to be independent of Test B, so if it fails then I have a better understanding of why Test C failed earlier. As it turns out, my Tests 4 (for Test D), 5 (for E), 6 (for F), etc. all pass. Then I know that only the feature that was covered by Test C is the issue.

Why not run Tests 1, 2, 3, etc. instead? Because those are isolated, independent tests, so I would not be doing integration tests. Again, I need a Smoke Test that covers a lot of ground.

Maybe something like:

  • [DependsOnPassing("Test so and so")]
  • [DependsOnFailing("Test blah blah blah")]

This will allow finer control in my automation test design.

circa1741 commented Jan 15, 2015

How about something like these attributes instead:

  • [DependsOnPassing("Test so and so")]
  • [RunOnFail("Test blah blah blah")]

Please note which test each attribute is attached to. These attributes should be usable at any of several levels: assembly, test fixture, or test.

[DependsOnPassing("Test E")]
Test F

  • Test E will automatically be executed if its result is unknown.
  • Only then will Test F be determined if it should run or not.

[RunOnFail("Test N")]
Test I

  • If Test I fails then Test N will automatically be executed.

Test N

  • Test N may be executed on its own.
  • But it will also automatically be executed if Test I fails.

Test E

  • Test E may be executed on its own.
  • But it will also automatically be executed if its result is unknown because Test F depends on this test passing first.
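A sketch of what declaring the proposed attributes might look like; nothing like this exists in NUnit, and the names and semantics are the commenter's proposal:

using System;

[AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class | AttributeTargets.Method)]
public sealed class DependsOnPassingAttribute : Attribute
{
    public string TestName { get; }
    public DependsOnPassingAttribute(string testName) { TestName = testName; }
}

[AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class | AttributeTargets.Method)]
public sealed class RunOnFailAttribute : Attribute
{
    public string TestName { get; }
    public RunOnFailAttribute(string testName) { TestName = testName; }
}

// Per the examples above:
//   [DependsOnPassing("Test E")] on Test F: run Test E first if its result is
//     unknown; run Test F only if Test E passed.
//   [RunOnFail("Test N")] on Test I: if Test I fails, run Test N as a follow-up.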
JeffCave commented Nov 17, 2015

Copied from #1031, which obviously duplicates this. Hopefully the example helps and the keywords direct others here...

I have a couple of cases where I would like to mark a test as a prerequisite of another test. It would be nice to have an attribute that indicated tests that were prerequisites of the current test.

In the case where routines depend on one another, it is possible to know that a given test is going to fail because a sub-routine failed its test. If the test is a long running one, there really isn't any point in running the test if the sub-routine is broken anyway.

Contrived Example:

using System;
using System.Collections.Generic;
using NUnit.Framework;

public static class Statistics
{
    public static double Average(IEnumerable<double> values)
    {
        double sum = 0;
        double count = 0;
        foreach (var v in values)
        {
            sum += v;
            count++;
        }
        return sum / count;
    }

    public static double MeanVariance(IEnumerable<double> values)
    {
        var avg = Average(values);
        var variance = new List<double>();
        foreach (var v in values)
        {
            variance.Add(Math.Abs(avg - v));
        }
        avg = Average(variance);
        return avg;
    }
}

[TestFixture]
public class TestStatistics
{
    [Test]
    public void Average()
    {
        var list = new List<double> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 };
        var avg = Statistics.Average(list);
        Assert.AreEqual(4.5, avg);
    }

    [Test]
    //[Prerequisite("TestStatistics.Average")]
    public void MeanVariance()
    {
        //try { this.Average(); } catch { Assert.Ignore("Pre-requisite test 'Average' failed."); }
        var list = new List<double> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 };
        var variance = Statistics.MeanVariance(list);
        Assert.AreEqual(0, variance);
    }
}

Given the example, if the test Average fails, it makes sense to not bother testing MeanVariance.

I would conceive of this working by chaining the tests:

  • If MeanVariance is run, Average is forced to run first.
  • If Average has already been run, the results can be reused.
  • If Average fails, MeanVariance is skipped.
rladeira commented Nov 23, 2015

Is there any estimate of when this feature will be available? I haven't found this information in other threads, so this may be a duplicate question.

CharliePoole (Member) commented Nov 23, 2015

It's in the 'Future' milestone, which means after 3.2, the latest actual milestone we have. However, we are about to reallocate issues to milestones, so watch for changes.

CharliePoole modified the milestones: Future, Backlog Dec 4, 2015

CharliePoole (Member) commented Feb 24, 2016

No, not too soon to figure out the syntax anyway. I suggest you create a Specification on the dev wiki. I have some ideas that I would like to contribute... as very briefly outlined in one of the comments above. I think the key distinction is between "hard" dependencies, which NUnit must follow, and "softer" ones, which are basically just hints to the framework.

oznetmaster (Contributor) commented Feb 24, 2016

I have a pretty good idea how to implement this as well, including using it to drive the SetUp and TearDown phases discussed in issue #1096.

It would mean changing the test "dispatching" from its current "push" mechanism to a "pull" one.

"Create a Specification on the dev wiki"? No idea how to do that :(

CharliePoole (Member) commented Feb 24, 2016

Not sure why you refer to test dispatching as a "push" mechanism. Workers pull items from the queues when they are ready to execute them. I'm planning to work on #1096 and I welcome any suggestions.

Can you edit a wiki? I'll create an empty page in the right place if you like. Otherwise, it can be text here, of course, but doing it on the wiki would give us a head start on documenting it later.

oznetmaster (Contributor) commented Feb 24, 2016

CompositeWorkItem basically iterates across its children, and "pushes" each one to be executed.

I see an implementation of dependency which builds a linear "queue" of every work item in the test, sort of like a scheduler queue in an operating system, which specifies the conditions under which each item can be allowed to run. The dispatcher removes the top runnable work item from the queue to run (if parallel, then each worker thread would remove the next runnable work item that can be parallelized). When a work item completes, it then toggles the "runnability" of other items in the queue, perhaps even discarding those that will now never be run due to their dependency.

It may even be possible to express parallelizability as a dependency condition.

I see #1096 as part and parcel of the same process. Once multiple work items are created, they would each be assigned a dependency property which will control when they are executed.
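A rough sketch of that dependency-aware "pull" model, with invented types that do not match NUnit's actual WorkItem infrastructure:

using System.Collections.Generic;
using System.Linq;

class WorkItem
{
    public string Name;
    public List<WorkItem> DependsOn = new List<WorkItem>();
    public bool Completed, Passed;

    public bool Runnable => DependsOn.All(d => d.Completed && d.Passed);
    public bool Doomed => DependsOn.Any(d => d.Completed && !d.Passed);
}

class Scheduler
{
    readonly List<WorkItem> _queue;
    public Scheduler(List<WorkItem> items) { _queue = items; }

    // Each worker calls this when ready; null means nothing is runnable yet.
    public WorkItem Next()
    {
        _queue.RemoveAll(w => w.Doomed);   // discard items that can never run
        var item = _queue.FirstOrDefault(w => w.Runnable);
        if (item != null) _queue.Remove(item);
        return item;
    }
}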

oznetmaster (Contributor) commented Feb 24, 2016

I have no idea if I can edit a wiki. Never tried, and have no idea how to. Can you give me a starting "push"? :)

rprouse (Member) commented Feb 24, 2016

@oznetmaster, editing the wiki is pretty easy:

  1. Pick a page you want to add a link to your new wiki page from, probably the Specifications page
  2. Edit the page by clicking the button
  3. Add a link by surrounding text with double square brackets, like [[My NUnit Spec]]
  4. Save the page
  5. Viewing the new link, it appears red, indicating the page does not exist yet
  6. Click on the red link, and it will take you to a create-new-page form
  7. Edit the page as you would an issue, using GitHub markdown, and save
CharliePoole (Member) commented Feb 24, 2016

@oznetmaster What you describe is pretty much how the parallel dispatcher already works. Items are assigned to individual queues depending on their characteristics. Currently, it takes into account parallelizability and Apartment requirements. All items, once queued, are ready to be executed.

It was always planned that dependency queues would be added as a further step. I plan to use #1096 as an "excuse" to implement that infrastructure. Once it's implemented, it can then be further exposed to users as called for in #51. I'll be preparing a spec for the underlying scheduling mechanism (dispatcher) as well and I'd like your comments on it.

CharliePoole (Member) commented Feb 24, 2016

oznetmaster (Contributor) commented Feb 24, 2016

Are we committed to calling the attribute DependsOnAttribute? How about something more general like "DependenciesAttribute"?

It is niggling, I know :(

CharliePoole (Member) commented Feb 24, 2016

You should write it up as you think it should be. Then we'll all fight about it. :-)

oznetmaster (Contributor) commented Feb 25, 2016

So I have :)

CharliePoole (Member) commented Feb 25, 2016

Suggestion: add a section that explains motivation for each type of dependency. For example, when would a user typically want to use AfterAny, etc.

As a developer, it's always tempting to add things for "completeness." Trying to imagine a user needing each feature is a useful restraint on this tendency. Unfortunately, users generally only tell us when something is missing, not when something is not useful to them.

Sebazzz commented Jun 8, 2016

For what it's worth, here is my input:

I'd rather not define a dependency per test method; that becomes rather tedious (and hard to maintain) once you have more than a few tests. Instead, I want to establish an order between test fixtures. This comes from the following case we currently have: we are using MSTest for ordered tests. Except for the fact that it is MSTest, it works great, because with test ordering I can express two things about a test: certain tests may not execute before another test, and other tests have a dependency on another test and may only be executed after the other test(s) have executed.

Let's say that the integration test:

  • Uses a test database with several user accounts in it
  • The first few tests execute some tests using the test data, and also create test data themselves to be used in a later test. Note we have a dependency relationship here. Some tests may not execute if earlier tests fail.
  • Then some browser-automation tests happen. They should be executed as late as possible, because they take a lot of time and we want to have feedback from earlier (faster) tests first.
  • Finally, some logic is tested which deletes an entire user account. Note we have a must not execute before relationship here: If this test were to be done before the other tests, the other tests would fail.

With MSTest I can express this case fine: Each 'ordered test' in MSTest can contain ordered tests themselves. Also, ordered tests can have a flag set to abort if one of the tests fail.

             MyPrimaryOrderedTest
              /      |         \
   DomainTests  BrowserTests  DestructiveTests
    /   |  \       /  |   \      |   \ 
   A    B   C     D   E    F     G    H 

For example, MyPrimaryOrderedTest has the 'abort on failure' flag set to false. There is nothing preventing BrowserTests from executing if DomainTests fail. However, DomainTests itself has the flag set to true, so test C is not executed if A or B fail. Note that A through H can each be either an ordered test definition or a test fixture.

To be concrete, I was thinking of an interface like this to express the test fixture ordering (see the implementation sketch below):

interface ITestCollection {
    IEnumerable<Type> GetFixtureTypes();
    bool ContinueOnFailure { get; }
}

This is much more maintainable (and more obvious) than having dependency attributes on every fixture, and it scales much better as the number of fixtures increases.

Note that for test ordering within fixtures, I would simply use the existing OrderAttribute. I think test methods should not have inter-fixture test dependencies, because that makes the test structure too complex and unmaintainable.

For test ordering between fixtures I have set up a prototype, and I have found that expressing dependencies between fixtures by using attributes becomes messy, even with only a few tests. Please also note that the prototype doesn't allow ordering beyond the namespace the fixture is defined in, because each fixture is part of a parent test with the name of the namespace. I would need to implement my own ITestAssemblyBuilder to work around that, but NUnit is hardcoded to use the current DefaultTestAssemblyBuilder.
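A sketch of the proposed interface in use; it assumes the ITestCollection definition above, and the fixture classes are invented placeholders:

using System;
using System.Collections.Generic;

public class UserAccountFixture { /* a [TestFixture] in real code */ }
public class OrderProcessingFixture { /* likewise */ }

public class DomainTests : ITestCollection
{
    // Abort the remaining fixtures in this collection when one fails.
    public bool ContinueOnFailure => false;

    public IEnumerable<Type> GetFixtureTypes()
    {
        yield return typeof(UserAccountFixture);      // runs first
        yield return typeof(OrderProcessingFixture);  // then this one
    }
}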

CharliePoole removed this from the Backlog milestone Jul 25, 2016

Sebazzz commented Sep 3, 2016

Update from my side: in the meantime I've managed to implement test ordering without the need to fork NUnit. It is "good enough" for me, so I use it now. It is already a lot better than the fragile state of many MSTest ordered tests.

ravensorb commented Mar 19, 2017

Out of curiosity -- any chance the Dependency feature is planned for the next release?

CharliePoole (Member) commented Mar 19, 2017

No plans at the moment. FYI, you can see that here on GitHub by virtue of the fact that it's not assigned to anyone and has no milestone specified.

For normal priority items, like this one, we don't usually pre-plan it for a particular release. We reserve that for high and critical items. This one will only get into a release plan when somebody decides they want to do it, self-assigns it and brings it to a point where it's ready for merging.

In fact, although not actually dependent on it, this issue does need a bunch of stuff from #164 to be effectively worked on. I'm working on that and expect to push it into the next release.

Flynn1179 commented May 22, 2017

Relevant: https://stackoverflow.com/questions/44112739/nunit-using-a-separate-test-as-a-setup-for-a-subsequent-test

Got referred here from there. I'm a firm believer that any fault should only ever cause one unit test to fail; having a whole bunch of others fail because they're dependent on that fault not being there is undesirable at best, and more than a little time-consuming when trying to track down which of the failing tests is the relevant one.

Edit: Just looking at my existing code, I've got a Prerequisite(Action action) method in many of my test fixtures that wraps the call to action in a try/catch on AssertionException and throws Inconclusive instead. It also does some cleanup, like substitute.ClearReceivedCalls() (from NSubstitute), and empties a list populated by testObj.PropertyChanged += (sender, e) => receivedEvents.Add(e.PropertyName); otherwise past actions can contaminate calls to substitute.Received(). (A sketch of this pattern follows below.)

Might be necessary to also include some sort of 'Cleanup' method in the dependency attribute to support things like this.
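A sketch of that Prerequisite pattern, reconstructed from the description above; the cleanup hook is illustrative:

using System;
using NUnit.Framework;

public abstract class FixtureWithPrerequisites
{
    // Run another test method as a precondition: if its assertions fail,
    // mark the current test inconclusive instead of failing it as well.
    protected void Prerequisite(Action action)
    {
        try
        {
            action();
        }
        catch (AssertionException)
        {
            Assert.Inconclusive("Prerequisite test failed; skipping this test.");
        }
        finally
        {
            // Undo side effects so past actions can't contaminate later checks,
            // e.g. substitute.ClearReceivedCalls() and clearing event lists.
            ResetSharedState();
        }
    }

    protected abstract void ResetSharedState();
}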

espenalb commented May 22, 2017

@Flynn1179 - I agree with you when it comes to unit tests. However, NUnit is also a great tool for other kinds of tests. For example, we use it for testing embedded firmware and are really missing this feature...

CharliePoole (Member) commented May 22, 2017

@Flynn1179 Completely agree with you. There are techniques to prevent spurious failures such as you describe that don't "depend" on having a test dependency feature. In general, use an assumption to test those things that are actually tested in a different test and are required for your test to make sense (see the sketch at the end of this comment).

It was a goal of NUnit 3 to extend NUnit for effective use with non-unit tests. We really have not done that yet - it may await another major release. OTOH, users are continuing to use it for those other purposes and trying to find clever ways to deal with the limitations. Here and there we have added small features and enhancements to help them, but it's really still primarily a unit-test framework.

Personally, I doubt I would want to use dependency as part of high-level testing myself. Rather, I'd prefer to have a different kind of fixture that executed a series of steps in a certain order, reporting success or failure of each step and either continuing or stopping based on some property of the step. That, however, is a whole other issue.

@espenalb I'd be interested to know what you feel is needed particularly for testing embedded firmware.
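A sketch of the assumption technique, with a hypothetical Device class standing in for the system under test:

using NUnit.Framework;

static class Device
{
    public static bool EnableSensor() { return true; } // stand-in for real hardware
    public static int ReadSensor() { return 42; }
}

[TestFixture]
public class SensorTests
{
    [Test]
    public void EnableSensor_Succeeds()
    {
        Assert.That(Device.EnableSensor(), Is.True);
    }

    [Test]
    public void SensorPerformance_IsAcceptable()
    {
        // Assume, not Assert: if the precondition fails, this test reports
        // as inconclusive rather than as a second failure.
        Assume.That(Device.EnableSensor(), Is.True);
        Assert.That(Device.ReadSensor(), Is.InRange(0, 100));
    }
}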

espenalb commented May 22, 2017

We are actually very happy about what NUnit offers.

We use a combination of FixtureSetup/Setup/Test attributes for configuring the device (including flashing firmware).

Then we use different interfaces (serial port, jtag, ethernet) to interact with the devices, typically we send some commands and then observe results. Results can be command response, or in advanced tests we use dedicated hardware equipment for measuring device behavior.

The NUnit assertions and FluentAssertions are then used to verify that everything is OK. By definition, these are all integration tests, and by nature a lot slower than regular unit tests. The test dependency feature is therefore sorely missed: there is no point in verifying, for example, sensor performance if the command to enable the sensor was rejected. The ability to pick up one test where another completed is therefore very valuable.

With a test dependency attribute, we would have one failing test, then n ignored/skipped tests, where the skipped tests could clearly state that they were not executed because the other test failed...

Another difference from regular unit testing is heavy use of the log writer. There is one issue there regarding multithreading and logging, for which I will create a separate issue if it does not already exist.

Bottom line from us - we are very happy with NUnit as a test harness for integration testing. It gives us excellent support for a lot of advanced scenarios by using C# to interact with Device Under Test and other lab equipment.

With ReportUnit we then get nice HTML reports, and we also get Jenkins integration by using the rev2 NUnit test output.

ChrisMaddock (Member) commented May 22, 2017

we also get Jenkins integration by using the rev2 nunit test output.

@espenalb - Complete aside, but the Jenkins plugin has recently been updated to read NUnit 3 output. 🙂

@DannyBraig

DannyBraig commented Jul 28, 2017

Can someone give a small update about the status of this feature? Is it planned?
In my department we run very long-running tests, which logically really depend on each other.
Some kind of "test dependency" would be really interesting and helpful for us...

I heard that you are in general planning to "open" NUnit for "non-unit test" tests as well (which is basically the case for us...). I think this attribute would be one step towards it :-)

@Sebazzz

Sebazzz commented Jul 29, 2017

This feature is still in the design phase, so other than using external libraries there is no built-in support currently.

@aolszowka

aolszowka commented Jun 25, 2018

I am sorry to pull up an old thread, but during the course of working through NUnit with a friend we stumbled onto a case where, if we had such a feature, we could start to create integration tests (I realize NUnit is a unit testing framework, but it seems we could get what we want if we had test dependency).

First, here's an updated link to the proposed spec (the link from CharliePoole here #51 (comment) was dead): https://github.com/nunit/docs/wiki/Test-Dependency-Attribute-Spec

Now for a use case: consider the following toy program and associated tests.

namespace ExampleProgram
{
    using System.Collections;
    using NUnit.Framework;

    public static class ExampleClass
    {
        public static int Add(int a, int b)
        {
            return a - b; // Note: a - b instead of a + b, so some Add tests (and all Increment tests) fail
        }

        public static int Increment(int a)
        {
            return Add(a, 1);
        }
    }

    public class ExampleClassTests
    {
        [TestCaseSource(typeof(AddTestCases))]
        public void Add_Tests(int a, int b, int expected)
        {
            int actual = ExampleClass.Add(a, b);
            Assert.That(actual, Is.EqualTo(expected));
        }

        [TestCaseSource(typeof(IncrementTestCases))]
        public void Increment_Tests(int a, int expected)
        {
            int actual = ExampleClass.Increment(a);
            Assert.That(actual, Is.EqualTo(expected));
        }
    }

    internal class IncrementTestCases : IEnumerable
    {
        public IEnumerator GetEnumerator()
        {
            yield return new TestCaseData(0, 1);
            yield return new TestCaseData(-1, 0);
            yield return new TestCaseData(1, 2);
        }
    }

    internal class AddTestCases : IEnumerable
    {
        public IEnumerator GetEnumerator()
        {
            yield return new TestCaseData(0, 0, 0);
            yield return new TestCaseData(0, 2, 2);
            yield return new TestCaseData(2, 0, 2);
            yield return new TestCaseData(1, 1, 2);
        }
    }
}

As an implementer, I know that if any unit tests around Add(int, int) fail there is absolutely no point in running all the additional tests around Increment(int); they would only add noise. However, there does not appear to be a way (short of test dependency) to specify this to NUnit (at least in my searches).

Doing a lot of research online, it seems like others have worked around this with a combination of approaches (none of which make it explicitly clear that Increment(int) depends on Add(int, int)), such as:

  • Using categories
  • Using a naming convention to control the ordering of tests (see the sketch after this comment)

Neither of these seems to scale well, or even to work at all when you use other features such as Parallel, and both require some external "post-processing" after the NUnit run has completed.

Is this the best path forward (if we were to use pure NUnit)? Is this feature still being worked on? (In other words, if a PR were submitted, would it jam up anyone else working on something related?)

There is a lot of good discussion in this thread about cyclic dependencies and other potential issues with this feature; it is obviously not easy to solve, otherwise someone would have done it already. I am sure adding Parallel and TestCaseSource into the mix also increases the complexity. I intend to dig more at some point, but before doing so I wanted to make sure that this was not a solved problem or already in the works.
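To make the ordering workaround concrete, here is a minimal sketch against the toy ExampleClass above, using NUnit's real [Order] attribute. It also shows the gap: [Order] only controls execution order within the fixture, so the second test still runs (and fails with its own noise) even after the first has already failed; nothing is skipped.

using NUnit.Framework;

namespace ExampleProgram
{
    public class OrderedWorkaroundTests
    {
        [Test, Order(1)] // runs first within this fixture
        public void Add_Ordered()
        {
            // Fails with the a - b implementation above.
            Assert.That(ExampleClass.Add(0, 2), Is.EqualTo(2));
        }

        [Test, Order(2)] // runs second, but is NOT skipped when Add_Ordered fails
        public void Increment_Ordered()
        {
            // Also fails, adding exactly the noise described above.
            Assert.That(ExampleClass.Increment(0), Is.EqualTo(1));
        }
    }
}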


@CharliePoole

CharliePoole commented Jun 25, 2018

Member

@aolszowka
This remains an accepted feature, at least as far as the issue labels go. @nunit/framework-team Am I correct there?

Nobody has assigned it to themselves, which is supposed to mean that nobody is working on it. Smart of you to ask, nonetheless! If you want to work on it, some team member will probably assign it to themselves and "keep an eye" on you, since GitHub won't let us assign issues to non-members.

I made this a feature and gave it its "normal" priority back when I was project lead. I intended to work on it "some day" but never did, and never will now that I'm not active in the project. I'm glad to correspond with you over any issues you find if you take it on.

My advice is to NOT do what I tried to do: write a complete spec and then work toward it. As you can read in the comments, we kept finding things to disagree about in the spec, and nobody ever moved it to implementation. AFAIK (or remember), the prerequisite work on how tests are dispatched has already been done. I would pick one of the three types of dependency (see my comment from two-plus years ago) and just one use case, and work on that. We won't want to release anything until we are sure the API is correct, so you should probably count on a long-running feature branch that has to be periodically rebased or merged from master. Big job!

@ChrisMaddock

ChrisMaddock commented Jun 26, 2018

Member

This remains an accepted feature, at least as far as the issue labels go. @nunit/framework-team Am I correct there?

Yes - as far as I'm concerned!
