
Run test methods within a fixture in parallel #164

Closed
CharliePoole opened this issue Aug 5, 2014 · 76 comments

@CharliePoole
Contributor

Currently, we have only implemented parallel execution at the fixture level. It should be possible to run individual methods of a fixture in parallel, as well as individual test cases for a parameterized method.

We should be able to specify the exact level of parallelism on the fixture or method.

This issue was previously part of #66
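For illustration, a minimal sketch of the attribute usage this asks for, written with the ParallelizableAttribute and ParallelScope names that appear later in this thread (method- and case-level scopes were not implemented when the issue was opened):

    using NUnit.Framework;

    [TestFixture]
    [Parallelizable(ParallelScope.Children)]   // run this fixture's test methods in parallel
    public class ParallelMethodsFixture
    {
        [Test]
        public void FirstTest() { Assert.Pass(); }

        [Test]
        public void SecondTest() { Assert.Pass(); }

        // Individual cases of a parameterized method may also run in parallel.
        [TestCase(1)]
        [TestCase(2)]
        [Parallelizable(ParallelScope.Children)]
        public void ParameterizedTest(int n) => Assert.That(n, Is.GreaterThan(0));
    }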

@CharliePoole
Contributor Author

CharliePoole commented Apr 12, 2015

Update: Both this comment and the following one were in reply to an individual who apparently removed his own comments. I'm leaving my answers.

Well, it's in the spec and scheduled for post-3.0 implementation. If it were so easy, we probably would have done it already. I'm not sure what connection you see with MbUnit. The fact that they had it doesn't help us.

@CharliePoole
Contributor Author

This is getting a bit tiresome. A few points, already stated but apparently missed...

  1. There is a plan to implement what you are asking for.
  2. We decided as a team to schedule it at a certain point.
  3. When it is scheduled has primarily to do with our assessment of the value of the feature as compared with other features.
  4. There are several people on the NUnit team who could implement the feature.
  5. They would be able to implement it, depending on priorities, out of their heads and would not need to copy it from anywhere.

I find your talk of "reverse engineering" very disturbing. Open source is only possible in a context where copyright is respected. So if you are suggesting that we might ignore the MbUnit licensing terms, you are way off base.

@rprouse
Member

rprouse commented Apr 19, 2015

While I did contribute many patches to MbUnit and used it for years, I was never a key contributor. I wouldn't want my status to be misrepresented :)

As for reverse engineering, there isn't really any of that going on here. NUnit works entirely differently from the way MbUnit did. We will get to this issue in due time, but we are approaching it cautiously because other, similar issues might change the way we need to implement this, or even conflict with it.

@CharliePoole
Contributor Author

A much more tactful comment than mine!

Our current priorities in the world of parallel execution are:

  1. Parallel process execution
  2. Exclusion groups
  3. Parallel method (in one fixture) execution.

This seems to represent the order of greatest usefulness to users.

I'll also mention that marking something as post-3.0 does not necessarily mean the feature comes at a later time than it would if we made it part of 3.0. Rather, it may mean that 3.0 comes at an earlier point in time.

@rprouse
Member

rprouse commented Dec 23, 2015

When we do this, we are going to have to decide how to handle setup and teardown commands. The easiest option would likely be to construct the test class for each test so that there is no contention for data.

@CharliePoole
Contributor Author

I would favor making that an option at some point anyway, but I'm not sure whether it's a run option or a fixture-by-fixture (attributed) option or both.
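For what it's worth, the per-test construction being discussed is, as far as I know, roughly what later shipped as NUnit's [FixtureLifeCycle] attribute (around 3.13); a minimal sketch, assuming that attribute rather than anything available at the time of this comment:

    using NUnit.Framework;

    // Sketch only: with InstancePerTestCase each test gets its own fixture
    // instance, so instance state is not contended between parallel tests.
    [TestFixture]
    [FixtureLifeCycle(LifeCycle.InstancePerTestCase)]
    [Parallelizable(ParallelScope.All)]
    public class PerTestInstanceFixture
    {
        private int counter;   // fresh copy per test case

        [Test]
        public void StartsAtZero() => Assert.That(counter, Is.EqualTo(0));

        [Test]
        public void IncrementIsLocal()
        {
            counter++;
            Assert.That(counter, Is.EqualTo(1));
        }
    }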

@CharliePoole CharliePoole modified the milestones: 3.4, 3.2 Feb 20, 2016
@rprouse rprouse modified the milestones: 3.4, Backlog Mar 10, 2016
@julianhaslinger

julianhaslinger commented Jun 15, 2016

Hi!
The addition of this feature to the next version of NUnit would be great, since it is the only thing that prevents me from switching to NUnit. Is this feature still planned for 3.4?

@CharliePoole
Contributor Author

CharliePoole commented Jun 15, 2016

@julianhaslinger NUnit 3.4 will be out at the end of the month, so no, this feature will not be included.

FYI, this issue is in our Backlog milestone (or pseudo-milestone) rather than 3.4 because we are following a practice of only adding a small number of key defining issues to each numbered milestone in advance.

The next milestone, 3.6, is scheduled to drop in 3 more months, which probably sounds discouraging to you. :-( However, once you see the work for this issue merged to master, you will be able to get an earlier drop from our MyGet feed.

@CharliePoole
Contributor Author

@chris-smith-zocdoc Yes, that's exactly what I'm doing. I created a new type of work item, OneTimeTearDownWorkItem, which is nested in CompositeWorkItem and is dispatched when the last child test is run. Later on, we might look at some efficiencies when the OneTimeSetUp and all the tests have been run on the same thread.
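For readers following along, here is a purely illustrative, generic sketch of the countdown idea (not NUnit's actual CompositeWorkItem code): the last child test to finish trips the countdown and dispatches the one-time teardown.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // Illustrative only: whichever child finishes last signals the countdown
    // to zero and runs the one-time teardown.
    class CompositeWorkSketch
    {
        private readonly CountdownEvent remaining;
        private readonly Action oneTimeTearDown;

        public CompositeWorkSketch(int childCount, Action oneTimeTearDown)
        {
            this.remaining = new CountdownEvent(childCount);
            this.oneTimeTearDown = oneTimeTearDown;
        }

        public Task RunChild(Action childTest) => Task.Run(() =>
        {
            try { childTest(); }
            finally
            {
                // Signal() returns true when the count reaches zero,
                // i.e. when this was the last child to complete.
                if (this.remaining.Signal())
                    this.oneTimeTearDown();
            }
        });
    }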

@gzak

gzak commented Feb 16, 2017

Haven't read everything too carefully, but one thing I'd like to request as part of this feature is that the parallelism be "smart", especially for I/O bound tests. For example, if you have 10 threads executing 100 parallelizable tests, it shouldn't be the case that the 10 threads sit and wait for the first 10 tests to complete before moving on to the next 10 tests. If the first 10 tests start awaiting very long-running I/O tasks, then the threads should be free to move on to other tests. When the I/O tasks complete, threads will resume the awaiting tests as threads free up.

Basically, I'm asking for smart throughput management for I/O bound tests that make extensive use of async/await. This is our number one bottleneck in tests, by far.
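To make the scenario concrete, an I/O-bound async test typically looks like the sketch below, with Task.Delay standing in for a hypothetical slow network or database call; the request is that a worker thread not sit blocked while such an await is pending.

    using System;
    using System.Threading.Tasks;
    using NUnit.Framework;

    [TestFixture]
    public class IoBoundTests
    {
        [Test]
        public async Task FetchesRemoteResource()
        {
            // Stand-in for a slow I/O call (e.g. HTTP request, database query).
            await Task.Delay(TimeSpan.FromSeconds(5));

            Assert.Pass();
        }
    }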

@CharliePoole
Contributor Author

@chris-smith-zocdoc In fact, that's pretty much what I'm doing. I'm essentially using the existing countdown mechanism to trigger dispatch of a one time teardown task. The trick is to get it dispatched on the proper queue.

@gzak Bear in mind that the mechanism for parallel execution already exists. It depends on workers independently pulling tasks rather than on a controller that pushes tasks to workers. So if one worker is busy with a task for a while, the other workers continue to execute other tasks independently. The trick is to set the number of workers based on the nature of the tests being run. NUnit does fairly well by default with normal, compute-bound unit tests, but other sorts of tests may require the user to set an appropriate level of parallelism.
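As a concrete example of setting the number of workers, NUnit 3 has an assembly-level attribute for this, and the console runner accepts a corresponding option (shown as a comment; exact syntax may vary by runner version). The value 20 is only an illustration for an I/O-heavy suite.

    // In AssemblyInfo.cs (or any source file in the test assembly):
    using NUnit.Framework;

    [assembly: LevelOfParallelism(20)]

    // Or, from the command line (nunit3-console):
    //   nunit3-console MyTests.dll --workers=20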

@LirazShay

LirazShay commented Jun 13, 2017

Can someone explain to me how this works?
I have a test class. When I run the tests in this class NOT in parallel, all tests pass. But when I run them with [Parallelizable(ParallelScope.Children)], so that multiple methods in the same class do run in parallel, for some reason some tests fail.
I have instance fields in this class that are used across the tests, and it seems those fields are shared between threads. Am I right? Do you create only one instance of the class and call the methods concurrently on that single object?

@CharliePoole
Contributor Author

You figured it out! Yes, all tests in a fixture use the same instance. This is historical with NUnit, which has always worked that way. You must choose between running the test cases in parallel and having any state that is modified per test. There is no way around it currently.

That said, if you have a decent proportion of fixtures, simply running fixtures in parallel can give you good performance.
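A minimal sketch of the hazard, assuming a fixture marked with ParallelScope.Children: both tests mutate the same field on the single shared fixture instance, so either assertion can fail depending on the interleaving.

    using NUnit.Framework;

    [TestFixture]
    [Parallelizable(ParallelScope.Children)]
    public class SharedStateFixture
    {
        private string currentUser;   // one field, shared by all tests in the fixture

        [Test]
        public void LogsInAsAlice()
        {
            currentUser = "alice";
            // A test running in parallel may overwrite currentUser before this assert.
            Assert.That(currentUser, Is.EqualTo("alice"));
        }

        [Test]
        public void LogsInAsBob()
        {
            currentUser = "bob";
            Assert.That(currentUser, Is.EqualTo("bob"));
        }
    }

The safer options are to keep per-test state in local variables or, as suggested above, to parallelize only at the fixture level.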

@agray
Contributor

agray commented Jun 14, 2017 via email

@jnm2
Contributor

jnm2 commented Jun 14, 2017

@agray Yes, in fact that's the only reason I have an AssemblyInfo.cs now.

@agray
Contributor

agray commented Jun 14, 2017 via email

@jnm2
Contributor

jnm2 commented Jun 14, 2017

@agray The one you asked about:

[Parallelizable(ParallelScope.Children)]

@MaceWindu

Don't forget the assembly target:

[assembly: Parallelizable(ParallelScope.Children)]

@tparikka

@LirazShay I use NUnit to drive Selenium tests. I was using fixture-level fields to hold things like references to user accounts and to the Selenium WebDriver instance I was working with, so I was unable to run tests in parallel within a fixture. The way I worked around that was to write a "factory" (I use quotes because I'm not sure it's the right term) that implements IDisposable. For each test it encapsulates everything my test needs and cleanly tears it all down at the end of the test, with no need for [TearDown] or [OneTimeTearDown], kind of like so:

    using System;
    using System.Collections.Generic;
    using System.Threading;

    public class TestFactory : IDisposable
    {
        // Flag: has Dispose been called?
        private bool disposed = false;

        public TestFactory()
        {
            this.UserRepository = new List<UserAccount>();
            this.DU = new DataUtility();
        }

        // A list of users created for this test
        public List<UserAccount> UserRepository { get; private set; }

        // A very simple data layer utility that uses Dapper to interact with the database in my application
        public DataUtility DU { get; private set; }

        // Gets a new user and adds it to the repository
        public UserAccount GetNewUser()
        {
            var ua = new UserAccount();
            this.UserRepository.Add(ua);
            return ua;
        }

        public void Dispose()
        {
            this.Dispose(true);
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (this.disposed)
            {
                return;
            }

            if (disposing)
            {
                // Deletes all user accounts created during the test
                foreach (UserAccount ua in this.UserRepository)
                {
                    try
                    {
                        ua.Delete();
                    }
                    catch (Exception)
                    {
                        // Take no action if delete fails.
                    }
                }

                this.DU.DeleteNullLoginFailures(); // Cleans up the database after tests
                Thread.Sleep(1500);
            }

            this.disposed = true;
        }
    }

Then, within a test I can do this:

    using NUnit.Framework;

    [TestFixture]
    public class UserConfigureTests
    {
        [Test]
        public void MyExampleTest()
        {
            using (TestFactory tf = new TestFactory())
            {
                var testUser = tf.GetNewUser();

                tf.DU.DoSomethingInTheDatabase(myParameter);

                // Test actions go here. When we exit this using block, the TestFactory
                // cleans up after itself via Dispose, which calls whatever cleanup
                // logic you've written into it.
            }
        }
    }

This way, I can avoid a lot of code duplication, and if I ever need to change the dependencies of my test I just do it once in the factory. If anyone has feedback on the strategy I took I'd appreciate it!

@jnm2
Contributor

jnm2 commented Jun 14, 2017

@tparikka I highly recommend exactly that approach myself.

@agray
Contributor

agray commented Jun 16, 2017 via email

@jnm2
Contributor

jnm2 commented Jun 16, 2017

I haven't looked into using LevelOfParallelism yet. It defaults to the number of cores you have.

If your tests are not CPU-bound, a higher value makes sense. But as always with perf, the answer is so dependent on your scenario that it's better to measure than to guess.

@masaeedu

masaeedu commented Oct 4, 2017

@CharliePoole I'm using TestCaseSource, but it looks like the resulting test cases aren't actually being executed in parallel. Is something like this expected to work:

    [TestFixture]
    class Deserialization
    {
        public static IEnumerable<TimeSpan> ShouldDeserializeAllCases() => Enumerable.Repeat(0, 5).Select(x => TimeSpan.FromSeconds(2));

        [TestCaseSource("ShouldDeserializeAllCases"), Parallelizable(ParallelScope.Children)]
        public void ShouldDeserializeAll(TimeSpan t)
        {
            Thread.Sleep(t);
            Assert.AreEqual(1, 1);
        }
    }

The overall time taken is 10 seconds instead of ~2.

@ParanoikCZE

I think there are no children in this case, so you would do better to use
[Parallelizable(ParallelScope.All)]
or move your attribute to the class level.

@masaeedu

masaeedu commented Oct 4, 2017

@ParanoikCZE Thanks. I'm actually flying blind with respect to what that attribute means, so I've tried all enum values on there. Regardless of which of All, Children, Fixture or Self I use, I get a 10 second execution time (at least in Visual Studio).

I just tried moving it to the class, but this doesn't seem to help either.

@ParanoikCZE

Try this as a source of inspiration :)

class Deserialization
{
    public static IEnumerable<TestCaseData> ShouldDeserializeAllCases
    {
        get
        {
            for (int i = 1; i <= 5; i++)
                yield return new TestCaseData(TimeSpan.FromSeconds(i)).SetName($"Thread_worker_{i}");
        }
    }

    [TestCaseSource(nameof(ShouldDeserializeAllCases)), Parallelizable(ParallelScope.Children)]
    public void ShouldDeserializeAll(TimeSpan t) => System.Threading.Thread.Sleep(t);
}

@masaeedu

masaeedu commented Oct 4, 2017

@ParanoikCZE Thanks again. I tested this out in Visual Studio and the visualization is much clearer, but the tests are still running sequentially. It's easier to see this if you use a constant sleep for each test case instead of increasing steps.

@ParanoikCZE

Try adding [assembly: LevelOfParallelism(5)] to AssemblyInfo. I think there is some default value, but maybe it isn't working for you somehow. Anyway, I'm out of ideas. :)
