
Add benchmarks #388

Closed
JohanLarsson opened this issue Jun 21, 2017 · 16 comments
@JohanLarsson
Contributor

As it is now, it is not uncommon for Moq to show up in the profiler. It will probably never be a fast library, given all the use of expressions, but it would be good to track performance, since it is quite relevant when the library is used in tests.

NodaTime has a nice benchmark setup, I think. The infrastructure for running benchmarks is perhaps tricky; I'm not sure running them on CI is worth much.

@JohanLarsson
Contributor Author

Stuff like this is not helping perf:

// The caller has to construct an Expression<Func<T>> on every call just so the
// parameter name can be recovered for the exception message; that allocation
// happens even on the common non-null path.
public static void NotNull<T>(Expression<Func<T>> reference, T value)
{
	if (value == null)
	{
		throw new ArgumentNullException(GetParameterName(reference));
	}
}

Using C# 7, it can be replaced by:

this.underlyingCreateMocks = underlyingCreateMocks ?? throw new ArgumentNullException(nameof(underlyingCreateMocks));

@stakx
Contributor

stakx commented Jun 21, 2017

(Yep. Now that we have nameof, the whole Guard helper class could be rewritten as well.)
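
For illustration, a minimal sketch of what a nameof-based guard could look like, assuming the caller passes the parameter name instead of an expression tree; this is only an assumed shape for such a rewrite, not Moq's actual Guard class:

using System;

internal static class Guard
{
	// The caller supplies the parameter name via nameof(), so no expression
	// tree has to be built just to produce an error message.
	public static void NotNull<T>(T value, string parameterName)
	{
		if (value == null)
		{
			throw new ArgumentNullException(parameterName);
		}
	}
}

// Usage: Guard.NotNull(underlyingCreateMocks, nameof(underlyingCreateMocks));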

@stakx
Contributor

stakx commented Jun 21, 2017

The idea of setting up some kind of strategy for measuring the effects of optimizations is a good one, btw!

Speaking of priorities, I think right now it's most important to first deal with extant bug reports and long-standing feature requests. Next, there are some convoluted bits of code that need to be rewritten to make their intent clearer. Finally, optimizations. ("Make it work, make it right, make it fast.")

@jatouma

jatouma commented Oct 6, 2017

Tried updating from Moq 4.2.1402 to Moq 4.99 hoping for a performance improvement, but our unit tests now take 31 minutes, up from 17 minutes.

@stakx
Contributor

stakx commented Oct 7, 2017

@jatouma: Yes, unfortunately performance has suffered in more recent versions. This could be due to a variety of reasons, such as the introduction of locks to improve thread safety, or possibly the increased use of expression tree lambda compilation. I think it would be great to see optimizations (targeting both execution time and memory footprint) being applied to Moq.

@stakx
Contributor

stakx commented Nov 11, 2017

@JohanLarsson - sorry for not getting back to you sooner regarding benchmarks.

I've been spending some time using PerfView lately. I essentially profiled several runs of Moq's own unit test suite, then checked where execution time is spent. It seems that >98% of total execution time is spent on (a) proxy type & object creation via Castle DynamicProxy (which in turn relies on System.Reflection.Emit), and (b) expression tree compilation via LambdaExpression.Compile. There's no way we can make these things faster, as it's external code. Time spent inside Moq's codebase that isn't related to these things is pretty much insignificant.

So for me, the shocking conclusion is that we can try optimizing Moq's codebase all we want, but since most time isn't actually spent inside Moq, we're not going to see any significant speed gains. I'm guessing the best we could do would be a <1% speed increase.

Anything better than that will require some major rethinking and re-architecting of how Moq currently works. Probably the three most important things to change would be (1) how Moq deals with multi-dot setup expressions, (2) what mock.SetupAllProperties does and when it gets used, and (3) how property setter setups happen. (This summary comes more from my personal experience with the codebase than from the PerfView analysis directly, but PerfView data supports it. The likely solution would be to make Moq perform some operations lazily / on-demand rather than eagerly / right away).
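
(For context, a "multi-dot" setup is one like the sketch below, where a single setup expression traverses several members and Moq has to create inner mocks for each intermediate step; the interfaces here are hypothetical and only illustrate the shape of such an expression.)

using System;
using Moq;

// Hypothetical interfaces, used only to show the shape of a multi-dot setup.
public interface ISettings { TimeSpan Timeout { get; } }
public interface IRepository { ISettings Settings { get; } }
public interface IService { IRepository Repository { get; } }

public static class MultiDotExample
{
	public static void Run()
	{
		var mock = new Mock<IService>();

		// To intercept the whole path, Moq has to create and wire up inner
		// mocks for Repository and Settings when the setup is made.
		mock.Setup(s => s.Repository.Settings.Timeout)
			.Returns(TimeSpan.FromSeconds(5));

		Console.WriteLine(mock.Object.Repository.Settings.Timeout); // 00:00:05
	}
}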

Until that re-architecting has happened, I fear that using benchmarks as a guide to better performance will be a futile exercise in micro-optimization. Things like avoiding unnecessary .ToArray() calls, using string builders instead of repeated string concatenation, etc. (while being perfectly good and welcome code improvements) simply won't make a big difference, execution speed-wise.

I'd be happy to be proven wrong on this. If you have any experience with profiling, perhaps you could run your own analysis and either refute or confirm the above? I'd also value your opinion on whether it would still make sense to benchmark if my above assessment turns out to be correct, and what purpose the benchmarks would then serve.

@JohanLarsson
Contributor Author

I think benchmarks are a good idea for most libraries. They don't need to be anything huge; just a couple of checks for common use cases is nice for tracking performance regressions. There also isn't a huge downside to benchmarks; I don't think they will add much maintenance pain.

Expressions *can* be cached, but it requires really nasty comparers.

I have not used Castle for anything, but maybe there are pooling opportunities? In my experience, there is usually a relatively small number of types being mocked.
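
To sketch the caching idea: a compiled-delegate cache keyed by the expression tree, with a custom equality comparer. The comparer below is deliberately naive (it compares the expressions' string form, which conflates different captured values and distinguishes expressions that differ only in parameter names), so it only shows the shape of the idea, not a usable implementation:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq.Expressions;

public static class CompiledDelegateCache
{
	// Deliberately naive: a real comparer would have to walk and compare both
	// expression trees structurally, which is the "really nasty" part.
	private sealed class ExpressionStringComparer : IEqualityComparer<LambdaExpression>
	{
		public bool Equals(LambdaExpression x, LambdaExpression y) => x.ToString() == y.ToString();
		public int GetHashCode(LambdaExpression obj) => obj.ToString().GetHashCode();
	}

	private static readonly ConcurrentDictionary<LambdaExpression, Delegate> cache =
		new ConcurrentDictionary<LambdaExpression, Delegate>(new ExpressionStringComparer());

	// Compile only on a cache miss; later lookups reuse the compiled delegate.
	public static Func<T, TResult> GetOrCompile<T, TResult>(Expression<Func<T, TResult>> expression)
	{
		return (Func<T, TResult>)cache.GetOrAdd(
			expression,
			e => ((Expression<Func<T, TResult>>)e).Compile());
	}
}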

@stakx
Contributor

stakx commented Nov 11, 2017

Thanks for the reply Johan. Don't get me wrong, I like the idea of having benchmarks. I'm not even worried about their maintenance cost. I just want to make sure that we know what we can expect from them.

I'll get back to them shortly. To answer a few of your other points first:

Caching is something worth thinking about. I probably lack the experience to come up with a cache that will hold on to the right amount and the right selection of expressions. Given that Moq is perhaps most frequently used in unit tests, I have no idea what cache hit/miss rates could be expected in real-world use cases (or whether caching would make a major difference)... but perhaps someone else has experience with that. It's also worth noting that Moq already has a (basic) class for comparing expression trees; it might be possible to build on top of that.

Castle, AFAIK, already performs caching of the types it creates. Not sure though whether it also caches information about the types those refer to (base classes and interfaces). I'll try to find out more about this.

Getting back to the benchmarks, though: if you're not discouraged by my profiling analysis, and if you'd like to set up a few simple benchmarks for some common use cases, using BenchmarkDotNet, I'd happily review a PR against develop.

  • It would be nice if the first benchmarks focused on functionality / features that have been around for a while, so that ideally one could run the benchmarks against previous versions of Moq, too, for comparisons.

  • One word of warning: to make it easier for the reviewer (i.e. me :-), it might be better to start out simple, e.g. with a plain standalone project that can be run manually. Complexity, such as scripts and automation, can still be added later on. My main point here is that it's probably easier to understand the final setup when reviewed in stages than all at once, so PRs will probably go through quicker that way.
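
A rough sketch of what such a plain standalone BenchmarkDotNet project could contain; the interface and the benchmark methods are made up for illustration, while the BenchmarkDotNet attributes and the Moq calls are the real APIs:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Moq;

// Hypothetical interface to mock; only here to make the benchmarks self-contained.
public interface ICalculator
{
	int Add(int a, int b);
}

[MemoryDiagnoser]
public class MockBenchmarks
{
	[Benchmark]
	public object CreateMock() => new Mock<ICalculator>().Object;

	[Benchmark]
	public int SetupAndInvoke()
	{
		var mock = new Mock<ICalculator>();
		mock.Setup(c => c.Add(1, 2)).Returns(3);
		return mock.Object.Add(1, 2);
	}
}

public static class Program
{
	public static void Main() => BenchmarkRunner.Run<MockBenchmarks>();
}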

@JohanLarsson
Contributor Author

I'll try to find time to create a small PR tomorrow.

@stakx
Contributor

stakx commented Nov 11, 2017

Looking forward to it. Thank you!

@jakubozga

#504
Is this going to be fixed in 4.8?

@stakx
Contributor

stakx commented Dec 1, 2017

@jakubozga: If you're interested in #504, why not post there? What about it should be fixed?

@JohanLarsson
Contributor Author

Sorry about not getting a PR done; I've been crazy busy.

@informatorius
Contributor

informatorius commented Feb 23, 2018

I added a lambda expression compile cache and it helps performance.
#188

@stakx
Contributor

stakx commented Mar 19, 2018

@JohanLarsson - This issue has been dormant for a while. At this point, I suggest we close it. While Moq 5 (https://github.com/moq/moq) is not yet published, its initial release is definitely getting closer, and I have the feeling that it'll take over the reins from Moq 4 pretty soon. I don't want to prematurely kill off Moq 4—in fact, I'd like to maintain it a while longer for those folks who can't migrate to Moq 5 right away—but bigger investments, such as setting up proper benchmarking, should perhaps better be made against Moq 5 once it's released.

Please let me know if that's OK with you.

@JohanLarsson
Contributor Author

Yes, it is ok.

@stakx stakx closed this as completed Mar 19, 2018