
Add separate benchmark.net project to run against code and not nuget packages #349

Open · wants to merge 9 commits into base: benchmarkdotnet

Conversation

theolivenbaum (Contributor)

@NightOwl888 found it useful to be able to run the same benchmarks against the code instead of against the nuget packages
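One common way to make benchmarks switchable between released packages and local code is an MSBuild condition that toggles between a `PackageReference` and a `ProjectReference`. This is a hypothetical sketch, not the actual change in this PR; the `ReferenceLocalProjects` property name, the relative path, and the floating version are all assumptions:

```xml
<!-- Hypothetical sketch: property name, path, and version are assumed,
     not taken from this PR. -->
<ItemGroup Condition="'$(ReferenceLocalProjects)' != 'true'">
  <!-- Benchmark against the published NuGet package (floating prerelease version). -->
  <PackageReference Include="Lucene.Net" Version="4.8.0-*" />
</ItemGroup>
<ItemGroup Condition="'$(ReferenceLocalProjects)' == 'true'">
  <!-- Benchmark against the code in the repository. -->
  <ProjectReference Include="..\..\src\Lucene.Net\Lucene.Net.csproj" />
</ItemGroup>
```

The property could then be flipped from the command line, e.g. `dotnet run -c Release -p:ReferenceLocalProjects=true`.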

NightOwl888 (Contributor)

Thanks for putting together this PR. We definitely need benchmarks in the master branch. However, what we ought to shoot for are benchmarks that can be run by developers and added to nightly and release builds, with a plan for future expansion to benchmark individual components of individual assemblies. I like the precedent set forth in NodaTime.Benchmarks.

A few things to consider:

  1. Since we don't have separate directories for src and tests, the build scripts rely on naming conventions to determine which assemblies are test projects. Specifically, it relies on the convention Lucene.Net.Tests.<name>. We shouldn't use that convention here, as these will be handled differently in the build.
  2. We should have a top level directory for benchmarks (a sibling of src) so we can expand to include benchmarks on each separate assembly (including the test framework). This organization makes it possible to have Directory.Build.props and Directory.Build.targets files that apply specifically to benchmarks.
  3. Within the benchmarks folder, we should have a separate .sln file that includes all of the main assemblies and benchmarks, but not the tests.
  4. Some of the "tests" in Lucene are actually benchmarks as they only record the amount of time the operation takes but don't actually have any asserts that can fail. These are wasting time (about 5% of the total) during normal testing with little benefit. We should aim to remove these from the tests and refactor them as benchmarks that run nightly and during releases. Some examples:

Of course, a key part of getting this into the build is to set up the instrumentation so the benchmark results are easily available for viewing (ideally without having to download the artifacts). This was dirt simple in TeamCity by including an HTML file as a tab in the portal, but we haven't explored the options yet in Azure DevOps.

We are running into a practical limit on the number of projects Visual Studio can load in a reasonable timeframe, so ideally we would keep the benchmarks in a separate .sln. This is the approach that Microsoft takes to manage large projects.

You don't need to include all of these changes in this PR; this is just to outline the plan. But could you please set a precedent by putting the benchmarks in a top level benchmarks directory and a Lucene.Net.Benchmarks.sln file to manage them?
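Under that plan, the repository might be laid out roughly as follows. This is a sketch: only the `benchmarks/` directory, the `Lucene.Net.Benchmarks.sln` file, and the `Directory.Build.props`/`Directory.Build.targets` idea come from the comments above; the individual project name is an assumption:

```
lucenenet/
├── src/
│   ├── Lucene.Net/
│   └── ...                              (other main assemblies and test projects)
├── benchmarks/
│   ├── Directory.Build.props            (build settings applied only to benchmarks)
│   ├── Directory.Build.targets
│   ├── Lucene.Net.Benchmarks.sln        (main assemblies + benchmarks, no tests)
│   └── Lucene.Net.Benchmark.Core/       (assumed project name; avoids the
│                                         Lucene.Net.Tests.<name> convention)
└── Lucene.Net.sln
```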

If someone with experience with benchmarks in CI sees a flaw in this layout, please do provide your valuable input.

rclabo (Contributor) commented Feb 25, 2021

@NightOwl888 seems like a good vision for benchmarking.

NightOwl888 added a commit to NightOwl888/lucenenet that referenced this pull request Apr 20, 2021
NightOwl888 added a commit that referenced this pull request Apr 20, 2021

4 participants