[GlobalSetup(Target = nameof(ConflictBetweenBackAndForeVersionsNotCopyLocal))]
public void ConflictBetweenBackAndForeVersionsNotCopyLocalSetup()
{
    // Configure a ResolveAssemblyReference (RAR) task with two conflicting
    // versions of the same assembly. Running this in [GlobalSetup] keeps the
    // construction cost out of the measurement.
    t = new ResolveAssemblyReference
    {
        Assemblies = new ITaskItem[]
        {
            new TaskItem("D, Version=2.0.0.0, Culture=neutral, PublicKeyToken=aaaaaaaaaaaaaaaa"),
            new TaskItem("D, Version=1.0.0.0, Culture=neutral, PublicKeyToken=aaaaaaaaaaaaaaaa")
        },
        BuildEngine = new MockEngine(),
        SearchPaths = new string[]
        {
            s_myLibrariesRootPath, s_myLibraries_V2Path, s_myLibraries_V1Path
        }
    };
}

[Benchmark]
public void ConflictBetweenBackAndForeVersionsNotCopyLocal()
{
    // Only task execution is measured.
    t.Execute();
}
I'm intrigued! I see this as a complicated issue: it could bring a lot of nice benefits, including easy perf testing, but I don't want our CI builds (or local builds) to take a lot longer because they're running perf tests, especially perf tests on code paths we didn't touch.
I added the untriaged label because I think we need to talk about it. This could be a very impactful change, and I don't want to go in blind.
> I don't want our CI builds (or local builds) to take a lot longer because it's running perf tests
Yes, I agree; not all of these tests would or should necessarily run on every commit. Running them once every few days, or even weekly, could already provide useful feedback though :-)
Hey, sorry I missed this when you filed it. Having perf tests is a great goal, but it's a high-complexity area and the core MSBuild team isn't planning to invest heavily in it in the near future. We're concerned about reliability, noise in measurements, and having a good testbed for measurements. We'd also like to make sure we use the .NET performance infrastructure as much as possible to avoid duplication and increase consistency with other dotnet repos. Unfortunately none of us know much about what that means at the moment!
We're glad you're excited to work on this but wanted to warn you that it may not get quick attention and we might be pretty picky about how it's implemented.
Hi there,
I am creating this issue to track the ideas and progress of adding BenchmarkDotNet-based benchmarks to MSBuild.
The idea is to be able to benchmark MSBuild tasks independently and help catch regressions and improve performance.
I started experimenting with this here: https://github.com/mfkl/msbuild/tree/benchmark. The RAR benchmark above is a basic example.
I've been looking at existing unit tests as a base to start this work. I think that time-consuming tasks, like RAR (even more so with the current perf-focused RAR refactoring), are good candidates for benchmarks.
If you have any opinion about this, your thoughts are welcome.
Regarding the infra, I'm not familiar with helix/arcade but it seems to be favored for perf tests in other dotnet projects. Feel free to let me know what would be the plan regarding this, I'm fine working on this locally for now though.
Benchmarks can be run with:
dotnet run -c Release
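For context, a typical entry point for a BenchmarkDotNet project looks like the following. This is a minimal sketch, not code from the linked branch; the class name RarBenchmarks is a hypothetical container for benchmark methods like the one above.

```csharp
using BenchmarkDotNet.Running;

public class Program
{
    public static void Main(string[] args)
    {
        // Discovers and runs every [Benchmark] method in RarBenchmarks.
        // BenchmarkDotNet requires an optimized build, hence `dotnet run -c Release`.
        BenchmarkRunner.Run<RarBenchmarks>();
    }
}
```

Using BenchmarkSwitcher instead of BenchmarkRunner would allow selecting individual benchmarks from the command line, which could help keep CI runs short.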
Relevant:
https://github.com/xamarin/xamarin-android/blob/master/tests/MSBuildDeviceIntegration/Tests/PerformanceTest.cs
https://github.com/xamarin/xamarin-android/blob/master/tests/msbuild-times-reference/MSBuildDeviceIntegration.csv