Remove inner iteration counts from the benchmarks #126
Conversation
This reverts commit dfa095d.
… to make it consumable
…y* benchmarks which are not needed with BDN
…eturn the result to make it consumable
…esult to make it consumable, simplify the setup
…lt to make it consumable
… the result to make it consumable
…he result to make it consumable
…turn the result to make it consumable
…ong benchmark where creating the tasks has huge overhead
[Benchmark]
public float LengthSquaredJitOptimizeCanaryBenchmark()
@jorive I have removed the *Canary* benchmarks because their purpose was to avoid the following pattern:
var result = VectorTests.Vector2Value;
for (var iteration = 0; iteration < VectorTests.DefaultInnerIterationsCount; iteration++)
{
result = Vector2.Multiply(result, VectorTests.Vector2Delta);
}
return result;
To be optimized to:
return Vector2.Multiply(VectorTests.Vector2Value, VectorTests.Vector2Delta);
With BDN we get this out of the box, because it has its own loop one level above, and it prevents such elimination by blocking inlining of the benchmark method (it invokes the benchmark via a delegate).
Moreover, if one day the JIT becomes smart enough to recognize that Vector2.Multiply(VectorTests.Vector2Value, VectorTests.Vector2Delta)
is a constant, BDN will report 0 in the results and it will be obvious to us what has happened.
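As an illustration, the BDN-friendly shape of such a benchmark after the canary removal might look like this (the method name is hypothetical):

```csharp
// No inner loop and no canary needed: BDN calls this via a delegate in its own
// loop one level above, and consumes the returned value, which defeats both
// inlining-based constant folding and dead-code elimination.
[Benchmark]
public Vector2 Multiply() => Vector2.Multiply(VectorTests.Vector2Value, VectorTests.Vector2Delta);
```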
XmlDocument doc = _doc;

for (int i = 0; i < innerIterations; i++)
    doc.LoadXml("<elem1 child1='' child2='duu' child3='e1;e2;' child4='a1' child5='goody'> text node two e1; text node three </elem1>");
this was a bug: this benchmark was reusing the same XmlDocument instance
for all iterations and growing it with the provided string in every iteration, so the time was increasing with every benchmark invocation.
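One way to avoid the accumulating state is to create a fresh document per invocation and return it so BDN consumes the result; a hedged sketch (the method name is an assumption, not the actual code from the PR):

```csharp
// Each invocation gets its own XmlDocument, so no state leaks between
// iterations and the measured time stays stable.
[Benchmark]
public XmlDocument LoadXml()
{
    var doc = new XmlDocument();
    doc.LoadXml("<elem1 child1='' child2='duu' child3='e1;e2;' child4='a1' child5='goody'> text node two e1; text node three </elem1>");
    return doc; // returning the result prevents dead-code elimination
}
```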
@valenis I am removing the InnerIterationCount here (see the description above for the full explanation)
@adamsitnik With these removals, do we know approximately how long it takes to run all the benchmarks (CoreClr, CoreFx)? What's the difference before/after? Will these changes make the benchmarks "nano-benchmarks"?
[GlobalSetup(Target = nameof(ObjectGetTypeNoBoxing))]
public void SetupObjectGetTypeNoBoxing() => blackObject = Color.Black;
public void EnumCompareTo(Color color) => color.CompareTo(Color.White);
Regarding the void here: shouldn't the return type be int?
Console.BackgroundColor = ConsoleColor.DarkGray;
Console.BackgroundColor = ConsoleColor.Red;
Console.BackgroundColor = ConsoleColor.DarkGreen;
Console.BackgroundColor = ConsoleColor.White;
}
Aren't these two benchmarks testing assignment operation of an enum? Do we need them?
nvm... it's testing the PInvoke to the Pal layer. #Closed
Given that we are renaming the benchmarks (creating new ones), wouldn't it be better to make this a generic? Refers to: src/benchmarks/corefx/System.Numerics.Vectors/Perf_Vector2.cs:1 in 4000d50.
LGTM modulo comments.
For the vector operations it's not possible as of today:
public class VectorTests<T> where T : VectorBase
{
private T left = new T(), right = new T();
[Benchmark] public T Multiply() => left.Multiply(right); // where Multiply would come from VectorBase
}
I think that the goal here was to compare
…ionCount # Conflicts: # src/benchmarks/corefx/System.Text.Encoding/Perf.Encoding.cs
When I was porting the benchmarks I decided to keep the InnerIterationCount to avoid a chart-scaling issue.
However, now that @jorive has written the BenchView importer, I have realized that it does not make any sense to keep the InnerIterationCount.
Why is that? Let's consider the following benchmark:
xunit style:
bdn style:
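As an illustration of the two styles (the method bodies, strings, and counts here are hypothetical, not the actual benchmark from the PR):

```csharp
// xunit-performance style: the benchmark owns an inner loop sized by
// InnerIterationCount, and the whole iteration is what gets measured.
[Benchmark(InnerIterationCount = 10000)]
public static void GetBytes_XunitStyle()
{
    foreach (var iteration in Benchmark.Iterations)
        using (iteration.StartMeasurement())
            for (int i = 0; i < Benchmark.InnerIterationCount; i++)
                Encoding.UTF8.GetBytes("some sample text");
}

// BenchmarkDotNet style: no inner loop. BDN invokes the method in its own
// loop and chooses the number of operations per iteration itself.
[Benchmark]
public byte[] GetBytes_BdnStyle() => Encoding.UTF8.GetBytes("some sample text");
```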
When the data from xunit-performance gets reported to BenchView, it's "70 Duration (ms)", so the metric is the duration of an entire iteration. InnerIterationCount does not matter; the result is not scaled.
BenchmarkDotNet scales the result and reports the following results:
Why is the duration of a single iteration different for the two tools? Because BenchmarkDotNet scales the number of operations per iteration according to the IterationTime setting, which for our repo is currently 250ms.

Summary: if I keep InnerIterationCount in the code, the duration of a single iteration will still be different. So I can remove it and not worry about the scaling, because we are introducing a new metric, so the historical data won't be affected.