PerfTickLogger, reduce overhead of logging long ticks. #20159
Conversation
This should get a rebase as it's 237 commits behind.
Force-pushed from 1639ce7 to 7ad5e61.
Are there any profiling results showing the benefit here?
There wasn't any profiling done, I think, but this should be a significant yet simple improvement: much less memory would be used on monitoring activities.
ping.
RAM usage, Red Alert shellmap, bleed: 404 MB
Needs a rebase.
Force-pushed from 7ad5e61 to 99cd852.
Weird. This should definitely be faster and take less memory: instead of constantly creating new classes and a bunch of variables, you only save the current time.
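For illustration, a timestamp-only logger along those lines might look roughly like this. This is a minimal hypothetical sketch, not the actual OpenRA implementation; the class name, threshold constant, and exact signature are assumptions:

```csharp
using System;
using System.Diagnostics;

// Hypothetical sketch of an allocation-free long-tick logger.
// Name, threshold, and signature are assumptions, not the OpenRA code.
static class LongTickLogger
{
	const long LongTickThresholdMs = 1;

	public static long GetTimestamp() => Stopwatch.GetTimestamp();

	// Logs only when the elapsed time since startTimestamp exceeds the
	// threshold, and returns a fresh timestamp so the caller can chain
	// calls while holding nothing but a single long.
	public static long LogLongTick(long startTimestamp, string name, object item)
	{
		var now = Stopwatch.GetTimestamp();
		var elapsedMs = (now - startTimestamp) * 1000 / Stopwatch.Frequency;
		if (elapsedMs > LongTickThresholdMs)
			Console.WriteLine($"[long tick] {name} {item}: {elapsedMs} ms");

		return now;
	}
}
```

The caller keeps a single `long` and threads it through each call (`start = LongTickLogger.LogLongTick(start, name, item);`), so no per-tick objects are allocated.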
I was also surprised and repeated the measurements. My method is probably not perfect.
If you change the order of the tests, do the results change?
I tested with the following R script:

```r
truetick <- read.csv("bleed/truetick_time.csv", sep = ",", header = TRUE)
bleed <- truetick[-c(1:25), ]
bleed <- bleed[1:500, ]

mean_time <- mean(bleed$time..ms)
median_time <- median(bleed$time..ms)
sd_time <- sd(bleed$time..ms)

plot(bleed$tick, bleed$time..ms, type = "l", xlab = "Tick", ylab = "Time (ms)",
  main = "Red Alert Shellmap (bleed)")
grid()
abline(h = mean_time, col = "red", lty = 2)
abline(h = mean_time + sd_time, col = "darkred", lty = 3)

truetick <- read.csv("perftickerlogger/truetick_time.csv", sep = ",", header = TRUE)
perftickerlogger <- truetick[-c(1:25), ]
perftickerlogger <- perftickerlogger[1:500, ]

mean_time <- mean(perftickerlogger$time..ms)
median_time <- median(perftickerlogger$time..ms)
sd_time <- sd(perftickerlogger$time..ms)

plot(perftickerlogger$tick, perftickerlogger$time..ms, type = "l",
  xlab = "Tick", ylab = "Time (ms)",
  main = "Red Alert Shellmap (perftickerlogger)")
grid()
abline(h = mean_time, col = "red", lty = 2)
abline(h = mean_time + sd_time, col = "darkred", lty = 3)

plot(bleed, type = "l", xlab = "Tick", ylab = "Time (ms)", col = "blue",
  main = "Red Alert Shellmap")
lines(perftickerlogger, col = "red")
legend("topright", legend = c("bleed", "perf_ticker_logger"),
  col = c("blue", "red"), lty = 1, cex = 0.8)
grid()
```
Can you please check to see if a lot of long ticks were logged? Seems …
Force-pushed from 99cd852 to 43f5c02.
I see. Before the last push/fix, these Effects were being logged as long ticks. They are not there in the last pushed version.
That at least explains it then.
bleed: first measurement takes long, then it is fast.
this: first instantiation is significantly reduced, but it stays jumpy.
Logging some more, as the standard deviation is far too high:
My code:

```csharp
#region Copyright & License Information
/*
 * Copyright (c) The OpenRA Developers and Contributors
 * This file is part of OpenRA, which is free software. It is made
 * available to you under the terms of the GNU General Public License
 * as published by the Free Software Foundation, either version 3 of
 * the License, or (at your option) any later version. For more
 * information, see COPYING.
 */
#endregion

using System;
using System.Diagnostics;
using System.Globalization;
using OpenRA.Support;

namespace OpenRA.Mods.Common.UtilityCommands
{
	class CheckPerformance : IUtilityCommand
	{
		string IUtilityCommand.Name => "--check-performance";

		bool IUtilityCommand.ValidateArguments(string[] args)
		{
			return true;
		}

		[Desc("Clock functions.")]
		void IUtilityCommand.Run(Utility utility, string[] args)
		{
			const int Iterations = 1000;
			Console.WriteLine($"{Iterations} iterations of PerfTimer.LogLongTick");

			var stopwatch = new Stopwatch();
			stopwatch.Restart();
			PerfTickLogger.LogLongTick(0, "", null);
			Console.WriteLine($"First tick: {stopwatch.ElapsedTicks}");

			var elapsedTimes = new long[Iterations];
			for (var i = 0; i < Iterations; i++)
			{
				stopwatch.Restart();
				PerfTickLogger.LogLongTick(0, "", null);
				stopwatch.Stop();
				elapsedTimes[i] = stopwatch.ElapsedTicks;
			}

			var average = CalculateAverage(elapsedTimes);
			Console.WriteLine($"Average Elapsed Time: {average.ToString(CultureInfo.InvariantCulture)}");

			var standardDeviation = CalculateStandardDeviation(elapsedTimes);
			Console.WriteLine($"Standard Deviation: {standardDeviation.ToString(CultureInfo.InvariantCulture)}");
		}

		static double CalculateAverage(long[] values)
		{
			long sum = 0;
			foreach (var value in values)
				sum += value;

			return (double)sum / values.Length;
		}

		static double CalculateStandardDeviation(long[] values)
		{
			var average = CalculateAverage(values);
			double sumOfSquares = 0;
			foreach (var value in values)
			{
				var difference = value - average;
				sumOfSquares += difference * difference;
			}

			var variance = sumOfSquares / values.Length;
			return Math.Sqrt(variance);
		}
	}
}
```
Can you please try it with passing the current timestamp? Right now you pass zero as the start time, which is compared against the current timestamp; the difference will always be greater than the threshold, so every call is seen as a long tick. So I suspect that you are now measuring how efficient the logging itself is.
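To illustrate why a zero start time always looks like a long tick: the computed elapsed time then equals the high-resolution counter's entire uptime, which dwarfs any sensible threshold. A quick standalone check (illustrative arithmetic only, not the actual logger):

```csharp
using System;
using System.Diagnostics;

class Program
{
	static void Main()
	{
		// Elapsed time as a logger would compute it when start == 0:
		var now = Stopwatch.GetTimestamp();
		var elapsedMsFromZero = now * 1000 / Stopwatch.Frequency;

		// This is the counter's full uptime in milliseconds, so any
		// realistic "long tick" threshold is always exceeded.
		Console.WriteLine(elapsedMsFromZero > 1); // True on any running system
	}
}
```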
I replaced it with … which seems to make it worse. When I cache

```csharp
var timestamp = Stopwatch.GetTimestamp();
PerfTickLogger.LogLongTick(timestamp, "", null);
```

it is still worse.
I'm fine with it being worse; I would just like to try to understand why.
This is how it should be called: initialize the timestamp once, then set it to the return value of LogLongTick. This will cause only one `GetTimestamp()` call:

```csharp
var start = PerfTickLogger.GetTimestamp();
for (var i = 0; i < Iterations; i++)
{
	stopwatch.Restart();
	start = PerfTickLogger.LogLongTick(start, "", null);
	stopwatch.Stop();
	elapsedTimes[i] = stopwatch.ElapsedTicks;
}
```
```csharp
const int Iterations = 1000;
Console.WriteLine($"{Iterations} iterations of PerfTimer.LogLongTick");

var stopwatch = new Stopwatch();
stopwatch.Restart();
var start = PerfTickLogger.GetTimestamp();
stopwatch.Stop();
Console.WriteLine($"First tick: {stopwatch.ElapsedTicks}");

var elapsedTimes = new long[Iterations];
for (var i = 0; i < Iterations; i++)
{
	stopwatch.Restart();
	start = PerfTickLogger.LogLongTick(start, "", null);
	stopwatch.Stop();
	elapsedTimes[i] = stopwatch.ElapsedTicks;
}
```

gives me … but not at all in a reproducible way. I think I am measuring something else.
But do you agree that the body of …
What might be better is to not restart your stopwatch in the loop; just measure the total loop time. You may now be testing your stopwatch. You could uncomment LogLongTick; see the diff.
Yes, I am testing the stopwatch when I try to measure in the loop. I am doing 10,000 iterations now and stop at the end. The overhead of the stopwatch or the empty loop is still around 2000 ms.

this: …
bleed: …

So we clearly have a winner, although I am not sure how impactful it is, given how hard it was to measure.
```csharp
using System;
using System.Diagnostics;
using OpenRA.Support;

namespace OpenRA.Mods.Common.UtilityCommands
{
	class BenchmarkPerfTickLogger : IUtilityCommand
	{
		string IUtilityCommand.Name => "--perf";

		bool IUtilityCommand.ValidateArguments(string[] args)
		{
			return true;
		}

		void IUtilityCommand.Run(Utility utility, string[] args)
		{
			const int Iterations = 10000;
			Console.WriteLine($"{Iterations} iterations of perfTickLogger.LogTickAndRestartTimer");

			var stopwatch = new Stopwatch();
			stopwatch.Restart();
			var perfTickLogger = new PerfTickLogger();
			perfTickLogger.Start();
			stopwatch.Stop();
			Console.WriteLine($"First tick: {stopwatch.ElapsedTicks}");

			stopwatch.Restart();
			for (var i = 0; i < Iterations; i++)
				perfTickLogger.LogTickAndRestartTimer("", null);
			stopwatch.Stop();
			Console.WriteLine($"Elapsed Time {stopwatch.ElapsedTicks}");
		}
	}
}
```
```csharp
using System;
using System.Diagnostics;
using OpenRA.Support;

namespace OpenRA.Mods.Common.UtilityCommands
{
	class BenchmarkPerfTickLogger : IUtilityCommand
	{
		string IUtilityCommand.Name => "--perf";

		bool IUtilityCommand.ValidateArguments(string[] args)
		{
			return true;
		}

		void IUtilityCommand.Run(Utility utility, string[] args)
		{
			const int Iterations = 10000;
			Console.WriteLine($"{Iterations} iterations of PerfTimer.LogLongTick");

			var stopwatch = new Stopwatch();
			stopwatch.Restart();
			var start = PerfTickLogger.GetTimestamp();
			stopwatch.Stop();
			Console.WriteLine($"First tick: {stopwatch.ElapsedTicks}");

			stopwatch.Restart();
			for (var i = 0; i < Iterations; i++)
				start = PerfTickLogger.LogLongTick(start, "", null);
			stopwatch.Stop();
			Console.WriteLine($"Elapsed Time {stopwatch.ElapsedTicks}");
		}
	}
}
```
A future improvement might be: https://stackoverflow.com/questions/55686928/using-stopwatch-in-c-sharp
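If I read the linked thread correctly, the gist is to measure with the static `Stopwatch.GetTimestamp()`/`Stopwatch.Frequency` pair (and `Stopwatch.GetElapsedTime` on .NET 7+) instead of allocating `Stopwatch` instances; a sketch under that assumption:

```csharp
using System;
using System.Diagnostics;

class Program
{
	static void Main()
	{
		// Measure without allocating a Stopwatch instance.
		var start = Stopwatch.GetTimestamp();

		// ... work being measured ...

		var end = Stopwatch.GetTimestamp();
		var elapsedMs = (end - start) * 1000.0 / Stopwatch.Frequency;
		Console.WriteLine($"Elapsed: {elapsedMs} ms");

		// On .NET 7 or later, the same without manual arithmetic:
		// TimeSpan elapsed = Stopwatch.GetElapsedTime(start);
	}
}
```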
Make PerfTickLogger static. Do not allocate PerfTickLogger objects on hot paths.