
Move counter and timer calculations out of graphite backend and into separate module #167

Merged
merged 17 commits into etsy:master

2 participants

@draco2003

An initial attempt at solving Issue #103

This replicates the graphite timer and counter calculations, but separates them out into a module.

It adds counter_rates and timer_data to the metrics_hash before it is passed to the backends.

Merging them into the metrics_hash keeps the function signature and events the same, while putting them into their own data sets preserves backwards compatibility and gives an easy migration path.

The backends are still required to iterate over the different types of metrics in order to format them as needed per backend, but they no longer need to do any heavy lifting as far as calculations go (unless they want to).
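
For illustration, this is roughly what a backend's flush handler can look like once it consumes the precalculated values instead of recomputing them. The backend name and output format here are hypothetical; only the counter_rates and timer_data fields come from this change:

```
// Hypothetical minimal backend: formats the precalculated values rather than
// recomputing per-second rates and timer statistics itself.
function flush_stats(ts, metrics) {
  var lines = [];
  var key, stat;
  for (key in metrics.counter_rates) {
    lines.push('example.' + key + '.rate ' + metrics.counter_rates[key] + ' ' + ts);
  }
  for (key in metrics.timer_data) {
    for (stat in metrics.timer_data[key]) {
      lines.push('example.timers.' + key + '.' + stat + ' ' + metrics.timer_data[key][stat] + ' ' + ts);
    }
  }
  console.log(lines.join("\n"));
}
```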

Also added an initial set of tests against the processedmetrics module to improve test coverage for the calculations.

@mrtazz mrtazz commented on an outdated diff
backends/graphite.js
((13 lines not shown))
for (key in counters) {
- var value = counters[key];
- var valuePerSecond = value / (flushInterval / 1000); // calculate "per second" rate
-
- statString += 'stats.' + key + ' ' + valuePerSecond + ' ' + ts + "\n";
- statString += 'stats_counts.' + key + ' ' + value + ' ' + ts + "\n";
+ statString += 'stats.' + key + ' ' + counter_rates[key] + ' ' + ts + "\n";
@mrtazz Etsy, Inc. member
mrtazz added a note

Minor pet peeve: please keep the alignment of the string concatenation.
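
Presumably the alignment in question is along these lines, keeping the concatenated value columns lined up across the two statements:

```
statString += 'stats.'        + key + ' ' + counter_rates[key] + ' ' + ts + "\n";
statString += 'stats_counts.' + key + ' ' + counters[key]      + ' ' + ts + "\n";
```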

@mrtazz mrtazz commented on an outdated diff
backends/graphite.js
((13 lines not shown))
for (key in counters) {
- var value = counters[key];
- var valuePerSecond = value / (flushInterval / 1000); // calculate "per second" rate
-
- statString += 'stats.' + key + ' ' + valuePerSecond + ' ' + ts + "\n";
- statString += 'stats_counts.' + key + ' ' + value + ' ' + ts + "\n";
+ statString += 'stats.' + key + ' ' + counter_rates[key] + ' ' + ts + "\n";
+ statString += 'stats_counts.' + key + ' ' + counters[key] + ' ' + ts + "\n";
numStats += 1;
}
for (key in timers) {
if (timers[key].length > 0) {
@mrtazz Etsy, Inc. member
mrtazz added a note

shouldn't this be for (key in timer_data) and if (timer_data[key].length > 0)?

@mrtazz Etsy, Inc. member
mrtazz added a note

in this case it might be the same. But in general if you're acquiring keys from a different array than you're using them on, you're gonna have a bad time. And if we ever decide to drop timers from calculations or anything like that, this will bite us.

I was thinking more along the lines of something like the librato library, where they might want to do non-standard calculations in a single timer loop (similar to the counters loop above, which uses the same key against different data sources). Though I can change it to timer_data easily enough; see the sketch below.
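
A rough sketch of that single-loop pattern, with the timers keys driving lookups into both the raw samples and the precalculated stats (the percentile helper and the out object are purely illustrative):

```
for (key in metrics.timers) {
  if (metrics.timers[key].length > 0) {
    var raw = metrics.timers[key];          // raw samples, for backend-specific math
    var precalc = metrics.timer_data[key];  // precalculated stats from process_metrics
    // e.g. reuse the standard mean but derive a non-standard value from the raw samples
    out[key] = { mean: precalc.mean, p99: percentile(raw, 99) };
  }
}
```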

@mrtazz mrtazz commented on an outdated diff
lib/processedmetrics.js
@@ -0,0 +1,88 @@
+var ProcessedMetrics = function (metrics, flushInterval) {
@mrtazz Etsy, Inc. member
mrtazz added a note

I think process_metrics would be a better name here.

easy enough. Would you want the filename to follow the same naming?

@mrtazz Etsy, Inc. member
mrtazz added a note

yeah I think that makes sense.

@mrtazz mrtazz commented on an outdated diff
stats.js
@@ -69,6 +71,8 @@ function flushMetrics() {
}
});
+ metrics_hash = pm.ProcessedMetrics(metrics_hash, flushInterval)
+
// Flush metrics to each backend.
backendEvents.emit('flush', time_stamp, metrics_hash);
@mrtazz Etsy, Inc. member
mrtazz added a note

This shouldn't be blocking. The metrics processing function should take a callback, which then emits to the backend.
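
A minimal sketch of that shape (names are placeholders; the merged stats.js change later in this PR ends up doing essentially this):

```
// the processing function takes a callback and hands it the finished hash,
// and the callback is what emits the flush event to the backends
pm.process_metrics(metrics_hash, flushInterval, time_stamp, function (metrics) {
  backendEvents.emit('flush', time_stamp, metrics);
});
```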

@mrtazz
Etsy, Inc. member

I took a quick glance and commented on what came to mind. Gonna do a test run of it as a whole, once the changes are in.

@draco2003

Did the quick changes. Will post the non-blocking metric processing call next.

@mrtazz mrtazz commented on an outdated diff
lib/process_metrics.js
((69 lines not shown))
+ current_timer_data["std"] = stddev;
+ current_timer_data["upper"] = max;
+ current_timer_data["lower"] = min;
+ current_timer_data["count"] = count;
+ current_timer_data["sum"] = sum;
+ current_timer_data["mean"] = mean;
+
+ timer_data[key] = current_timer_data;
+
+ }
+ }
+
+ //add processed metrics to the metrics_hash
+ metrics.counter_rates = counter_rates;
+ metrics.timer_data = timer_data;
+ flushCallback();
@mrtazz Etsy, Inc. member
mrtazz added a note

We should pass the metrics object to the callback as a parameter and not rely on a global ref.

@mrtazz mrtazz commented on an outdated diff
stats.js
@@ -69,8 +71,11 @@ function flushMetrics() {
}
});
- // Flush metrics to each backend.
- backendEvents.emit('flush', time_stamp, metrics_hash);
+ pm.process_metrics(metrics_hash, flushInterval, time_stamp, function emitFlush() {
@mrtazz Etsy, Inc. member
mrtazz added a note

The callback should take the metrics object as a parameter, since we really don't want to access the global metrics object here. It might also make sense to pass in an exit code/status parameter so that the callback can act on any errors that happen during metrics processing, and at least log them.

@draco2003

Since we always have two counter_rates keys, this should never be true. We could change the counter_rates check to less than three, to log when we aren't getting any non-statsd metrics to calculate.

Let me know if there are any other checks or catches you had in mind for triggering an error.

The calculations are bounds-checked fairly well. The other place I could think of is type checking the parameters coming into the function, if we think that makes sense.
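
For concreteness, the check floated above might look something like this (the threshold of three and the log message are only illustrative; statsd always contributes two internal counters, so fewer than three rate keys means no user-defined counters were calculated):

```
// log when only the two internal statsd counters produced rates this interval
if (Object.keys(counter_rates).length < 3) {
  l.log("no non-statsd counter rates were calculated this flush interval", 'debug');
}
```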

@mrtazz mrtazz commented on an outdated diff
lib/logger.js
((6 lines not shown))
}
- this.util.log(type + msg);
+ this.util.log(type + ": " + msg);
@mrtazz Etsy, Inc. member
mrtazz added a note

Why is this added here? It seems unrelated to the pull request.

@mrtazz mrtazz commented on an outdated diff
stats.js
@@ -69,8 +72,14 @@ function flushMetrics() {
}
});
- // Flush metrics to each backend.
- backendEvents.emit('flush', time_stamp, metrics_hash);
+ pm.process_metrics(metrics_hash, flushInterval, time_stamp, function emitFlush(err, metrics) {
+ // Flush metrics to each backend.
+ if (err) {
+ l.log("Errored processing metrics with: " + err, 'debug');
+ }
@mrtazz Etsy, Inc. member
mrtazz added a note

Thinking about whether we want to wrap the backends emit in an else clause or let the backend somehow know that the metrics processing failed. Just passing the potentially erroneous metrics to the backend is probably not the right thing to do.

@draco2003

Reverted the log tweak commit. I'll submit that as a separate pull request.

Changed the error handling to be a fatal error.
If processing metrics errors, something else is most likely wrong and should be identified quickly.
I have it emit the error to all backends so they can subscribe to it and cleanup prior to the application exiting.

If we did an else, the flush event wouldn't be triggered, the metrics wouldn't be cleaned up, and we'd most likely hit the same error after the next interval. It would also throw off any of the flushInterval-based calculations, since it could be two or more intervals before processing succeeded.

If we called clear_metrics and cleaned everything up, we'd probably have gaps in our data and would most likely hit the same error on the next flush interval.

@mrtazz mrtazz commented on an outdated diff
stats.js
@@ -69,8 +72,21 @@ function flushMetrics() {
}
});
- // Flush metrics to each backend.
- backendEvents.emit('flush', time_stamp, metrics_hash);
+ pm.process_metrics(metrics_hash, flushInterval, time_stamp, function emitFlush(err, metrics) {
+ // Flush metrics to each backend only if the metrics processing was successful.
+ // Add processing_errors counter to allow for monitoring
+ if (err) {
+ l.log("Exiting due to error processing metrics with: " + err);
+ // Send metrics to backends for any last minute processing
+ // and give backends a chance to cleanup before exiting.
+ backendEvents.emit('error', time_stamp, metrics, err);
@mrtazz Etsy, Inc. member
mrtazz added a note

After thinking about this some more I think the best way to handle this is to just call backendEvents.emit("error", time_stamp, metrics, err). This will work with the existing backends for now, but provides a possibility to check for the error and handle it accordingly.

@mrtazz mrtazz commented on an outdated diff
stats.js
@@ -69,8 +72,21 @@ function flushMetrics() {
}
});
- // Flush metrics to each backend.
- backendEvents.emit('flush', time_stamp, metrics_hash);
+ pm.process_metrics(metrics_hash, flushInterval, time_stamp, function emitFlush(err, metrics) {
+ // Flush metrics to each backend only if the metrics processing was successful.
+ // Add processing_errors counter to allow for monitoring
+ if (err) {
+ l.log("Exiting due to error processing metrics with: " + err);
+ // Send metrics to backends for any last minute processing
+ // and give backends a chance to cleanup before exiting.
+ backendEvents.emit('error', time_stamp, metrics, err);
+ // Only needed if other backends override the standard stacktrace/exit functionality
+ process.exit(1);
+ } else {
+ backendEvents.emit('flush', time_stamp, metrics);
@mrtazz Etsy, Inc. member
mrtazz added a note

I don't think we want to kill the whole daemon just because a calculation went wrong. It could be for metrics you don't even care about, for example. I also don't think we need an extra event just for errors; the flush case in the backend should be able to handle this just fine. But I think we should update a counter like statsd.calculation_error to actually record the failure.
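
A sketch of that alternative, bumping a counter instead of exiting (the statsd.calculation_error key comes from the comment above; the surrounding wiring is assumed from stats.js):

```
pm.process_metrics(metrics_hash, flushInterval, time_stamp, function emitFlush(err, metrics) {
  if (err) {
    // record the failure so it can be monitored, but keep the daemon alive
    counters["statsd.calculation_error"] = (counters["statsd.calculation_error"] || 0) + 1;
    l.log("Error processing metrics: " + err, 'debug');
  }
  // flush either way; backends treat the metrics object as immutable
  backendEvents.emit('flush', time_stamp, metrics);
});
```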

@mrtazz
Etsy, Inc. member

I tested the changes locally. Can you update the README with the new metrics that get passed through the backend API? Then I'll merge it into master.

@mrtazz mrtazz merged commit c9e09f8 into etsy:master

1 check passed

The Travis build passed
Commits on Oct 13, 2012
  1. @draco2003
  2. @draco2003
Commits on Oct 15, 2012
  1. @draco2003
  2. @draco2003
  3. @draco2003
Commits on Oct 18, 2012
  1. @draco2003
  2. @draco2003
Commits on Oct 19, 2012
  1. @draco2003

    Simplify calculation

    draco2003 committed
Commits on Oct 22, 2012
  1. @draco2003
  2. @draco2003

    Change Error handling Logic

    draco2003 committed
Commits on Oct 31, 2012
  1. @draco2003

    Merge remote branch 'upstream/master'

    draco2003 committed with root
    Conflicts:
    	backends/graphite.js
    Merge with etsy/master
Commits on Nov 2, 2012
  1. @draco2003
  2. @draco2003
  3. @draco2003
  4. @draco2003

    Merge upstream

    draco2003 committed
  5. @draco2003
Commits on Nov 4, 2012
  1. @draco2003
Showing with 236 additions and 66 deletions.
  1. +5 −2 README.md
  2. +3 −1 backends/console.js
  3. +11 −59 backends/graphite.js
  4. +85 −0 lib/process_metrics.js
  5. +11 −3 stats.js
  6. +1 −1 test/graphite_tests.js
  7. +120 −0 test/process_metrics_tests.js
7 README.md
@@ -270,12 +270,15 @@ metrics: {
gauges: gauges,
timers: timers,
sets: sets,
+ counter_rates: counter_rates,
+ timer_data: timer_data,
pctThreshold: pctThreshold
}
```
- Each backend module is passed the same set of statistics, so a
- backend module should treat the metrics as immutable
+ The counter_rates and timer_data are precalculated statistics to simplify
+ the creation of backends. Each backend module is passed the same set of
+ statistics, so a backend module should treat the metrics as immutable
structures. StatsD will reset timers and counters after each
listener has handled the event.
4 backends/console.js
@@ -31,9 +31,11 @@ ConsoleBackend.prototype.flush = function(timestamp, metrics) {
});
var out = {
- counter: this.statsCache.counters,
+ counters: this.statsCache.counters,
timers: this.statsCache.timers,
gauges: metrics.gauges,
+ timer_data: metrics.timer_data,
+ counter_rates: metrics.counter_rates,
sets: function (vals) {
var ret = {};
for (val in vals) {
70 backends/graphite.js
@@ -55,88 +55,40 @@ var flush_stats = function graphite_flush(ts, metrics) {
var statString = '';
var numStats = 0;
var key;
-
+ var timer_data_key;
var counters = metrics.counters;
var gauges = metrics.gauges;
var timers = metrics.timers;
var sets = metrics.sets;
- var pctThreshold = metrics.pctThreshold;
+ var counter_rates = metrics.counter_rates;
+ var timer_data = metrics.timer_data;
for (key in counters) {
- var value = counters[key];
- var valuePerSecond = value / (flushInterval / 1000); // calculate "per second" rate
-
- statString += 'stats.' + key + ' ' + valuePerSecond + ' ' + ts + "\n";
- statString += 'stats_counts.' + key + ' ' + value + ' ' + ts + "\n";
+ statString += 'stats.' + key + ' ' + counter_rates[key] + ' ' + ts + "\n";
+ statString += 'stats_counts.' + key + ' ' + counters[key] + ' ' + ts + "\n";
numStats += 1;
}
- for (key in timers) {
- if (timers[key].length > 0) {
- var values = timers[key].sort(function (a,b) { return a-b; });
- var count = values.length;
- var min = values[0];
- var max = values[count - 1];
-
- var cumulativeValues = [min];
- for (var i = 1; i < count; i++) {
- cumulativeValues.push(values[i] + cumulativeValues[i-1]);
+ for (key in timer_data) {
+ if (Object.keys(timer_data).length > 0) {
+ for (timer_data_key in timer_data[key]) {
+ statString += 'stats.timers.' + key + '.' + timer_data_key + ' ' + timer_data[key][timer_data_key] + ' ' + ts + "\n";
}
- var sum = min;
- var mean = min;
- var maxAtThreshold = max;
-
- var message = "";
-
- var key2;
-
- for (key2 in pctThreshold) {
- var pct = pctThreshold[key2];
- if (count > 1) {
- var numInThreshold = Math.round(pct / 100 * count);
-
- maxAtThreshold = values[numInThreshold - 1];
- sum = cumulativeValues[numInThreshold - 1];
- mean = sum / numInThreshold;
- }
-
- var clean_pct = '' + pct;
- clean_pct.replace('.', '_');
- message += 'stats.timers.' + key + '.mean_' + clean_pct + ' ' + mean + ' ' + ts + "\n";
- message += 'stats.timers.' + key + '.upper_' + clean_pct + ' ' + maxAtThreshold + ' ' + ts + "\n";
- message += 'stats.timers.' + key + '.sum_' + clean_pct + ' ' + sum + ' ' + ts + "\n";
- }
-
- sum = cumulativeValues[count-1];
- mean = sum / count;
-
- var sumOfDiffs = 0;
- for (var i = 0; i < count; i++) {
- sumOfDiffs += (values[i] - mean) * (values[i] - mean);
- }
- var stddev = Math.sqrt(sumOfDiffs / count);
-
- message += 'stats.timers.' + key + '.std ' + stddev + ' ' + ts + "\n";
- message += 'stats.timers.' + key + '.upper ' + max + ' ' + ts + "\n";
- message += 'stats.timers.' + key + '.lower ' + min + ' ' + ts + "\n";
- message += 'stats.timers.' + key + '.count ' + count + ' ' + ts + "\n";
- message += 'stats.timers.' + key + '.sum ' + sum + ' ' + ts + "\n";
- message += 'stats.timers.' + key + '.mean ' + mean + ' ' + ts + "\n";
- statString += message;
-
numStats += 1;
}
}
for (key in gauges) {
statString += 'stats.gauges.' + key + ' ' + gauges[key] + ' ' + ts + "\n";
+
numStats += 1;
}
for (key in sets) {
statString += 'stats.sets.' + key + '.count ' + sets[key].values().length + ' ' + ts + "\n";
+
numStats += 1;
}
85 lib/process_metrics.js
@@ -0,0 +1,85 @@
+var process_metrics = function (metrics, flushInterval, ts, flushCallback) {
+ var key;
+ var counter_rates = {};
+ var timer_data = {};
+ var counters = metrics.counters;
+ var timers = metrics.timers;
+ var pctThreshold = metrics.pctThreshold;
+
+ for (key in counters) {
+ var value = counters[key];
+
+ // calculate "per second" rate
+ var valuePerSecond = value / (flushInterval / 1000);
+ counter_rates[key] = valuePerSecond;
+ }
+
+ for (key in timers) {
+ if (timers[key].length > 0) {
+ timer_data[key] = {};
+ var current_timer_data = {};
+
+ var values = timers[key].sort(function (a,b) { return a-b; });
+ var count = values.length;
+ var min = values[0];
+ var max = values[count - 1];
+
+ var cumulativeValues = [min];
+ for (var i = 1; i < count; i++) {
+ cumulativeValues.push(values[i] + cumulativeValues[i-1]);
+ }
+
+ var sum = min;
+ var mean = min;
+ var maxAtThreshold = max;
+
+ var message = "";
+
+ var key2;
+
+ for (key2 in pctThreshold) {
+ var pct = pctThreshold[key2];
+ if (count > 1) {
+ var numInThreshold = Math.round(pct / 100 * count);
+
+ maxAtThreshold = values[numInThreshold - 1];
+ sum = cumulativeValues[numInThreshold - 1];
+ mean = sum / numInThreshold;
+ }
+
+ var clean_pct = '' + pct;
+ clean_pct.replace('.', '_');
+ current_timer_data["mean_" + clean_pct] = mean;
+ current_timer_data["upper_" + clean_pct] = maxAtThreshold;
+ current_timer_data["sum_" + clean_pct] = sum;
+
+ }
+
+ sum = cumulativeValues[count-1];
+ mean = sum / count;
+
+ var sumOfDiffs = 0;
+ for (var i = 0; i < count; i++) {
+ sumOfDiffs += (values[i] - mean) * (values[i] - mean);
+ }
+ var stddev = Math.sqrt(sumOfDiffs / count);
+ current_timer_data["std"] = stddev;
+ current_timer_data["upper"] = max;
+ current_timer_data["lower"] = min;
+ current_timer_data["count"] = count;
+ current_timer_data["sum"] = sum;
+ current_timer_data["mean"] = mean;
+
+ timer_data[key] = current_timer_data;
+
+ }
+ }
+
+ //add processed metrics to the metrics_hash
+ metrics.counter_rates = counter_rates;
+ metrics.timer_data = timer_data;
+
+ flushCallback(metrics);
+ }
+
+exports.process_metrics = process_metrics
14 stats.js
@@ -6,6 +6,8 @@ var dgram = require('dgram')
, events = require('events')
, logger = require('./lib/logger')
, set = require('./lib/set')
+ , pm = require('./lib/process_metrics')
+
// initialize data structures with defaults for statsd stats
var keyCounter = {};
@@ -16,6 +18,8 @@ var counters = {
var timers = {};
var gauges = {};
var sets = {};
+var counter_rates = {};
+var timer_data = {};
var pctThreshold = null;
var debugInt, flushInterval, keyFlushInt, server, mgmtServer;
var startup_time = Math.round(new Date().getTime() / 1000);
@@ -45,6 +49,8 @@ function flushMetrics() {
gauges: gauges,
timers: timers,
sets: sets,
+ counter_rates: counter_rates,
+ timer_data: timer_data,
pctThreshold: pctThreshold
}
@@ -66,14 +72,16 @@ function flushMetrics() {
}
});
- // Flush metrics to each backend.
- backendEvents.emit('flush', time_stamp, metrics_hash);
+ pm.process_metrics(metrics_hash, flushInterval, time_stamp, function emitFlush(metrics) {
+ backendEvents.emit('flush', time_stamp, metrics);
+ });
+
};
var stats = {
messages: {
last_msg_seen: startup_time,
- bad_lines_seen: 0,
+ bad_lines_seen: 0
}
};
2 test/graphite_tests.js
@@ -240,7 +240,7 @@ module.exports = {
var mykey = 'statsd.numStats';
return _.include(_.keys(post),mykey) && (post[mykey] == 3);
};
- test.ok(_.any(hashes,numstat_test), 'statsd.numStats should be 1');
+ test.ok(_.any(hashes,numstat_test), 'statsd.numStats should be 3');
var testavgvalue_test = function(post){
var mykey = 'stats.a_test_value';
120 test/process_metrics_tests.js
@@ -0,0 +1,120 @@
+var pm = require('../lib/process_metrics')
+
+module.exports = {
+ setUp: function (callback) {
+ this.time_stamp = Math.round(new Date().getTime() / 1000);
+
+ var counters = {};
+ var gauges = {};
+ var timers = {};
+ var sets = {};
+ var pctThreshold = null;
+
+ this.metrics = {
+ counters: counters,
+ gauges: gauges,
+ timers: timers,
+ sets: sets,
+ pctThreshold: pctThreshold
+ }
+ callback();
+ },
+ counters_has_stats_count: function(test) {
+ test.expect(1);
+ this.metrics.counters['a'] = 2;
+ pm.process_metrics(this.metrics, 1000, this.time_stamp, function(){});
+ test.equal(2, this.metrics.counters['a']);
+ test.done();
+ },
+ counters_has_correct_rate: function(test) {
+ test.expect(1);
+ this.metrics.counters['a'] = 2;
+ pm.process_metrics(this.metrics, 100, this.time_stamp, function(){});
+ test.equal(20, this.metrics.counter_rates['a']);
+ test.done();
+ },
+ timers_handle_empty: function(test) {
+ test.expect(1);
+ this.metrics.timers['a'] = [];
+ pm.process_metrics(this.metrics, 100, this.time_stamp, function(){});
+ //potentially a cleaner way to check this
+ test.equal(undefined, this.metrics.counter_rates['a']);
+ test.done();
+ },
+ timers_single_time: function(test) {
+ test.expect(6);
+ this.metrics.timers['a'] = [100];
+ pm.process_metrics(this.metrics, 100, this.time_stamp, function(){});
+ timer_data = this.metrics.timer_data['a'];
+ test.equal(0, timer_data.std);
+ test.equal(100, timer_data.upper);
+ test.equal(100, timer_data.lower);
+ test.equal(1, timer_data.count);
+ test.equal(100, timer_data.sum);
+ test.equal(100, timer_data.mean);
+ test.done();
+ },
+ timers_multiple_times: function(test) {
+ test.expect(6);
+ this.metrics.timers['a'] = [100, 200, 300];
+ pm.process_metrics(this.metrics, 100, this.time_stamp, function(){});
+ timer_data = this.metrics.timer_data['a'];
+ test.equal(81.64965809277261, timer_data.std);
+ test.equal(300, timer_data.upper);
+ test.equal(100, timer_data.lower);
+ test.equal(3, timer_data.count);
+ test.equal(600, timer_data.sum);
+ test.equal(200, timer_data.mean);
+ test.done();
+ },
+ timers_single_time_single_percentile: function(test) {
+ test.expect(3);
+ this.metrics.timers['a'] = [100];
+ this.metrics.pctThreshold = [90];
+ pm.process_metrics(this.metrics, 100, this.time_stamp, function(){});
+ timer_data = this.metrics.timer_data['a'];
+ test.equal(100, timer_data.mean_90);
+ test.equal(100, timer_data.upper_90);
+ test.equal(100, timer_data.sum_90);
+ test.done();
+ },
+ timers_single_time_multiple_percentiles: function(test) {
+ test.expect(6);
+ this.metrics.timers['a'] = [100];
+ this.metrics.pctThreshold = [90, 80];
+ pm.process_metrics(this.metrics, 100, this.time_stamp, function(){});
+ timer_data = this.metrics.timer_data['a'];
+ test.equal(100, timer_data.mean_90);
+ test.equal(100, timer_data.upper_90);
+ test.equal(100, timer_data.sum_90);
+ test.equal(100, timer_data.mean_80);
+ test.equal(100, timer_data.upper_80);
+ test.equal(100, timer_data.sum_80);
+ test.done();
+ },
+ timers_multiple_times_single_percentiles: function(test) {
+ test.expect(3);
+ this.metrics.timers['a'] = [100, 200, 300];
+ this.metrics.pctThreshold = [90];
+ pm.process_metrics(this.metrics, 100, this.time_stamp, function(){});
+ timer_data = this.metrics.timer_data['a'];
+ test.equal(200, timer_data.mean_90);
+ test.equal(300, timer_data.upper_90);
+ test.equal(600, timer_data.sum_90);
+ test.done();
+ },
+ timers_multiple_times_multiple_percentiles: function(test) {
+ test.expect(6);
+ this.metrics.timers['a'] = [100, 200, 300];
+ this.metrics.pctThreshold = [90, 80];
+ pm.process_metrics(this.metrics, 100, this.time_stamp, function(){});
+ timer_data = this.metrics.timer_data['a'];
+ test.equal(200, timer_data.mean_90);
+ test.equal(300, timer_data.upper_90);
+ test.equal(600, timer_data.sum_90);
+ test.equal(150, timer_data.mean_80);
+ test.equal(200, timer_data.upper_80);
+ test.equal(300, timer_data.sum_80);
+ test.done();
+ }
+}
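
As a sanity check on the multiple-percentile expectations above, the threshold arithmetic from lib/process_metrics.js works out as follows for the [100, 200, 300] timer at the 80th percentile (the standalone variables here are only for illustration):

```
var values = [100, 200, 300];                                // already sorted
var pct = 80;
var numInThreshold = Math.round(pct / 100 * values.length);  // Math.round(2.4) === 2
var upper_80 = values[numInThreshold - 1];                   // 200
var sum_80 = values[0] + values[1];                          // cumulative sum within the threshold: 300
var mean_80 = sum_80 / numInThreshold;                       // 150
```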