
Performance refactor of running_output buffers #1096

Merged: 1 commit merged into master from metric-buffer-refactor on Apr 27, 2016

Conversation

@sparrc (Contributor) commented Apr 26, 2016

This is a WIP. TODO:

  • Unit test ordering sanity in running_output_test.go
  • Write unit tests for buffer.go
  • Determine best way to log metric drops
  • Track internal buffer metrics, and log occasionally

Benchmarks are looking good so far, ~1.8x faster on Add & Write:

this PR:

% go test ./internal/models/... -bench=. -run=Benchmark
PASS
BenchmarkRunningOutputAddWrite-4            10000000           222 ns/op

same benchmark on master:

% go test ./internal/models/... -bench=. -run=Benchmark
PASS
BenchmarkRunningOutputAddWrite-4     5000000           395 ns/op
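
For readers unfamiliar with Go benchmarks, the ns/op figures above are nanoseconds per Add+Write pair. Below is a minimal, self-contained sketch of how a benchmark shaped like the one above is structured; the stub type is a placeholder for illustration only, not telegraf's RunningOutput.

// sketch_test.go -- self-contained sketch of a benchmark shaped like
// BenchmarkRunningOutputAddWrite above. The stub type is a placeholder,
// not the real RunningOutput; it only shows what the loop is timing.
package sketch

import "testing"

type metric struct{ name string }

// stubOutput stands in for a running output: Add buffers a metric,
// Write drains the current buffer as one batch.
type stubOutput struct{ buf []metric }

func (s *stubOutput) Add(m metric) { s.buf = append(s.buf, m) }

func (s *stubOutput) Write() []metric {
	batch := s.buf
	s.buf = s.buf[:0]
	return batch
}

func BenchmarkAddWrite(b *testing.B) {
	out := &stubOutput{}
	m := metric{name: "metric1"}
	for n := 0; n < b.N; n++ { // ns/op is the cost of one Add+Write
		out.Add(m)
		out.Write()
	}
}

It runs the same way as the commands above: go test -bench=. -run=Benchmark ./...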

@sparrc force-pushed the metric-buffer-refactor branch 4 times, most recently from f4210f7 to 6b74b99, on April 26, 2016 at 17:45
@sparrc changed the title from "Significant performance refactor of running_output buffers" to "Performance refactor of running_output buffers" on Apr 26, 2016
@sparrc force-pushed the metric-buffer-refactor branch 3 times, most recently from b0b43bf to 92563e4, on April 26, 2016 at 22:12
@sparrc (Contributor, Author) commented Apr 26, 2016

@PierreF Could you review these changes?

Externally, the usage of running_output is the same as with your changes, but internally I've changed it to use a circular buffer built on channels. This not only reduces processing time (no locks are required), it also reduces allocations, since a slice only needs to be allocated when a caller requests a batch to be written.
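
To make that concrete for reviewers, here is a rough, hedged sketch of the idea (not the PR's actual buffer.go, and single-consumer only): a buffered channel acts as the fixed-size circular buffer, Add needs no lock and drops the oldest metric when full, and a slice is only allocated when a batch is requested.

// Illustrative sketch only -- not the code in this PR. A buffered channel
// serves as a fixed-capacity circular buffer: Add takes no lock, and when
// the channel is full the oldest metric is dropped to make room. A slice
// is allocated only when a caller asks for a batch to write.
package sketch

type Metric interface{} // placeholder for telegraf.Metric

type Buffer struct {
	buf   chan Metric
	drops int // count of metrics dropped because the buffer was full
}

func NewBuffer(size int) *Buffer {
	return &Buffer{buf: make(chan Metric, size)}
}

// Add buffers metrics, overwriting the oldest ones when the buffer is full.
func (b *Buffer) Add(metrics ...Metric) {
	for _, m := range metrics {
		select {
		case b.buf <- m:
		default:
			<-b.buf // full: drop the oldest metric
			b.drops++
			b.buf <- m
		}
	}
}

// Batch drains up to batchSize metrics; this is the only place a slice
// gets allocated (assumes a single consumer calls Batch).
func (b *Buffer) Batch(batchSize int) []Metric {
	n := len(b.buf)
	if batchSize < n {
		n = batchSize
	}
	out := make([]Metric, n)
	for i := 0; i < n; i++ {
		out[i] = <-b.buf
	}
	return out
}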

@sparrc force-pushed the metric-buffer-refactor branch 3 times, most recently from 6d17e56 to 97a2c2f, on April 26, 2016 at 23:44
@PierreF (Contributor) commented Apr 27, 2016

I didn't hit any issues with this PR; I've been using it since this morning (UTC+2 here). Metrics were always ordered and the capacity matched the configuration.

The strangest thing I saw was "buffer fullness: 1083 / 1000 metrics", since in fact it can buffer up to metric_batch_size + metric_buffer_limit. But I don't think it's worth changing anything.

@sparrc (Contributor, Author) commented Apr 27, 2016

@PierreF Yes, that was by design. I think it makes it clearer that we are currently over the buffer limit, and that if a write fails then metrics will be dropped.

I originally set the denominator to MetricBufferLimit + MetricBatchSize, but I felt that was even more alarming, because then metrics could be dropped even though it appeared that the buffer was not full.
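
As a worked illustration of the arithmetic behind the "1083 / 1000" log line (using assumed config values and an assumed breakdown, not a trace of the real code):

// Hypothetical illustration of why "buffer fullness" can exceed the
// configured limit: the numerator counts everything currently held
// (for example a partial batch whose write failed plus a full overflow
// buffer), while the denominator is only MetricBufferLimit.
package main

import "fmt"

func main() {
	const (
		metricBatchSize   = 1000 // assumed config values
		metricBufferLimit = 1000
	)
	failedBatch := 83 // assumed: a partially filled batch whose write failed
	fullness := failedBatch + metricBufferLimit
	fmt.Printf("buffer fullness: %d / %d metrics\n", fullness, metricBufferLimit)
	// In the worst case the process holds metricBatchSize+metricBufferLimit metrics:
	fmt.Printf("worst case: %d / %d metrics\n",
		metricBatchSize+metricBufferLimit, metricBufferLimit)
}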

@sparrc merged commit 4de75ce into master on Apr 27, 2016
@sparrc deleted the metric-buffer-refactor branch on April 27, 2016 at 21:23