
Improve InfluxDB through-put performance #2251

Merged (2 commits) Jan 26, 2017
Conversation

@sparrc (Contributor) commented Jan 11, 2017

This changes the current use of the InfluxDB client to instead use a
baked-in client that uses the fasthttp library.

This allows for significantly smaller allocations, the re-use of http
body buffers, and the re-use of the actual bytes of the line-protocol
metric representations.
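The body-buffer re-use described above can be sketched with a `sync.Pool` in the standard library. This is a minimal illustration, not telegraf's actual client; `bodyPool` and `serializeBatch` are hypothetical names:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bodyPool recycles request-body buffers between writes: a minimal sketch
// of the "re-use of http body buffers" described above (names are
// illustrative, not telegraf's real ones).
var bodyPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// serializeBatch appends already-encoded line-protocol metrics into a pooled
// buffer, so the metric bytes themselves are reused rather than re-marshalled.
func serializeBatch(metrics [][]byte) *bytes.Buffer {
	buf := bodyPool.Get().(*bytes.Buffer)
	buf.Reset()
	for _, m := range metrics {
		buf.Write(m)
		buf.WriteByte('\n')
	}
	return buf
}

func main() {
	batch := [][]byte{
		[]byte(`cpu,host=a usage=0.5 1485300000000000000`),
		[]byte(`cpu,host=b usage=0.7 1485300000000000000`),
	}
	body := serializeBatch(batch)
	fmt.Println(bytes.Count(body.Bytes(), []byte("\n"))) // prints 2
	bodyPool.Put(body) // hand the buffer back for the next batch
}
```

Pooling the buffer means each batch write can reuse the previous batch's allocation instead of growing a fresh buffer every time.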

Required for all PRs:

  • CHANGELOG.md updated (we recommend not updating this until the PR has been approved by a maintainer)
  • Sign CLA (if not already signed)
  • README.md updated (if adding a new plugin)

@sebito91 (Contributor) commented:
This looks awesome! By any chance, did you note what kind of improvement this library gave you? Is it really worthwhile to replace net/http requests with fasthttp?

@sparrc (Contributor, Author) commented Jan 11, 2017

Yes, I'm working on getting together some performance benchmarks right now :)

@sparrc (Contributor, Author) commented Jan 11, 2017

Ah, I misread your comment, I have not directly compared using net/http vs. fasthttp when writing this influxdb client. I was largely relying on ad-hoc testing done while we made the write client for https://github.com/influxdata/influx-stress.

Also, the first question in the fasthttp FAQ is very relevant. The patterns in the 1st and 3rd bullet points create significant allocation overhead, which fasthttp avoids by providing buffer re-use functions, see here and here.

That being said, it's the request body that has the memory overhead and I'm not sure how much each "request" structure itself really matters. I'm going to try to refactor to use net/http to see what the exact difference would be in performance, as I don't really like adding another third-party dependency if we don't have to.
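One way to put numbers on that question is `testing.AllocsPerRun` from the standard library. A rough sketch under stated assumptions (the URL is a placeholder and `allocCompare` is a made-up name, not anything in telegraf), showing that even under net/http a shared `bytes.Reader` removes the per-write reader allocation while building a fresh request still allocates:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"testing"
)

// allocCompare measures average heap allocations for building a fresh
// net/http request per write versus resetting one shared bytes.Reader.
// This is a toy comparison, not a benchmark of telegraf itself.
func allocCompare(payload []byte) (perRequest, reusedReader float64) {
	perRequest = testing.AllocsPerRun(100, func() {
		r := bytes.NewReader(payload)
		req, err := http.NewRequest("POST", "http://localhost:8086/write", r)
		if err != nil {
			panic(err)
		}
		_ = req
	})

	shared := bytes.NewReader(nil)
	reusedReader = testing.AllocsPerRun(100, func() {
		shared.Reset(payload) // no allocation: the reader is reused
	})
	return perRequest, reusedReader
}

func main() {
	fresh, reused := allocCompare([]byte("cpu,host=a usage=0.5\n"))
	fmt.Println(fresh > reused) // prints true
}
```

The comparison suggests most of the fixed per-request cost sits in request construction, not in the body itself, which is the overhead the fasthttp FAQ is pointing at.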

@sparrc (Contributor, Author) commented Jan 11, 2017

This change currently doubles write throughput compared with master (writing to a single influxdb instance):

fasthttp:

$ influx-stress insert --fast --batch-size 1000 --points 10000000 --host http://localhost:8186
...
Write Throughput: 242383
Points Written: 10000000

master:

$ influx-stress insert --fast --batch-size 1000 --points 10000000 --host http://localhost:8186
...
Write Throughput: 122554
Points Written: 10000000

@sparrc (Contributor, Author) commented Jan 11, 2017

Surprisingly, comparing fasthttp against a net/http version of this same client, I'm actually getting only very slightly improved throughput, but much worse GC behavior.

With fasthttp, telegraf needs to run ~105 GC cycles to handle 4,000,000 metrics; with net/http it only needs ~31.

Not sure why that is exactly, but I suspect that the standard library might have a more efficient way of reusing buffers when handling io.Reader bodies.
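GC-cycle counts like the ~105 vs. ~31 above can be read straight out of `runtime.MemStats`. A minimal sketch, assuming a toy allocation loop as a stand-in for telegraf's write path (`countGC` and `sink` are illustrative names; the numbers it prints are not the PR's):

```go
package main

import (
	"fmt"
	"runtime"
)

// sink forces allocations to escape to the heap so the GC actually sees them;
// without it the compiler could keep the slices on the stack.
var sink []byte

// countGC reports how many GC cycles a workload triggers, the kind of
// measurement behind the cycle-count comparison above.
func countGC(work func()) uint32 {
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)
	work()
	runtime.ReadMemStats(&after)
	return after.NumGC - before.NumGC
}

func main() {
	cycles := countGC(func() {
		for i := 0; i < 1000000; i++ {
			sink = make([]byte, 1024) // one heap allocation per "metric"
		}
	})
	fmt.Println(cycles > 0) // prints true
}
```

Comparing the same workload with and without buffer pooling is enough to reproduce this kind of GC-pressure difference in isolation.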

@sebito91 (Contributor) commented:
Hmm, thanks for the detailed feedback. Increased GC seems somewhat counterintuitive with fasthttp, since it's meant to reuse buffers all day long. The doubled throughput is incredibly interesting and would warrant (IMHO) a discussion with valyala.

If you run a flamegraph (thanks @jwilder) with fasthttp, do you see lots of activity in (*Response).AppendBodyString, as in valyala/fasthttp#155?

@sparrc (Contributor, Author) commented Jan 11, 2017

@sebito91 that was comparing it with current master, which uses a different client and spends a lot of CPU time marshalling points from bytes to the influxdb point object and back to bytes.

@sparrc sparrc changed the title Improve the InfluxDB through-put performance Improve InfluxDB through-put performance Jan 11, 2017
@sparrc sparrc added this to the 1.3.0 milestone Jan 11, 2017
@sparrc sparrc force-pushed the influxdb-fasthttp-client branch 2 times, most recently from 87d58b9 to 1b17842 Compare January 25, 2017 22:51