Prometheus cannot send > ~25k/s to Influxdb #1199
dan-cleinmark commented Nov 4, 2015

Following up from IRC:

I have a 4-node InfluxDB cluster behind an ELB. When I configure a single Prometheus instance to send to this cluster, it can sustain at most around 25k samples per second before the remote storage queue blocks and samples start being dropped. Increasing maxSamplesPerSend (10 -> 50) and maxConcurrentSends (100 -> 1000) in storage/remote/queue_manager.go didn't seem to help (master...dan-cleinmark:increase_remote_maxSamplesPerSend).

The bottleneck does not appear to be on the InfluxDB side, since I can push 75k samples per second to the same cluster from five Prometheus instances at 15k/s each. Looking at the ELB stats while running 0086d48, I see ~2300 HTTP requests per minute through the ELB with an average latency of ~75 ms.

Are there other parameters that could be tweaked to improve performance when sending to InfluxDB from a single Prometheus instance?
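A quick way to sanity-check whether those two constants are the limiting factor is to work out the throughput ceiling they imply. The Go sketch below is an editor's back-of-envelope model, not code from the Prometheus tree: it assumes each batch goes out in its own synchronous HTTP request taking roughly the ~75 ms average latency reported at the ELB, and it reuses the constant names and values quoted above.

// throughput_ceiling.go: rough model of the remote-write send rate,
// assuming every batch is one synchronous HTTP request of ~75 ms.
package main

import (
	"fmt"
	"time"
)

// ceiling returns the best-case samples per second for a given number of
// concurrent senders, batch size, and per-request latency.
func ceiling(maxConcurrentSends, maxSamplesPerSend int, latency time.Duration) float64 {
	return float64(maxConcurrentSends) * float64(maxSamplesPerSend) / latency.Seconds()
}

func main() {
	latency := 75 * time.Millisecond // average request latency seen at the ELB

	// Defaults quoted from storage/remote/queue_manager.go in the report.
	fmt.Printf("defaults (100 senders x 10 samples): %.0f samples/s\n", ceiling(100, 10, latency))

	// Values from the increase_remote_maxSamplesPerSend branch.
	fmt.Printf("patched (1000 senders x 50 samples): %.0f samples/s\n", ceiling(1000, 50, latency))
}

Under this simple model the defaults already allow roughly 13k samples/s and the patched values several hundred thousand, yet the observed ceiling stays near 25k/s in both cases. That suggests the limit sits somewhere other than these two constants (for example in how the shared remote-storage queue is drained), which is consistent with the report that raising them made no difference.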
Comments

Running without the changes to
brian-brazil added the bug label Dec 16, 2015
I think it's fair to say that we won't pursue improving the existing InfluxDB integration any further – especially in light of clustering being closed source now.
fabxc closed this Apr 6, 2016
I note that I managed to get 100k/s out of the generic write path, so it's unclear that there's a problem here on the Prometheus end.
lock bot commented Mar 24, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.