Implement federation (timeseries streaming) #9

Comments
juliusv added a commit that referenced this issue on Apr 9, 2014
matttproud added a commit that referenced this issue on Apr 9, 2014
One idea would be to do this via console templates: we could add a function that takes the output of a query and produces the text/protobuf format. We'd need some hook to set the content type too.
@brian-brazil This would certainly be possible, but since this is an integral feature, it arguably deserves its own specialized and optimized implementation and endpoint, no?
A separate endpoint would be best.
A way to do this via console templates, until we've got a full-on solution: https://github.com/prometheus/prometheus/blob/master/consoles/federation_template_example.txt
The solution might include "streaming" as in "transfer more than one timestamped sample per time series during one scrape of a lower-level Prometheus server by a higher-level one".
I'm a little wary of doing more than one value. The main reason you'd need that would be if a previous scrape failed, and requesting more data from a server that failed last time may lead to a cascading failure.
multilinear commented May 27, 2015

There are two common use-cases for federation:

It's generally important to monitor a target from "nearby": you want to run Prometheus as close to the target, in the network sense, as possible. It's generally a good idea to run it in the same failure domain as well, so that your monitoring goes down exactly when your system goes down rather than independently of it. This helps avoid your system being up while your monitoring is down, minimizes the impact of netsplits on monitoring, and so on. With multiple zones, though, it's often useful to cross-correlate data across those zones, so you'd use federation to pull the data into a "global"-level Prometheus.

In that case it would be fairly common for a scrape to fail due to a network-level event (fiber cut, router failure, etc.), and it kind of sucks to just lose that data from your global-level Prometheus instance when it still exists in the lower-level monitoring. Note that in the Prometheus model there isn't a global store to pull from, so if the data isn't in that top level right now, you'll never get it there. You'd end up having to do periodic dumps and imports from your lower-level Promethei to fill in holes caused by network outages... ick :(.

I'd suggest pulling data in a more "streaming" fashion with a bounded window. The default bound can be relatively small to avoid the cascading problem; this way it should at least be able to bridge small network "glitches" like those frequently seen on intercontinental links. If someone wants to expose themselves to cascade failures to handle a cruddy network, they could extend the window if desired.
multilinear commented May 27, 2015

Oh, also, this way you can handle high-frequency data without having a high-frequency poll at the federation layer.
I don't think a bounded window is sufficient to prevent cascading failures: even if it now requests at most two data points, the load on the slave Prometheus server could double in an outage, which would be bad. My experience is that gaps due to small network blips don't usually cause problems in practice.

I'd try to avoid putting anything critical in a global Prometheus, due to the fundamental unreliability of the WAN (and data appearing a bit back in time may cause weirdness with rules). It's more for general information, with the per-cluster/failure-domain Prometheus servers being the place you usually go to first.
multilinear commented May 27, 2015

What about higher-frequency data? It seems the scrapes will have to happen at least as fast as the fastest scrape that the lower-level Prometheus is doing, which, assuming Prometheus is as well written as I think it is (I'm new to the community), could be very, very fast.
At the global level, high-frequency data is much less useful than at a local level. High-frequency data (on the order of seconds) is primarily useful for debugging things like microbursts, for which you usually want to look at a handful of variables in roughly one datacenter at a time, and for reducing the impact of the various race conditions inherent in monitoring. At a global level you tend to want a wide range of metrics at no more than one-minute granularity.

A well-instrumented server will tend to have hundreds to thousands of metrics, and many thousands of time series. Doing scrapes more often will make you run into performance problems sooner without much benefit from the increased frequency; rather, it's the breadth of instrumentation that helps you pin down all but the microburst-level issues. If anything, you'd be looking at downsampling a bit at the global level.
beorn7 referenced this issue on Jun 12, 2015: Consistent backups of a running Prometheus should be possible #651 (closed)
This has been implemented: http://prometheus.io/docs/operating/federation/
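For reference, the implemented federation is configured on the scraping (higher-level) side. The sketch below is adapted from the linked docs, with a placeholder job name and target host; the repeated `match[]` parameters select which series the lower-level server exposes on its `/federate` endpoint:

```yaml
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true        # keep the labels exposed by the source server
    metrics_path: '/federate'
    params:
      'match[]':              # series selectors; only matching series are federated
        - 'up'
        - '{job="node"}'
    static_configs:
      - targets:
        - 'source-prometheus:9090'   # placeholder lower-level server
```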
brian-brazil closed this on Aug 20, 2015

ChaoticMind referenced this issue on Aug 27, 2015: Remove 'Hierarchical federation' from roadmap (it's now implemented) #185 (merged)

juliusv pushed a commit that referenced this issue on Jul 3, 2016
simonpasquier referenced this issue in simonpasquier/prometheus on Oct 12, 2017
cofyc added a commit to cofyc/prometheus that referenced this issue on Jun 5, 2018
simonpasquier referenced this issue in simonpasquier/prometheus on Jul 20, 2018
bobmshannon pushed a commit to bobmshannon/prometheus that referenced this issue on Nov 19, 2018
lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
juliusv commented Jan 4, 2013

It should be possible to efficiently stream timeseries from one Prometheus instance to another, with the exchanged series determined based on a federation configuration.
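For illustration, this is roughly what a pull from the eventual `/federate` endpoint looks like from a client's point of view. `federate_url` is a hypothetical helper; the endpoint path and its repeated `match[]` query parameter are from the federation docs linked above.

```python
from urllib.parse import urlencode

def federate_url(base, matchers):
    """Build a /federate URL selecting the given series matchers.
    Each matcher is passed as a repeated 'match[]' query parameter."""
    query = urlencode([("match[]", m) for m in matchers])
    return f"{base}/federate?{query}"

# Select the 'up' series plus everything from job="node":
url = federate_url("http://source-prometheus:9090", ["up", '{job="node"}'])
print(url)
# http://source-prometheus:9090/federate?match%5B%5D=up&match%5B%5D=%7Bjob%3D%22node%22%7D
```

Fetching that URL with any HTTP client returns the matching series in the text exposition format, ready to be scraped by a higher-level server.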