fetching data from pinba #1566
Comments
Don't avoid it, embrace it. For Prometheus to work well, export these as counters and simply increment them for every new request. "Requests since the last scrape" is not a concept Prometheus understands at all; on the contrary, Prometheus wants to see "requests over the lifetime of this instance". Scraping should never change the metrics themselves (aside from metrics about the scrape itself). This model has several advantages.

In this model, if there has only ever been one request to the rare endpoint, that's fine – Prometheus still knows, and should know, that this endpoint exists.
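For illustration, a minimal sketch of that approach with the Go client (client_golang); the metric name, port, polling interval, and the `pollPinba` helper are all made up here, and how the per-endpoint hit counts are actually pulled out of pinba is left open:

```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Counters only ever go up; Prometheus derives rates from them at query time.
var requestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "app_requests_total", // hypothetical metric name
		Help: "Requests per endpoint over the lifetime of this exporter.",
	},
	[]string{"endpoint"},
)

// pollPinba is assumed to return per-endpoint hit counts observed since the
// previous poll; endpoints with no traffic may simply be missing from the map.
func pollPinba() map[string]float64 { /* query pinba here */ return nil }

func main() {
	prometheus.MustRegister(requestsTotal)

	go func() {
		for {
			for endpoint, hits := range pollPinba() {
				// Accumulate; never reset or delete the series between scrapes.
				requestsTotal.WithLabelValues(endpoint).Add(hits)
			}
			time.Sleep(30 * time.Second)
		}
	}()

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9100", nil))
}
```

The key point is that the counter is only ever incremented: whether and how often anyone scrapes has no effect on the exported values, and a series that has seen a single request keeps its count instead of disappearing.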
Or, put differently: Graphite forces you to deal with "current values", but for Prometheus these are not the right way to think about your metrics. Counters work much better, and the query language makes them much easier to deal with than Graphite's does. Use them.
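Purely as an illustration of the query side (the Prometheus address and the `app_requests_total` metric name are assumptions carried over from the sketch above), the "current" per-second rate of an endpoint is just a PromQL expression over the counter; here it is issued through the Go API client:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		log.Fatal(err)
	}
	promAPI := v1.NewAPI(client)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Per-second rate over the last 5 minutes, derived from the counter.
	// There is no need to export "requests since the last scrape" at all.
	result, warnings, err := promAPI.Query(ctx,
		`rate(app_requests_total{endpoint="/rare_endpoint"}[5m])`, time.Now())
	if err != nil {
		log.Fatal(err)
	}
	if len(warnings) > 0 {
		log.Println("warnings:", warnings)
	}
	fmt.Println(result)
}
```

In practice you would usually type that `rate(...)` expression straight into the Prometheus console or a dashboard; the point is that rates are computed at query time from the ever-increasing counter, rather than being pre-computed by the exporter.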
grobie added the question label Apr 19, 2016
fabxc added kind/question and removed question labels Apr 28, 2016
brian-brazil closed this Jul 13, 2016
lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
mkabischev commented Apr 19, 2016 • edited
Hi. I have a PHP application that sends metrics to [pinba](http://pinba.org/). Since pinba is an in-memory engine for MySQL and stores only hot data, the metrics need to be stored somewhere else. We currently use Graphite, but we really don't like it, so I'm trying to migrate to Prometheus, but I have run into a problem.

For example, we have two endpoints: /popular_endpoint and /rare_endpoint. The first is really popular, with hundreds of hits per second; the second is so rare that it may be hit once per hour. To transfer data from pinba to Prometheus we wrote an exporter with the Go client, using gauges. So we export two metrics (one per endpoint).

On the next scrape (30s later) we don't export the second metric, because there is no data about /rare_endpoint in pinba, but in Prometheus it is still present with its last value (1) for about 5 minutes, which is not what we want. How can I change this time (5m), or what should I do to avoid this problem?
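For concreteness, a rough sketch (again using the Go client, with the same hypothetical names as above) of the gauge-based export described here, which is the pattern that produces the stale /rare_endpoint series:

```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// A gauge overwritten on every pinba poll with "hits since the last poll".
var requestsPerInterval = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "app_requests_per_interval", // hypothetical metric name
		Help: "Requests seen per endpoint during the last pinba polling interval.",
	},
	[]string{"endpoint"},
)

// pollPinba is assumed to return per-endpoint hit counts for the last interval;
// an endpoint with no traffic does not appear in the map at all.
func pollPinba() map[string]float64 { /* query pinba here */ return nil }

func main() {
	prometheus.MustRegister(requestsPerInterval)

	go func() {
		for {
			for endpoint, hits := range pollPinba() {
				// /rare_endpoint is only Set() when it had traffic; once it stops
				// appearing here, queries keep returning its last value until the
				// staleness window (about 5 minutes) expires, which is exactly
				// the behaviour described in this issue.
				requestsPerInterval.WithLabelValues(endpoint).Set(hits)
			}
			time.Sleep(30 * time.Second)
		}
	}()

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9100", nil))
}
```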