Federation errors logged so often disk was filled #1042
Comments
TheTincho commented:
The error seems to appear when a metric is not being updated on the leaf Prometheus server, because the target is down. Unless I am missing something here, this looks like a bug: it is not that the timestamp is going back in time, it is always the same value.
fabxc commented:
Are you running 0.15.1? This should no longer happen in master.
TheTincho commented:
Yes, 0.15.1 on all machines.
fabxc commented:
That indicates a misconfiguration then. Could you be pulling in the same time series from two targets?
TheTincho commented:
Ah, that might be the problem then. I am scraping the leaf Prometheus and its node_exporter via federation, while the leaf Prometheus also scrapes itself and its node_exporter. I did not think this would be a problem. In any case, I believe the logging issue might deserve some attention, as the growth rate is alarming :)
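The double ingestion described here might look like the following on the federating server (a minimal sketch of the reported setup; hostnames, ports, and the match[] selector are made up):

```yaml
# Federating (global) Prometheus — hypothetical config illustrating the issue.
scrape_configs:
  # Direct scrape of the leaf machine's node_exporter...
  - job_name: node
    static_configs:
      - targets: ['leaf.example.com:9100']
  # ...and federation of the leaf Prometheus, which also scrapes that same
  # node_exporter. The same series now arrives from two targets.
  - job_name: federate
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]': ['{job=~".+"}']
    static_configs:
      - targets: ['leaf.example.com:9090']
```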
fabxc commented:
Do you mean in other places too, or just this instance? As I said, this should no longer happen in master.
TheTincho commented:
No, I just meant this instance; if it is already fixed post-0.15.1, then this can be closed.
fabxc commented:
To be clear, this is not necessarily a misconfiguration. If your federation frequency is higher than that of some federated time series, this warning would still be logged (in 0.15.1). Just ping again should this not be resolved with the next version for some reason.
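The frequency-mismatch case can be sketched with a toy model of an append-only store that, like the storage layer described above, rejects a sample whose timestamp is not newer than the last one for the same series (hypothetical code, not Prometheus's actual implementation):

```python
def ingest(samples):
    """Return (accepted, warnings) for a stream of (timestamp, value) samples
    belonging to one series.

    A sample whose timestamp is not strictly newer than the previously
    accepted one is rejected with a warning. If the federating server
    scrapes faster than the leaf series updates, it re-ingests the same
    (timestamp, value) pair repeatedly and warns on every extra scrape,
    even though nothing is "going back in time".
    """
    accepted, warnings = [], []
    last_ts = None
    for ts, value in samples:
        if last_ts is not None and ts <= last_ts:
            warnings.append(f"sample timestamp out of order or duplicate: {ts}")
            continue
        accepted.append((ts, value))
        last_ts = ts
    return accepted, warnings
```

For example, a leaf series that updates every 60s, federated every 15s, yields four identical samples per update: one is accepted and three produce warnings.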
fabxc closed this Aug 31, 2015
lock bot commented Mar 24, 2019:
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Original report (TheTincho commented Aug 30, 2015):
Hi,
Since I enabled federation in a group of servers, the log started to fill up pretty fast with errors like this one:
I don't know why I get this error in the first place, but the main problem is that these messages were logged several times per second, and my /var partition got full in no time. I think they should be rate limited, or somehow bounded, so this does not happen again.
Thanks.
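The rate limiting asked for here could look like the following sketch (a hypothetical illustration of bounding repeated messages, not the mitigation Prometheus actually shipped):

```python
import time


class RateLimitedLogger:
    """Suppress repeats of the same message within a time window.

    Identical messages arriving within `min_interval_s` of the last
    emitted copy are dropped; the next emitted copy reports how many
    repeats were suppressed, so the log stays bounded but lossless
    in aggregate.
    """

    def __init__(self, min_interval_s=60.0, clock=time.monotonic):
        self.min_interval_s = min_interval_s
        self.clock = clock          # injectable for testing
        self._last_emit = {}        # message -> time of last emitted copy
        self._suppressed = {}       # message -> repeats dropped since then

    def log(self, message):
        """Return the line to write, or None if the message was suppressed."""
        now = self.clock()
        last = self._last_emit.get(message)
        if last is not None and now - last < self.min_interval_s:
            self._suppressed[message] = self._suppressed.get(message, 0) + 1
            return None
        dropped = self._suppressed.pop(message, 0)
        self._last_emit[message] = now
        if dropped:
            return f"{message} (suppressed {dropped} repeats)"
        return message
```

With a 60s window, a warning fired several times per second would be written at most once a minute, with a count of the dropped copies.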