Client tries to sync config (downtimes) back to Master #4969
Removing the 'discard' messages by changing the log level will happen with #4930.
Those messages should be silently discarded instead of being checked.
Re-opening this issue as discussed with @Thomas-Gelf @lazyfrosch @lippserd - the original problem is that the satellite sends those downtime/comment messages to the parent and does not discard them locally.
Thank you for the quick response. Why does a client or satellite even try to send config updates to the upper/parent cluster zone? From my understanding, in top-down config mode configuration flows only one way: downwards. What did I miss here?
Objects can be created at runtime, e.g. a Downtime object. Such a config object may still be synced to the top (at least that was planned, to fix the current non-working behaviour). We agreed not to allow sending such objects to the parent zone members, and to maybe enable comment/downtime sync to the top later on with specific ACLs, if any.
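For illustration, this is roughly what such a runtime-created Downtime object looks like once it lands in the _api package (a sketch; names, timestamps and the stage directory are made up):

```
// e.g. /var/lib/icinga2/api/packages/_api/<stage>/conf.d/downtimes/...
// (hypothetical object, attribute values invented)
object Downtime "web01!ping4!downtime-1" {
  host_name = "web01"
  service_name = "ping4"
  author = "icingaadmin"
  comment = "Planned maintenance"
  start_time = 1485934526
  end_time = 1485938126
  fixed = true
}
```

It is these generated objects, not the static configuration, that the satellite currently tries to replay to its parent zone.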
The issue with the non-working behaviour is described here: #3719
TODO notes:
Okay, thank you for the clarification. I can see the need to have icingaweb2 running on a satellite which uses the local command pipe to create acknowledgements, downtimes, etc. that get synced to the master zone and show up in the icingaweb there. Did I get this right, that ignoring those config updates was introduced in 2.4 and it worked before (#3719)?
Yep. Unfortunately nothing has been changed or fixed ever since, so those config updates to parent zones remain useless for the time being.
Hi @dnsmichi, we have more than 10,000 downtimes because we put dev/setup-phase hosts and services into permanent downtime, which actually creates a downtime for every day for each host and service. The log file is nearly unusable due to those "discarding config update" messages when the icinga2 master zone starts.
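For context, a "permanent" downtime modelled as a recurring ScheduledDowntime generates one Downtime object per day and per matching checkable, which is where the object count explodes (a sketch; the name and the assign condition are invented):

```
apply ScheduledDowntime "dev-permanent" to Service {
  author = "puppet"
  comment = "Dev/setup phase, permanently in downtime"
  fixed = true
  // One Downtime object is generated per day from these ranges,
  // so a few thousand matching services yield tens of thousands
  // of runtime Downtime objects over time.
  ranges = {
    "monday - sunday" = "00:00-24:00"
  }
  assign where host.vars.env == "dev"
}
```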
The logging should be gone in 2.6.x, the rest is an open todo.
I have this error in the master log, where 'hugin-munin.kozo.ch' would be the client.
I have the same behavior in my logs
Which versions are involved for both instances?
2.9.1-1
The bottom-up sync functionality will be required for future autodiscovery/inventory features and will remain in place for the time being. When we iterate on the aforementioned functionality, we will evaluate possible optimizations.
Hi everyone,
I noticed a very strange behaviour. When we create downtimes (via icingaweb2 or configured recurring downtimes), the downtime is written into the _api config package (e.g.
/var/lib/icinga2/api/packages/_api/icinga-server-1485934526-1/conf.d/downtimes
) and then it is also synced to the affected client zone via the API and saved in
/var/lib/icinga2/api/packages/_api/icinga-client-1485934667-0/conf.d/downtimes
on the client. I tried not having the configured downtimes in a global template zone (in zones.d on the master) and put them in the master zone instead, but they still get synced to the client.
First, is this intended? Why does the downtime get synced as runtime config to the client? The client could apply the downtime on its own via a global-template apply rule.
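The alternative hinted at here would be a ScheduledDowntime apply rule in a global zone, so every instance evaluates it locally instead of receiving synced runtime Downtime objects (a sketch; the path, names and the assign condition are invented):

```
// /etc/icinga2/zones.d/global-templates/downtimes.conf
// Synced to all endpoints; each instance applies the rule itself.
apply ScheduledDowntime "nightly-maintenance" to Host {
  author = "icingaadmin"
  comment = "Nightly maintenance window"
  ranges = {
    "monday - sunday" = "02:00-03:00"
  }
  assign where host.vars.maintenance == true
}
```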
But the really strange thing is that the client tries to sync the config back to the master via the API. I'm using top-down config mode, so I can't see the point here:
in the log of the master:
the message appears for each downtime that is currently active / scheduled for the client and for which there is a file in:
/var/lib/icinga2/api/packages/_api/icinga-client-1485934667-0/conf.d/downtimes
When you have accept_config enabled on the master (which is not needed afaik), the message changes from "Ignoring" to "Discarding". This tells me that it actually does not want/need this config update at all. I never saw those messages in Icinga 2 2.5, but I can't tell for sure whether this was introduced in 2.6.
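For reference, accept_config is an attribute of the ApiListener object; on a top-down master it can stay disabled (a sketch of a typical api.conf, values are only an example):

```
// /etc/icinga2/features-available/api.conf on the master
object ApiListener "api" {
  accept_commands = true
  // Not needed on the master in top-down config mode; with this
  // disabled the master logs "Ignoring", with it enabled "Discarding".
  accept_config = false
}
```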
The actual problem with this is that it seems to slow down Icinga 2 restarts dramatically on the master and spams the log, as we have around 50 host and 3000 service downtimes (downtimes configured by Puppet for non-production systems).