Duplicate Key on MySQL after upgrading to v2.7.1 #5623
This seems to be the same issue as seen in #5603.
Workaround: Drop the unique constraint on the affected table.
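A minimal sketch of that workaround in MySQL DDL. The unique key name here (`uq_internal_downtime_id`) is a hypothetical placeholder; check the actual key name in your IDO schema (e.g. via `SHOW INDEX FROM icinga_scheduleddowntime`) before running anything:

```sql
-- Hypothetical key/index names: adjust to your actual IDO schema.
-- Drop the unique constraint that triggers the duplicate key error:
ALTER TABLE icinga_scheduleddowntime DROP INDEX uq_internal_downtime_id;

-- Optionally keep the lookup fast by re-adding a plain (non-unique)
-- index on the same column, under a new name:
CREATE INDEX idx_internal_downtime_id
    ON icinga_scheduleddowntime (internal_downtime_id);
```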
Will this cause problems in our live cluster? :)
No, the workaround won't cause problems. This constraint is only ever triggered by a programming fault inside the application; it does not add any extra protection against duplicate inserts here. It is a leftover from the old schema. If you fear performance issues, drop the unique key and re-apply it as an additional index (with a new name).

The problem you're describing is hard to reproduce. It comes from the original fix in #5585, which changed the update queries. Say you have two downtimes:

Now the DB IDO feature fires an update against the scheduleddowntimes/comments table. Or, so to speak, two updates. These downtimes share some details, which the WHERE condition includes. A short "simulation":

This does not work with a unique constraint here. The constraint is not needed by Icinga 2; only the index is important, for better performance. We will change the constraint for 2.8, but cannot for minor releases (the impact on updates and re-indexing huge tables, especially the history tables, is enormous). The quick fix is to restore the old behaviour by matching the constraint in the WHERE condition for updates. This updates the tables the other way around: the rows whose internal_downtime_id column matches will receive their updates. Since you have a range here of, say, 1..100, a config reload will always fire 100 update queries. The previous patch tried to avoid that.

Anyway, the legacy_id/internal_downtime_id originates from the standalone Nagios design and is not unique across clusters and distributed environments. Names are unique, and that is where we're heading with Icinga 2.
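The failure mode described above can be sketched with a toy simulation. This uses Python's `sqlite3` standard library in place of MySQL, and the table and column names are simplified stand-ins, not the real IDO schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Simplified stand-in for the IDO downtime table, with a unique
# constraint on the legacy internal_downtime_id column.
cur.execute("""
    CREATE TABLE scheduleddowntime (
        downtime_id INTEGER PRIMARY KEY,
        name TEXT,
        internal_downtime_id INTEGER UNIQUE
    )
""")
cur.executemany(
    "INSERT INTO scheduleddowntime VALUES (?, ?, ?)",
    [(1, "downtime-a", 1), (2, "downtime-b", 2)],
)

# After a config reload the legacy ids are re-assigned. Updating one
# row to an id still held by a sibling row trips the unique constraint,
# even though the end state would be perfectly consistent.
try:
    cur.execute(
        "UPDATE scheduleddowntime SET internal_downtime_id = 2 "
        "WHERE name = 'downtime-a'"
    )
except sqlite3.IntegrityError as exc:
    print("duplicate key:", exc)
```

With a plain (non-unique) index instead, the same update sequence goes through without errors, which is why dropping the constraint is safe here.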
Thanks for that explanation! We removed the unique constraint.
Just to mention: we have the same problem with PostgreSQL as the IDO backend.
Upgrade to v2.7.2, which fixes this regression.
Already planning the change ;) Thanks.
2.8 is out this week too, which includes better indexes for the original patch. Reasoning: minor updates should not result in DB maintenance cycles.
Are you sure this is fixed in 2.8? We had the issue again after upgrading to 2.8:
I'm getting this error while trying to install Icinga 2 r2.10.1-1 using the Puppet module. Any idea how to solve it?
After upgrading to the latest v2.7.1, IdoMysqlConnection throws a duplicate key error every 10 seconds. Because of this, no updates are written to the MySQL DB.
Expected Behavior
Icinga should run without duplicate key errors.
Current Behavior
Icinga 2 throws the following error every 10s and does not populate the DB after a restart:
The very first startup with a fresh database worked, but after a restart the problem occurred again!
Possible Solution
None; we needed to downgrade to 2.7.0, which afterwards started without problems.
Before that we tried to start Icinga 2 without the icinga2.state file and with a clean database, but the problem persisted after restarting Icinga 2.
Steps to Reproduce (for bugs)
We could not reproduce the issue on our test instance, so we sadly cannot provide steps to reproduce. We can provide further information on request if this helps to investigate the issue.
Context
Your Environment
- Version used (`icinga2 --version`): 2.7.1
- Enabled features (`icinga2 feature list`): api checker command compatlog ido-mysql influxdb livestatus mainlog notification
- Config validation (`icinga2 daemon -C`): works
- `zones.conf` file (or `icinga2 object list --type Endpoint` and `icinga2 object list --type Zone`) from all affected nodes: 2 Cluster Nodes + 1 Satellite