Netbox plugin: vm duplicates at plugin sync #6038
Comments
Hi, I'm not sure this is something that we should do because:
We're also trying to avoid deleting things from a pool that isn't reachable in case you need to retrieve some information about that pool using Netbox.
Well, we have the VM's UUID, so 99% of the time we know that the VMs now exist on another pool, no matter why. And if the transfer has already happened, the only problem is to confirm it and move the data instead of creating a clone. A pool can be disabled for weeks if hardware replacement is required.
VM migration is indeed a usual action, but removing an entire pool isn't. The behavior you described only happens if you remove/disconnect a pool. If the two pools are still there, the old VMs will be removed from Netbox. You can also manually trigger a Netbox synchronization right before disabling the pool to remove its VMs from Netbox.
Fixes #6038, Fixes #6135, Fixes #6024, Fixes #6036

See https://xcp-ng.org/forum/topic/6070
See zammad#5695
See https://xcp-ng.org/forum/topic/6149
See https://xcp-ng.org/forum/topic/6332

Complete rewrite of the plugin. Main functional changes:
- Synchronize VM description
- Fix duplicated VMs in Netbox after disconnecting one pool
- Migrating a VM from one pool to another keeps VM data added manually
- Fix largest IP prefix being picked instead of smallest
- Fix synchronization not working if some pools are unavailable
- Better error messages
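To illustrate how a sync pass can avoid the duplicates described in this issue, here is a minimal sketch of UUID-keyed reconciliation. It assumes each VM exposes a stable UUID and that NetBox records are also keyed by that UUID; the `sync` function and the field names (`description`, `comments`) are hypothetical and not the plugin's actual API. The idea matches the fix list above: a migrated VM keeps its UUID, so it is updated in place (preserving manually added data) rather than re-created, and only VMs absent from every reachable pool are deleted.

```python
def sync(pool_vms, netbox_vms):
    """Reconcile VMs from reachable pools against NetBox records.

    pool_vms:   {uuid: {"name": ..., "description": ...}} from the hypervisors
    netbox_vms: {uuid: {"name": ..., "description": ..., "comments": ...}}
    Returns (to_create, to_update, to_delete), all keyed by UUID.
    """
    # VMs seen on a pool but unknown to NetBox are created.
    to_create = {u: vm for u, vm in pool_vms.items() if u not in netbox_vms}

    # A migrated VM keeps its UUID, so it matches an existing record and is
    # updated in place; fields the sync does not manage (e.g. "comments",
    # added manually in NetBox) are carried over untouched by the merge.
    to_update = {
        u: {**netbox_vms[u], **vm}
        for u, vm in pool_vms.items()
        if u in netbox_vms
        and any(netbox_vms[u].get(k) != v for k, v in vm.items())
    }

    # Only VMs absent from every reachable pool are deleted, never VMs on a
    # pool that simply could not be queried during this run.
    to_delete = set(netbox_vms) - set(pool_vms)
    return to_create, to_update, to_delete
```

Under this scheme, a VM that moved from pool A to pool B still shows up once in `pool_vms` (same UUID), so it lands in `to_update` with its manual `comments` preserved, instead of in both `to_create` and `to_delete`.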
Describe the bug
VMs with the same UUID are duplicated.
To Reproduce
Expected behavior
The VM should be moved from the first pool to the second based on its unique VM ID, instead of being duplicated.
Additional context
NetBox version 3.0.11