
Better cluster compatibility / Idempotence #168

Closed
tabnul opened this issue Apr 18, 2020 · 5 comments


tabnul commented Apr 18, 2020

Hi, first of all: great work on this plugin.
I do think there is room for improvement in the cluster area.

The nature of a virtual machine cluster is that you are able to migrate workloads and to cope with downtime of a specific hypervisor.
It looks like the current implementation of this proxmox provider does not cope with this scenario; it assumes that the created VMs are still on the hypervisor they were initially created on (the statefile holds a hard reference to the host and looks for the VM on that host whenever it needs to change something).

I think this can be fixed by changing the UID in the statefile and/or changing some of the logic.

Of course you will still (initially) have to give a host parameter to create the resource, but afterwards the statefile, or the logic behind it, should look up on the cluster where the VMID lives and dynamically use that hypervisor when adjusting the resource, as sketched below. (The VMID is still a unique ID within the cluster.)
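
For illustration, here is a minimal sketch (not actual provider code) of how the hosting node for a VMID could be resolved dynamically. It assumes direct HTTP access to the Proxmox API endpoint GET /api2/json/cluster/resources?type=vm, which lists every guest in the cluster together with the node it currently runs on; the names findNodeForVMID, apiURL and apiToken are illustrative only:

package cluster

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// clusterResource is the subset of fields we need from
// GET /api2/json/cluster/resources?type=vm.
type clusterResource struct {
	VMID int    `json:"vmid"`
	Node string `json:"node"`
}

type clusterResourcesResponse struct {
	Data []clusterResource `json:"data"`
}

// findNodeForVMID asks the cluster which node currently hosts the given
// VMID, instead of trusting the node recorded in the statefile.
// (TLS configuration and error retries are omitted for brevity.)
func findNodeForVMID(apiURL, apiToken string, vmid int) (string, error) {
	req, err := http.NewRequest("GET", apiURL+"/api2/json/cluster/resources?type=vm", nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Authorization", "PVEAPIToken="+apiToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var parsed clusterResourcesResponse
	if err := json.NewDecoder(resp.Body).Decode(&parsed); err != nil {
		return "", err
	}
	for _, r := range parsed.Data {
		if r.VMID == vmid {
			return r.Node, nil
		}
	}
	return "", fmt.Errorf("VMID %d not found on any node in the cluster", vmid)
}

With a lookup like this, the provider could reconcile state against whichever node currently hosts the VM instead of assuming it never moved.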


ghost commented Apr 19, 2020

Absolutely, and I much appreciate your work on this great plugin.
Just to add, this is also true for all optional network parameters. If they're not specified, the next run of terraform plan/apply will pick up the default values assigned and report them as changes.

I am currently using lifecycle to get around this.

lifecycle {
  ignore_changes = [
    network,
    target_node,
  ]
}
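
(Worth noting: ignoring target_node means Terraform will no longer try to move the VM back to its original node after a migration, which is effectively the decoupling requested above, at the cost of Terraform never managing node placement again.)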


gostega commented Jul 23, 2021

I would like this functionality too. I have a Proxmox cluster and would like the provider to check for the VM's existence on any node of the cluster. At the moment I have to specify a single node. If a VM, let's say VM1, migrates to a different node, NODE2, the terraform provider thinks it has been destroyed and tries to create it again on NODE1. It also tries to destroy VM1 on NODE2, for the same reason.

I know that Proxmox itself requires the exact node as an argument when creating the VM, but I want the provider to be 'cluster-aware' and basically decouple the node name from the VM state. When checking for the existence of a VM, it should check all nodes (in a list provided by the user) to see if the VM exists on any of them. If the VM does not exist on any node, it should choose a node at random from the list, and likewise for the clone source (a sketch of this selection logic follows the example below). Example syntax:

# list of terraform-managed VMs to create in our environment
variable "vm_deploy_list" {
  description = "list of VMs to provision for XX deploy environment"
  type        = map(any)
  default = {
    XXFRONTENDDEV1 = {
      ip   = "ip=192.168.1.23/24,gw=192.168.1.1",
      desc = "Dev deploy server for xx app",
      env  = "master"
    },
    XXFRONTENDDEVPUB1 = {
      ip   = "ip=192.168.1.24/24,gw=192.168.1.1",
      desc = "temp public dev deploy server for XXapp-frontend",
      env  = "dev"
    }
  }
}
# Create the VM(s)
resource "proxmox_vm_qemu" "bbx_vms" {

    ## Wait for the cloud-config file to exist
    #depends_on = [
    #  null_resource.cloud_init_deb10
    #]

    for_each = var.vm_deploy_list

    # Per VM settings (from variable)
    name        = each.key
    ipconfig0   = each.value.ip
    desc        = each.value.desc

    # specify a list, or something.
    target_node = var.cluster

    # The template name to clone this vm from
    clone = "deb10-current"
.....etc
}
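
To make the proposal concrete, here is a rough sketch (again, not provider code) of the node-selection logic described above: manage the VM wherever it already lives, otherwise place it on a random node from the user-supplied list. The lookup parameter stands in for any VMID-to-node resolver, such as the findNodeForVMID sketch in the first comment; resolveTargetNode is an illustrative name:

package cluster

import (
	"fmt"
	"math/rand"
)

// resolveTargetNode returns the node that already hosts vmid, or a random
// node from candidates when the VM does not yet exist anywhere in the
// cluster. lookup maps a VMID to the node currently hosting it.
func resolveTargetNode(vmid int, candidates []string, lookup func(int) (string, error)) (string, error) {
	if node, err := lookup(vmid); err == nil {
		// The VM already exists somewhere: manage it in place, regardless
		// of which node the statefile recorded at creation time.
		return node, nil
	}
	if len(candidates) == 0 {
		return "", fmt.Errorf("no candidate nodes supplied for VMID %d", vmid)
	}
	// The VM does not exist yet: pick a placement at random from the list,
	// as proposed above. A real implementation might balance by load instead.
	return candidates[rand.Intn(len(candidates))], nil
}

The same resolution would apply to the clone source: look up the template's VMID across the cluster before issuing the clone call.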


ghost commented Jul 28, 2021

@gostega and @tabnul, I ended up creating an Ansible playbook to do exactly that; you can find it here: https://gitlab.com/itnoobs-automation/ansible/proxmox-vm

@github-actions

This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide the full configuration and debug logs.

github-actions bot added the stale label Jun 15, 2023
@github-actions

This issue was closed because it has been inactive for 5 days since being marked as stale.

github-actions bot closed this as not planned Jun 20, 2023