cloud-init no longer working. #944

Open · JGHLab opened this issue Feb 22, 2024 · 20 comments

@JGHLab commented Feb 22, 2024

Hi guys, I've been reading through the issues and can't get my head around the fix needed to run cloud-init templates. Luckily my old system still works, but I'm trying to implement it on a second Proxmox system, and I guess some update has broken it. Could anyone please give feedback on what I am doing wrong?

I was able to get it to at least start creating the VMs, but now I get the error "Boot failed: no bootable disk." I can't figure out what I'm doing wrong.

terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.1-rc1"
    }
  }
}

provider "proxmox" {
  # pm_api_url is the API endpoint; append /api2/json to the host URL
  pm_api_url = "http://198.18.202.11:8006/api2/json"

  # api token id is in the form of: <username>@pam!<tokenId>
  pm_api_token_id = "terraform@pam!new_token_id"

  # this is the full secret wrapped in quotes.
  pm_api_token_secret = "x"

  # leave pm_tls_insecure set to true unless Proxmox has a valid SSL certificate
  pm_tls_insecure = true
}

variable "vm_configs" {
  description = "Configuration for VMs"
  type        = map(any)

  default = {
    velociraptor = {
      name        = "velociraptor"
      memory      = 2048
      ip_address  = "198.18.202.210"
      cores       = 2
      storage     = "local-lvm"
      storage_size = "10G"
    },
    plaso = {
      name        = "plaso"
      memory      = 4096
      ip_address  = "198.18.202.211"
      cores       = 2
      storage     = "local-lvm"
      storage_size = "20G"
    }
    # Add other VM configurations as needed
  }
}

resource "proxmox_vm_qemu" "vms" {
  for_each = var.vm_configs

  name       = each.value["name"]
  target_node = var.proxmox_host
  clone       = var.template_name

  agent    = 1
  os_type  = "cloud-init"
  cores    = each.value["cores"]
  sockets  = 1
  cpu      = "host"
  memory   = each.value["memory"]
  scsihw   = "virtio-scsi-pci"
  bootdisk = "scsi0"

  disks {
    virtio {
      virtio0 {
        disk {
          size            = 20
          cache           = "writeback"
          storage         = "local-lvm"
        }
      }
    }
  }

  network {
    model  = "virtio"
    bridge = "vmbr0"
  }

  lifecycle {
    ignore_changes = [
      network,
    ]
  }

  ipconfig0 = "ip=${each.value["ip_address"]}/24,gw=198.18.201.1"

  sshkeys = <<-EOF
  ${var.ssh_key}
  EOF
}
@solairen

@JGHLab check #915. I had the same issue.

@Tinyblargon's answer:

resource "proxmox_vm_qemu" "instance" {
// omitted for brevity
  cloudinit_cdrom_storage = "local-lvm"
  ciuser = "root"
// omitted for brevity
}
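
For context, a minimal sketch of how this fix might slot into the original resource (the storage name and user are taken from the answer above but are assumptions; adjust to your setup):

resource "proxmox_vm_qemu" "vms" {
  for_each = var.vm_configs

  name        = each.value["name"]
  target_node = var.proxmox_host
  clone       = var.template_name

  # Explicitly give the cloud-init CD-ROM a storage target so the
  # provider creates it instead of dropping it on clone.
  cloudinit_cdrom_storage = "local-lvm" # assumption: match your storage pool
  ciuser                  = "root"      # assumption: match your cloud-init user

  # ... rest of the configuration unchanged ...
}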

@Tinyblargon (Collaborator)

@JGHLab is virtio0 a disk in the template as well?

@jaket91-1

@JGHLab it looks like you have a mismatch between the bootdisk and the disks definition. You probably need to change the bootdisk to virtio0, or change the disks definition to scsi as below. I had the same problem when cloning: the template disk was scsi0 but TF was creating a virtio0 disk:

  disks {   
    scsi {
      scsi0 {
        disk {
          size            = 20
          cache           = "writeback"
          storage         = "local-lvm"
        }
      }
    }
  }
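
The other direction of the same fix, as a sketch: keep the original virtio disk definition and point bootdisk at it instead (this assumes the template's disk really is on virtio0):

resource "proxmox_vm_qemu" "vms" {
  # ... other settings unchanged ...
  bootdisk = "virtio0" # must match the bus/slot used below and in the template

  disks {
    virtio {
      virtio0 {
        disk {
          size    = 20
          cache   = "writeback"
          storage = "local-lvm"
        }
      }
    }
  }
}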

@hanley-development commented Feb 25, 2024

I seem to be having the same issue; it looks like it is deleting the cloud-init drive and adding two CD-ROM drives.

resource "proxmox_vm_qemu" "brick-layer00" {
    name = "brick-layer00"
    desc = "Automation Node"
    vmid = "400"
    target_node = "pve1"
    bios = "ovmf"
    scsihw = "virtio-scsi-pci"
    boot = "order=scsi0;ide1"
    agent = 1
    bootdisk = "scsi0" 
    hotplug  = "network,disk,usb"
    clone = "ubuntu-2204-template"
    full_clone = true
    cores = 2
    sockets = 1
    cpu = "host"
    memory = 2048

    network {
      bridge = "vmbr0"
      model = "virtio"
    }



    disks {
      scsi {
        scsi0 {
          disk {
            size = 32
            storage = "ceph-replicate"
            iothread = false
            emulatessd = true
            discard = true
            backup = true
           }
        }
      }
    }

    os_type = "cloud-init"
    ciuser = "mhanl"
    cloudinit_cdrom_storage = "ceph-replicate"
    cipassword = var.cipassword_var
    ipconfig0 = "ip=192.168.5.54/24,gw=192.168.5.1"
    sshkeys = var.ssh_key_var


}



│ Error: error updating VM: 500 Internal Server Error, error status: {"data":null} (params: map[agent:1 bios:ovmf boot:order=scsi0;ide1 cipassword:<> ciuser:mhanl cores:2 cpu:host delete:ide1 description:Automation Node hotplug:network,disk,usb ide2:none,media=cdrom ide3:ceph-replicate:cloudinit,format=raw ipconfig0:ip=192.168.5.54/24,gw=192.168.5.1 kvm:true memory:2048 name:brick-layer00 net0:virtio=7E:A1:F2:D8:76:D1,bridge=vmbr0 numa:false onboot:false scsi0:ceph-replicate:vm-400-disk-1,discard=on,replicate=0,ssd=1 scsihw:virtio-scsi-pci sockets:1 tablet:true vmid:400])

│ with proxmox_vm_qemu.brick-layer00,
│ on brick-layer00.tf line 1, in resource "proxmox_vm_qemu" "brick-layer00":
│ 1: resource "proxmox_vm_qemu" "brick-layer00" {

@hestiahacker (Contributor)

@JGHLab Did adding cloudinit_cdrom_storage = "local-lvm" to your terraform fix the issue?

@hanley-development I see you already have cloudinit_cdrom_storage; are you using version 3.0.1-rc1 of the provider? You should have something like this in your terraform definitions:

terraform {         
  required_providers {
    proxmox = {
      source = "Telmate/proxmox"
      version = "=3.0.1-rc1"
    }
  }  
  required_version = ">= 0.14"
}

I suspect this is an issue from 2.9.14 or earlier that was fixed in 3.0.1-rc1 because IIRC the cloudinit drive was switched from using ide1 to using ide2 in that release.
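
As a hedged sketch of what that implies for explicit boot orders (which ide slot the cloud-init drive lands on is an assumption to verify on your own VM):

resource "proxmox_vm_qemu" "example" {
  # ... other settings ...
  bootdisk = "scsi0"
  # With 3.0.1-rc1 the cloud-init CD-ROM is expected on ide2 rather than ide1,
  # so an explicit boot order must reference the slot that actually exists.
  boot = "order=scsi0;ide2"
}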

@MixedOne

Hey @hestiahacker, I already mentioned this in #935 (comment). It's an existing issue, and the workarounds unfortunately don't help; it seems I'm forced to wait for a new release that will hopefully fix it.

@hanley-development

@hestiahacker

Yes, I have it in my provider.tf:

terraform {
  required_version = ">=0.13.0"
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "3.0.1-rc1"
    }
  }
}

@hanley-development

After updating my clone with the following, the issue was resolved:

boot = "order=scsi0;ide2"

resource "proxmox_vm_qemu" "brick-layer00" {
    name = "brick-layer00"
    desc = "Automation Node"
    vmid = "400"
    target_node = "pve1"
    bios = "ovmf"
    scsihw = "virtio-scsi-pci"
    boot = "order=scsi0;ide2"
    agent = 1
    bootdisk = "scsi0"
    hotplug  = "network,disk,usb"
    clone = "ubuntu-2204-template2"
    full_clone = true
    cores = 2
    sockets = 1
    cpu = "host"
    memory = 2048

    network {
      bridge = "vmbr0"
      model = "virtio"
    }



    disks {
      scsi {
        scsi0 {
          disk {
            size = 32
            storage = "ceph-replicate"
            iothread = false
            emulatessd = true
            discard = true
            backup = true
           }
        }
      }
    }

    os_type = "cloud-init"
    ciuser = "mhanl"
    cloudinit_cdrom_storage = "ceph-replicate"
    cipassword = var.cipassword_var
    ipconfig0 = "ip=192.168.5.54/24,gw=192.168.5.1"
    sshkeys = var.ssh_key_var


}

@electropolis

[quoting @hanley-development's comment above in full]

In my setup, adding boot = "order=scsi0;ide2" didn't help. The cloud-init drive is still deleted, and I get an empty CD-ROM without cloud-init.


local_file.cloud_init_network-config_file[0]: Creating...
local_file.cloud_init_user_data_file[0]: Creating...
local_file.cloud_init_user_data_file[0]: Creation complete after 0s [id=bbbb0db273574b7267fed45ea998655ffc26a2e0]
local_file.cloud_init_network-config_file[0]: Creation complete after 0s [id=584397ad1c649775dfef64cff2484d781972e34c]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Creating...
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [10s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [20s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [30s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [40s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [50s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [1m0s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [1m10s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [1m20s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [1m30s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [1m40s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [1m50s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [2m0s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [2m10s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [2m20s elapsed]

https://github.com/sonic-networks/terraform/tree/master/proxmox/sonic

@ssbackwards

[quoting @hanley-development's and @electropolis's comments above in full]

boot = "order=scsi0;ide3"

The above worked for me

@electropolis commented Mar 4, 2024

[quoting the exchange above in full, ending with @ssbackwards:]

boot = "order=scsi0;ide3"

The above worked for me

And how did you set the cloud-init drive in that case? When setting up the template with qm set <id> --ide2 local-lvm:cloudinit, did you use --ide2 or --ide3?
Because when I changed it, with a template previously generated on the --ide2 setup, I received an error:

╷
│ Error: error updating VM: 500 invalid bootorder: device 'ide3' does not exist', error status: {"data":null} (params: map[agent:1 bios:seabios boot:order=scsi0;ide3 cicustom:user=local:snippets/user-data_vm-srv-app-1.yaml,network=local:snippets/network-config_vm-srv-app-1.yaml cores:4 cpu:host hotplug:network,disk,usb ide2:none,media=cdrom kvm:true memory:8192 name:srv-app-1 net0:virtio=00:1E:67:01:10:01,bridge=skynet numa:false onboot:false scsi0:vms:vm-100-disk-0,replicate=0,ssd=1 scsihw:virtio-scsi-pci sockets:1 tablet:true vmid:100])
│
│   with proxmox_vm_qemu.cloudinit["srv-app-1"],
│   on main.tf line 1, in resource "proxmox_vm_qemu" "cloudinit":
│    1: resource "proxmox_vm_qemu" "cloudinit" {
│
╵

That let me screenshot the changes it wanted to make in the web GUI view.
So I guess I need to regenerate the template to set it up on --ide3?

    - name: Settings for Cloudinit
      tags: cloudinit
      block:

        - name: Set fact for cloudinit disk to check if exists
          ansible.builtin.set_fact:
            cloudinit_image: "{{ cloudinit_image | default([]) + [item] }}"
          loop: "{{ qm_config.results }}"
          when: not item.stdout is search('vm-{{ item.item.template_id}}-cloudinit')

        - name: CloudInit results
          ansible.builtin.debug:
            var: cloudinit_image
            verbosity: 2

        - name: Add cloud-init image as CDROM
          ansible.builtin.command: "qm set {{ item.item.template_id }} --ide2->3 // <- Here // local-lvm:cloudinit"
          loop: "{{ cloudinit_image }}"
          when: cloudinit_image is defined


@electropolis

[quoting the full exchange above, including my previous comment]

For me, none of those solutions work. I'm still waiting for someone who has the cicustom-with-snippets approach working, instead of the limited ciuser, cipassword, and so on.

@hestiahacker (Contributor)

@electropolis This issue is about the "Boot failed: no bootable disk." error, not anything related to cicustom. If you're having some issue specific to using cicustom, can you open a ticket about it with a minimal example that others can use to replicate the issue?

I looked at the terraform code you linked to, and it seemed to have a lot of things that were not portable, such as running a security command (which others don't have) to look up a password and set an environment variable. We all have deployment-specific things like this that are not portable, but when posting on the issue tracker, the common convention is to provide a small, portable example so it's easy for people to help. Very often, just the process of creating this minimal example will let you find and solve the issue without ever having to open or comment on a ticket.

I recently added a template to help with bug reporting, which you can find here: https://github.com/Telmate/terraform-provider-proxmox/pull/950/files

You should be able to change the vars.tf file and deploy that example. If that works as expected, add just the part that is causing the problem and post that. I'll pull down those files, reproduce the issue, and we can move closer to a solution.
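
For illustration, a minimal sketch of the kind of self-contained repro that makes this easy (every value here is a placeholder, not a working config):

terraform {
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "=3.0.1-rc1"
    }
  }
}

provider "proxmox" {
  pm_api_url          = var.pm_api_url          # placeholder variables,
  pm_api_token_id     = var.pm_api_token_id     # declared in vars.tf
  pm_api_token_secret = var.pm_api_token_secret
}

resource "proxmox_vm_qemu" "repro" {
  name                    = "cloudinit-repro"
  target_node             = var.target_node
  clone                   = var.template_name
  os_type                 = "cloud-init"
  cloudinit_cdrom_storage = "local-lvm" # assumption: match your storage
  ipconfig0               = "ip=dhcp"
}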

Please understand that I'm not doubting that you're having a problem, or that there may very well be a bug in the code causing it. But I'm not able to recreate it, so I'm asking for your help in doing so, and trying to keep the issue tracker organized to reduce everyone's frustration as best I can. ❤️

@electropolis commented Mar 4, 2024

I already created the issue.

I looked at the terraform code you linked to, and it seemed to have a lot of things that were not portable, such as running a security command (which others don't have) to look up a password and set an environment variable.

Huh? It's authorization using the keychain, if that's what you're talking about:

export TF_VAR_TF_PROXMOX_API_SECRET=$(security find-generic-password -l 'terraform_proxmox_pve1' -w)
export TF_VAR_TF_PROXMOX_API_ID="hashicorp@pve!terraform"

security is the CLI command for the macOS keychain, which is where I store my password. Instead of pasting it on the command line, it's pulled into the environment variable via a keychain query.

@electropolis This issue is about the "Boot failed: no bootable disk." error, not anything related to cicustom.

No, the error is about the cloud-init drive being deleted, and cloud-init was always on ide2 in previous releases, not on ide1 as you said. Now a blank CD-ROM gets attached to ide2, which is what forces cloud-init to move to ide3. But that's not the point: something different happens during that process. When changing VM parameters (CPU, disk size), something suddenly deletes the cloud-init drive, and ide2 appears with a blank CD-ROM that wasn't even defined in the terraform config. That's unusual behaviour.

@Tinyblargon (Collaborator)

This should be resolved in the latest build; #959 has an example.

@LeoShivas

With this code, it works:
proxmox_vm_qemu.tf:

resource "null_resource" "cloud_init_user_data_file" {
  connection {
    user        = "root"
    private_key = var.root_private_key
    host        = var.prx_node
    port        = 22
  }

  provisioner "file" {
    content = templatefile(
      "../files/cloud-init/cloud-init.cloud_config_user.tftpl",
      {
        hostname            = var.vm_name,
        user                = var.adm_username,
        password            = var.adm_pwd,
        ssh_authorized_keys = var.ssh_authorized_keys,
      }
    )
    destination = "/var/lib/vz/snippets/user_data_vm-${var.vm_name}.yml"
  }

  triggers = {
    hostname            = var.vm_name,
    user                = var.adm_username,
    ssh_authorized_keys = var.ssh_authorized_keys,
    password            = var.adm_pwd,
  }
}

resource "null_resource" "cloud_init_network_data_file" {
  connection {
    user        = "root"
    private_key = var.root_private_key
    host        = var.prx_node
    port        = 22
  }

  provisioner "file" {
    content = templatefile(
      "../files/cloud-init/cloud-init.cloud_config_network.tftpl",
      {
        ip_dns = var.ip_dns,
      }
    )
    destination = "/var/lib/vz/snippets/network_data_vm-${var.vm_name}.yml"
  }

  triggers = {
    ip_dns = var.ip_dns,
  }
}

resource "proxmox_vm_qemu" "main" {
  name                   = var.vm_name
  target_node            = var.prx_node
  clone                  = "cirocky9tpl"
  desc                   = "Rocky Linux 9 VM fully cloned from cirocky9tpl"
  agent                  = 1
  cores                  = 4
  define_connection_info = false
  force_create           = true
  memory                 = var.vm_memory
  onboot                 = true
  qemu_os                = "l26"
  scsihw                 = "virtio-scsi-single"

  disks {
    scsi {
      scsi0 {
        disk {
          size      = 50
          storage   = "local"
          replicate = true
        }
      }
    }
  }

  network {
    bridge  = "vmbr1"
    model   = "virtio"
    macaddr = var.vm_mac
  }

  os_type = "cloud-init"

  ciuser                  = var.adm_username
  cipassword              = var.adm_pwd
  sshkeys                 = var.ssh_authorized_keys
  cloudinit_cdrom_storage = "local"
  cicustom                = "user=local:snippets/user_data_vm-${var.vm_name}.yml,network=local:snippets/network_data_vm-${var.vm_name}.yml"

  provisioner "remote-exec" {
    connection {
      user                = var.adm_username
      private_key         = var.adm_private_key
      host                = self.name
      bastion_host        = var.bind_ip_address
      bastion_port        = var.bind_ssh_port
      bastion_user        = var.bind_ssh_user
      bastion_private_key = var.bind_ssh_private_key
    }
    inline = [
      "ip a"
    ]
  }

  depends_on = [
    null_resource.cloud_init_user_data_file,
    null_resource.cloud_init_network_data_file,
  ]

  lifecycle {
    replace_triggered_by = [
      null_resource.cloud_init_user_data_file,
      null_resource.cloud_init_network_data_file,
    ]
  }
}

cloud-init.cloud_config_user.tftpl:

#cloud-config
hostname: ${hostname}
fqdn: ${hostname}
manage_resolv_conf: true
user: ${user}
password: ${password}
ssh_authorized_keys:
  - ${ssh_authorized_keys}
chpasswd:
  expire: False
timezone: Europe/Paris
locale: fr_FR.UTF-8
keyboard:
  layout: fr
  variant: oss
package_upgrade: false
packages:
  - qemu-guest-agent
  - firewalld
  - bash-completion
runcmd:
  - sudo nmcli d disconnect eth0 && sleep 2 && sudo nmcli d connect eth0
  - sudo systemctl enable --now firewalld
output:
  init:
    output: "> /var/log/cloud-init.out"
    error: "> /var/log/cloud-init.err"
  config: "tee -a /var/log/cloud-config.log"
  final:
    - ">> /var/log/cloud-final.out"
    - "/var/log/cloud-final.err"

@electropolis

@LeoShivas that's a strange combination of cicustom with the built-in Proxmox cloud-init settings. Why do you use SSH keys twice, once in user-data and a second time in sshkeys = var.ssh_authorized_keys?

@aancw commented Mar 19, 2024

[quoting @jaket91-1's suggestion above to change the disks definition to scsi]

Thank you @jaket91-1; after matching the disk with my template, it now works and there's no duplicate unused disk.

@LeoShivas

@LeoShivas that's a strange combination of cicustom with the built-in Proxmox cloud-init settings. Why do you use SSH keys twice, once in user-data and a second time in sshkeys = var.ssh_authorized_keys?

Because I tried removing it, and it doesn't work without it. With the previous version of terraform-provider-proxmox, I didn't have to add ciuser, cipassword, and sshkeys (as I use cicustom). But with this new version, it doesn't work anymore.

@electropolis

@LeoShivas OK, I understand. So you can use the link provided by @Tinyblargon; it works with manual compilation of the provider from the master branch. Then you can use pure cicustom.
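
For reference, a minimal sketch of the pure-cicustom approach being discussed (snippet paths and storage names are assumptions, and the snippets must already exist on the node):

resource "proxmox_vm_qemu" "cicustom_only" {
  name        = "srv-app-1" # placeholder
  target_node = var.target_node
  clone       = var.template_name
  os_type     = "cloud-init"

  # Pure cicustom: user and network config come entirely from snippets,
  # so ciuser/cipassword/sshkeys are omitted.
  cicustom                = "user=local:snippets/user-data.yml,network=local:snippets/network-config.yml"
  cloudinit_cdrom_storage = "local" # the cloud-init CD-ROM still needs a storage target
}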
