cloud-init no longer working. #944
@JGHLab check #915. I had the same issue. @Tinyblargon's answer:

```hcl
resource "proxmox_vm_qemu" "instance" {
  // omitted for brevity
  cloudinit_cdrom_storage = "local-lvm"
  ciuser                  = "root"
  // omitted for brevity
}
```
@JGHLab Looks like you have a mismatch between the bootdisk and the disks definition. You probably need to change the boot disk definition to:

```hcl
disks {
  scsi {
    scsi0 {
      disk {
        size    = 20
        cache   = "writeback"
        storage = "local-lvm"
      }
    }
  }
}
```
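If it helps, here is a minimal sketch of a resource where the boot disk and the disks block agree. The VM name, node, template, and storage values are placeholders I made up for illustration, not values from this thread:

```hcl
resource "proxmox_vm_qemu" "ci_example" {
  name        = "ci-test"     # hypothetical VM name
  target_node = "pve"         # hypothetical Proxmox node
  clone       = "ci-template" # hypothetical cloud-init template

  os_type                 = "cloud-init"
  cloudinit_cdrom_storage = "local-lvm"
  ciuser                  = "root"

  # The boot disk must reference a slot that actually exists in disks {}
  bootdisk = "scsi0"
  scsihw   = "virtio-scsi-pci"

  disks {
    scsi {
      scsi0 {
        disk {
          size    = 20
          cache   = "writeback"
          storage = "local-lvm"
        }
      }
    }
  }
}
```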
I seem to be having the same issue; it looks like it is deleting the cloud-init drive and adding two cdrom drives.
@JGHLab Did adding @hanley-development I see you already have the
I suspect this is an issue from 2.9.14 or earlier that was fixed in 3.0.1-rc1, because IIRC the cloudinit drive was switched from using ide1 to using ide2 in that release.
Hey @hestiahacker, yes, I have it in my provider.tf.
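For anyone unsure what "having it in provider.tf" looks like, here is a sketch of pinning the provider to the 3.0.1-rc1 build mentioned above. The API URL and token variables are placeholders, and this assumes the rc build is available to you from the registry; otherwise it has to be built from source as mentioned later in the thread:

```hcl
terraform {
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "3.0.1-rc1" # pre-release versions must be pinned exactly
    }
  }
}

provider "proxmox" {
  pm_api_url          = "https://pve.example.com:8006/api2/json" # placeholder URL
  pm_api_token_id     = var.pm_api_token_id                      # placeholder variable
  pm_api_token_secret = var.pm_api_token_secret                  # placeholder variable
}
```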
After updating my clone with the following, the issue was resolved:

```hcl
boot = "order=scsi0;ide2"
```
When it comes to my setup, adding
https://github.com/sonic-networks/terraform/tree/master/proxmox/sonic
boot = "order=scsi0;ide3" The above worked for me |
And how do you set the cloudinit in that case? On the template, when setting

That helped me take a screenshot of the changes it wanted to make in the WebGUI view.
For me, none of those solutions work. Still waiting for someone who will have
@electropolis This issue is about the "Boot failed: no bootable disk." error, not anything related to cicustom. If you're having an issue specific to using cicustom, can you open a ticket about it with a minimal example that others can use to replicate the issue? I looked at the terraform code you linked to and it seemed to have a lot of things that were not portable, such as running some

I recently added a template to help with bug reporting, which you can find here: https://github.com/Telmate/terraform-provider-proxmox/pull/950/files

You should be able to change the vars.tf file and deploy that example. If that works as expected, add just the part that is causing the problem and post that. I'll pull down those files, reproduce the issue, and we can move closer to a solution.

Please understand that I'm not doubting that you're having a problem, or that there may very well be a bug in the code causing it. But I'm not able to recreate it, so I'm asking for your help in reproducing it and trying to keep the issue tracker organized to reduce everyone's frustration as best I can. ❤️
I already created the issue.
Huh? It's authorisation using keychain, if you are talking about where I store my password. Instead of pasting it on the command line, it is pulled into an environment variable using a keychain query.
No, the error is about the missing cloudinit drive that gets deleted, and the cloudinit drive was always on ide2 in previous releases, not on ide1 as you said. Now a blank cdrom is attached to ide2, which is why the cloudinit drive is forced to move to ide3. But that's not the whole story: something different is happening during that process. When changing the VM parameters (CPU, disk size), something suddenly deletes the cloudinit drive, and ide2 appears with a blank cdrom that wasn't even defined in the terraform config. That's unusual behaviour.
This should be resolved in the latest build; #959 has an example.
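For reference, my reading of the newer builds is that the cloudinit drive can also be declared explicitly inside the disks block, which should stop it from drifting between ide slots. Treat the exact block names below as an assumption on my part and check the example in #959 before relying on it:

```hcl
resource "proxmox_vm_qemu" "explicit_cloudinit_sketch" {
  # ... other settings omitted for brevity ...

  disks {
    ide {
      ide2 {
        cloudinit {
          storage = "local-lvm" # assumed syntax, verify against #959
        }
      }
    }
    scsi {
      scsi0 {
        disk {
          size    = 20
          storage = "local-lvm"
        }
      }
    }
  }
}
```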
With this code, it works:

```hcl
resource "null_resource" "cloud_init_user_data_file" {
  connection {
    user        = "root"
    private_key = var.root_private_key
    host        = var.prx_node
    port        = 22
  }

  provisioner "file" {
    content = templatefile(
      "../files/cloud-init/cloud-init.cloud_config_user.tftpl",
      {
        hostname            = var.vm_name,
        user                = var.adm_username,
        password            = var.adm_pwd,
        ssh_authorized_keys = var.ssh_authorized_keys,
      }
    )
    destination = "/var/lib/vz/snippets/user_data_vm-${var.vm_name}.yml"
  }

  triggers = {
    hostname            = var.vm_name,
    user                = var.adm_username,
    ssh_authorized_keys = var.ssh_authorized_keys,
    password            = var.adm_pwd,
  }
}

resource "null_resource" "cloud_init_network_data_file" {
  connection {
    user        = "root"
    private_key = var.root_private_key
    host        = var.prx_node
    port        = 22
  }

  provisioner "file" {
    content = templatefile(
      "../files/cloud-init/cloud-init.cloud_config_network.tftpl",
      {
        ip_dns = var.ip_dns,
      }
    )
    destination = "/var/lib/vz/snippets/network_data_vm-${var.vm_name}.yml"
  }

  triggers = {
    ip_dns = var.ip_dns,
  }
}

resource "proxmox_vm_qemu" "main" {
  name                   = var.vm_name
  target_node            = var.prx_node
  clone                  = "cirocky9tpl"
  desc                   = "Rocky Linux 9 VM fully cloned from cirocky9tpl"
  agent                  = 1
  cores                  = 4
  define_connection_info = false
  force_create           = true
  memory                 = var.vm_memory
  onboot                 = true
  qemu_os                = "l26"
  scsihw                 = "virtio-scsi-single"

  disks {
    scsi {
      scsi0 {
        disk {
          size      = 50
          storage   = "local"
          replicate = true
        }
      }
    }
  }

  network {
    bridge  = "vmbr1"
    model   = "virtio"
    macaddr = var.vm_mac
  }

  os_type                 = "cloud-init"
  ciuser                  = var.adm_username
  cipassword              = var.adm_pwd
  sshkeys                 = var.ssh_authorized_keys
  cloudinit_cdrom_storage = "local"
  cicustom                = "user=local:snippets/user_data_vm-${var.vm_name}.yml,network=local:snippets/network_data_vm-${var.vm_name}.yml"

  provisioner "remote-exec" {
    connection {
      user                = var.adm_username
      private_key         = var.adm_private_key
      host                = self.name
      bastion_host        = var.bind_ip_address
      bastion_port        = var.bind_ssh_port
      bastion_user        = var.bind_ssh_user
      bastion_private_key = var.bind_ssh_private_key
    }
    inline = [
      "ip a"
    ]
  }

  depends_on = [
    null_resource.cloud_init_user_data_file,
    null_resource.cloud_init_network_data_file,
  ]

  lifecycle {
    replace_triggered_by = [
      null_resource.cloud_init_user_data_file,
      null_resource.cloud_init_network_data_file,
    ]
  }
}
```
And the user-data template (`../files/cloud-init/cloud-init.cloud_config_user.tftpl`):

```yaml
#cloud-config
hostname: ${hostname}
fqdn: ${hostname}
manage_resolv_conf: true
user: ${user}
password: ${password}
ssh_authorized_keys:
  - ${ssh_authorized_keys}
chpasswd:
  expire: False
timezone: Europe/Paris
locale: fr_FR.UTF-8
keyboard:
  layout: fr
  variant: oss
package_upgrade: false
packages:
  - qemu-guest-agent
  - firewalld
  - bash-completion
runcmd:
  - sudo nmcli d disconnect eth0 && sleep 2 && sudo nmcli d connect eth0
  - sudo systemctl enable --now firewalld
output:
  init:
    output: "> /var/log/cloud-init.out"
    error: "> /var/log/cloud-init.err"
  config: "tee -a /var/log/cloud-config.log"
  final:
    - ">> /var/log/cloud-final.out"
    - "/var/log/cloud-final.err"
```
@LeoShivas That's a strange combination of using cicustom together with the cloud-init options from Proxmox. Why do you set the SSH keys twice? Once in the user-data and a second time in sshkeys?
Thank you @jaket91-1. After matching the disk with my template, it now works and there is no duplicate unused disk.
Because I tried to remove it and it doesn't work without it. On the previous version of terraform-provider-proxmox, I didn't have to add
@LeoShivas Ok, I understand. So you can use the link provided by @Tinyblargon and it works with a manual compilation of the provider from the master branch. Then you can use pure
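In other words, once a build with the fix is in place, a pure-cicustom variant of the config above might look like the sketch below. This is only an illustration based on the config posted earlier in this thread; whether ciuser/cipassword/sshkeys can really be dropped depends on the provider version you end up on:

```hcl
resource "proxmox_vm_qemu" "main" {
  # ... name, clone, node, cpu, memory, disks, network as in the config above ...

  os_type                 = "cloud-init"
  cloudinit_cdrom_storage = "local"

  # All user, password, SSH key and network settings come from the snippets,
  # so the ciuser/cipassword/sshkeys attributes are left out entirely.
  cicustom = "user=local:snippets/user_data_vm-${var.vm_name}.yml,network=local:snippets/network_data_vm-${var.vm_name}.yml"
}
```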
Hi guys, I've been reading the issues and can't get my head around the fix needed to be able to run cloud-init templates. Luckily my old system still works, but I'm trying to implement it on a second Proxmox system and I guess some update has broken it. Could anyone please give feedback on what I am doing wrong?
I was able to get it to at least start creating the VMs, but now I get the error "Boot failed: no bootable disk." I can't seem to figure out what I am doing wrong.