
Cannot use cloud_config while creating VM #227

Closed
sMteX opened this issue Feb 28, 2023 · 16 comments


sMteX commented Feb 28, 2023

I'm currently facing an issue where we can create a cloud config in XO (or have it created with resource "xenorchestra_cloud_config"), but when it's used to initialize a new VM, it doesn't get applied.

I've verified that:

  • the desired template is working (when I create a VM manually in XO and use the template + the given cloud config, it works)
  • the cloud config itself must be working as well (due to previous point)
  • Terraform can create a working cloud config

What I've tried:

  • manually creating the cloud config and fetching it with data "xenorchestra_cloud_config" "cc" and using that
  • creating the cloud config with Terraform (resource "xenorchestra_cloud_config" "cc") and using that
  • using hashicorp/cloudinit provider (data "cloudinit_config" "cloudinit_config"), passing the config into part.content and using that
  • inlining the cloud config directly into xenorchestra_vm.cloud_config with newlines or <<EOF ... EOF

Nothing seems to work. I've tried running terraform apply with TF_LOG_PROVIDER=DEBUG and noticed that the cloud config I supplied made it all the way to the RPC call:

(I tried to format this with line breaks)
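
The run that produced the log below looked roughly like this (TF_LOG_PROVIDER and terraform apply are from the report; the redirection and grep are my additions for convenience, and the commands are guarded so the snippet is safe to paste anywhere):

```shell
# Capture provider-level debug logs during apply; Terraform writes its
# log stream to stderr, so redirect that to a file and search it
export TF_LOG_PROVIDER=DEBUG
if command -v terraform >/dev/null 2>&1; then
  terraform apply -auto-approve 2> provider-debug.log || true
  grep "vm.create" provider-debug.log || true
else
  echo "terraform not installed; skipping" > provider-debug.log
fi
```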

2023-02-26T01:31:42.221+0100 [INFO]  provider.terraform-provider-xenorchestra_v0.24.0: 2023/02/26 01:31:42 [TRACE] Made rpc call `vm.create` with params: map[CPUs:1 VDIs:[] VIFs:[map[mac: network:93f364f7-387a-8565-08c6-5f775d9fc95c]] 
affinityHost: bootAfterCreate:true 
cloudConfig:#cloud-config
hostname: {name}%
packages:
- htop
growpart:
  mode: auto
  devices: ['/']
  ignore_growroot_disabled: false 
coreOs:false 
cpuCap:<nil> 
cpuWeight:<nil> 
existingDisks:map[0:map[$SR:6a1ecde0-b5a2-bc27-91bc-49b463570450 SR:6a1ecde0-b5a2-bc27-91bc-49b463570450 name_description:OS name_label:hdd size:50212254720 type:user]] 
expNestedHvm:false 
hvmBootFirmware:bios 
memoryMax:1073733632 
name_description:popisek, lalala laalla 
name_label:krolda - novější2 
tags:[terraform-test] 
template:8db93d20-ee94-9b52-d899-6d27128b41e4 vga:std videoram:8] 
and received 80f72175-162a-887d-4162-5ce9d325140c: result with error: <nil>: timestamp=2023-02-26T01:31:42.221+0100

However a bit later when we're waiting for the VM to be created, I'm receiving logs like this (notice CloudConfig: ResourceSet:<nil>):

2023-02-26T01:31:42.376+0100 [INFO]  provider.terraform-provider-xenorchestra_v0.24.0: 2023/02/26 01:31:42 [DEBUG] Found the following objects for type 'client.Vm' from xo.getAllObjects:
 [{Addresses:map[0/ipv4/0:192.168.1.168 0/ipv6/0:fe80::9451:90ff:fe02:6b07] 
BlockedOperations:map[] 
Boot:{Firmware:bios} Type:VM Id:80f72175-162a-887d-4162-5ce9d325140c 
AffinityHost: 
NameDescription:popisek, lalala laalla
 NameLabel:krolda - novější2 
CPUs:{Number:1 Max:2} 
ExpNestedHvm:false 
Memory:{Dynamic:[1073733632 1073733632] Static:[536870912 2147483648] Size:1073733632} 
PowerState:Halted 
VIFs:[a183688b-d4d8-3a50-05d3-4d0a775eca5d] 
VBDs:[34f04a3c-c06d-2058-60ea-65189e6d7ff9 34af4ff6-b17b-0be3-93a2-466f8e84d162] 
VirtualizationMode:hvm 
PoolId:601f08a4-0b01-6f12-1b89-1b6346f09ddf 
Template: 
AutoPoweron:true 
HA: 
CloudConfig: ResourceSet:<nil> 
Tags:[terraform-test] 
Videoram:{Value:8} 
Vga:std 
StartDelay:0 
Host:601f08a4-0b01-6f12-1b89-1b6346f09ddf 
Disks:[] 
CloudNetworkConfig: 
VIFsMap:[] 
WaitForIps:false 
Installation:{Method: Repository:} 
ManagementAgentDetected:false 
PVDriversDetected:false}]: timestamp=2023-02-26T01:31:42.376+0100

The result is:

  • the machine is created properly
  • the XO CloudConfigDrive is created
  • the cloud config never runs during boot, and nothing written in it gets executed
  • log files /var/log/cloud-init[-output].log obviously don't exist either
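
A quick way to verify those last two points from inside the guest (commands guarded so they are safe to run on any host; `cloud-init status` is part of the stock cloud-init CLI):

```shell
# Check whether cloud-init ran on the last boot; on a VM where it never
# started, the status reports "not run" and the log files are absent
if command -v cloud-init >/dev/null 2>&1; then
  status=$(cloud-init status --long 2>&1 || true)
else
  status="cloud-init binary not found"
fi
echo "$status"
ls -l /var/log/cloud-init*.log 2>/dev/null || echo "no cloud-init logs present"
```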

System info:

  • Terraform version v1.3.8
  • terra-farm/xenorchestra version 0.24.0
  • XenOrchestra is self hosted
    • xo-server 5.109.3
    • xo-web 5.111.1
    • commit ee837

Attempted Cloud config:

#cloud-config
hostname: {name}%
packages:
  - htop
growpart:
  mode: auto
  devices: ['/']
  ignore_growroot_disabled: false

Terraform file:

# Instruct terraform to download the provider on `terraform init`
terraform {
  required_providers {
    xenorchestra = {
      source = "terra-farm/xenorchestra"
    }
    cloudinit = {
      source = "hashicorp/cloudinit"
    }
  }
}

# https://registry.terraform.io/providers/terra-farm/xenorchestra/latest/docs

# Configure the XenServer Provider
provider "xenorchestra" {
  #   # Must be ws or wss
  #   url      = "ws://hostname-of-server" # Or set XOA_URL environment variable
  #   username = "<username>"              # Or set XOA_USER environment variable
  #   password = "<password>"              # Or set XOA_PASSWORD environment variable

  #   # This is false by default and
  #   # will disable ssl verification if true.
  #   # This is useful if your deployment uses
  #   # a self signed certificate but should be
  #   # used sparingly!
  insecure = true # Or set XOA_INSECURE environment variable to any value
}

data "xenorchestra_pool" "pool" {
  name_label = "..."
}

data "xenorchestra_template" "template" {
  name_label = "..."
  pool_id    = data.xenorchestra_pool.pool.id
}

data "xenorchestra_network" "net" {
  name_label = "Pool-wide network associated with eth0"
  pool_id    = data.xenorchestra_pool.pool.id

}

data "xenorchestra_sr" "local_storage" {
  name_label = "Local storage"
  pool_id    = data.xenorchestra_pool.pool.id
  tags       = ["s4"]
}

data "xenorchestra_cloud_config" "cloud_config" {
  name = "test123"
}

data "cloudinit_config" "cloudinit_config" {
  gzip = false
  base64_encode = false
  part {
    content_type = "text/cloud-config"
    content = templatefile("cloud_config.tftpl", {
      hostname = "your-hostname"
      domain = "your.domain.com"
    })
  }
}

resource "xenorchestra_cloud_config" "bar" {
  name = "cloud config name"
  # Template the cloudinit if needed
  template = templatefile("cloud_config.tftpl", {
    hostname = "your-hostname"
    domain = "your.domain.com"
  })
}

resource "xenorchestra_vm" "bar" {
  memory_max = 1073733632
  cpus       = 1
  cloud_config = xenorchestra_cloud_config.bar.template
  name_label       = "name"
  name_description = "label"
  template         = data.xenorchestra_template.template.id
  auto_poweron  = true

  # Prefer to run the VM on the primary pool instance
  # affinity_host = data.xenorchestra_pool.pool.master
  network {
    network_id = data.xenorchestra_network.net.id
  }

  disk {
    sr_id            = data.xenorchestra_sr.local_storage.id
    name_label       = "hdd"
    name_description = "OS"
    size             = 50212254720
  }

  tags = [
    "terraform-test",
  ]

  // Override the default create timeout from 5 mins to 20.
  timeouts {
    create = "20m"
  }
}

ddelnano commented Mar 6, 2023

Hey @sMteX, sorry for the late reply.

Can you please share the content of the cloud_config.tftpl file?

Xen Orchestra handles cloud-init configuration slightly differently from the Terraform provider because it performs some templating on the client side (JavaScript in the web UI). I noticed the cloud config you referenced uses those client-side features ({name} and %).

Have you tried a cloud config that doesn't use those features? That might result in cloud-init thinking the config file is malformed.

log files /var/log/cloud-init[-output].log obviously don't exist either

As for the cloud-init logs missing, I'm surprised that these files don't exist. The times I've debugged issues with cloud-init not running the way I expect, the logs would be there but wouldn't contain the output that I expected.

Can you confirm that it's running on boot in this failed scenario? Even if the nocloud data drive is missing, it should still be able to run.
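
From inside the guest, something like this can answer that question (unit names from stock cloud-init packaging; everything is guarded so it degrades gracefully on hosts without systemd):

```shell
# List cloud-init's systemd units and pull its journal for the current boot
if command -v systemctl >/dev/null 2>&1; then
  systemctl list-units --all 'cloud-*' --no-pager || true
  journalctl -b -u cloud-init --no-pager 2>/dev/null | tail -n 20 || true
else
  echo "systemd tools not available on this host"
fi
checked=done
```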


sMteX commented Mar 6, 2023

Hey @ddelnano, thanks for getting back!

Can you please share the content of the cloud_config.tftpl file?

In fact, what I posted there is the content of the cloud_config.tftpl file. But regarding your concern, I've also tried a cloud config that only had something like

#cloud-config
hostname: test-machine-1

(and fetching that with Terraform and using it in the VM) with the same result, so while the original config may be broken, I don't think it's the culprit.

Can you confirm that it's running on boot in this failed scenario? Even if the nocloud data drive is missing, it should still be able to run.

As I've mentioned, I originally tried to create a VM manually in XO with the same template and config, just to see how it looks while booting. It gets past the Loading ramdisk screen and, just before showing the login prompt, it shows some cloud-init output.

That was on the manually created VM which seemed to work just fine. None of that showed on the VMs created with Terraform. No errors, nothing, just straight into login prompt. I don't think the cloud-init even runs on the Terraform VMs and I can't figure out why.

My only other thought is that even though I've installed cloud-init on the template with apt update && apt install cloud-init cloud-utils cloud-initramfs-growpart, maybe the default config isn't correct (but then again, it worked with the manually created VM). I'll try one more time and post the /etc/cloud/cloud.cfg here.


sMteX commented Mar 6, 2023

I've tried creating a new template, this time also with cloud-guest-utils and cloud-image-utils (also, I don't know if I've mentioned it, but I'm using Debian 11.6 for the template). After installing these packages, the contents of /etc/cloud/cloud.cfg are:

# The top level settings are used as module
# and system configuration.

# A set of users which may be applied and/or used by various modules
# when a 'default' entry is found it will reference the 'default_user'
# from the distro configuration specified below
users:
   - default

# If this is set, 'root' will not be able to ssh in and they 
# will get a message to login instead as the above $user (debian)
disable_root: true

# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: false

# This prevents cloud-init from rewriting apt's sources.list file,
# which has been a source of surprise.
apt_preserve_sources_list: true

# Example datasource config
# datasource: 
#    Ec2: 
#      metadata_urls: [ 'blah.com' ]
#      timeout: 5 # (defaults to 50 seconds)
#      max_wait: 10 # (defaults to 120 seconds)

# The modules that run in the 'init' stage
cloud_init_modules:
 - migrator
 - seed_random
 - bootcmd
 - write-files
 - growpart
 - resizefs
 - disk_setup
 - mounts
 - set_hostname
 - update_hostname
 - update_etc_hosts
 - ca-certs
 - rsyslog
 - users-groups
 - ssh

# The modules that run in the 'config' stage
cloud_config_modules:
# Emit the cloud config ready event
# this can be used by upstart jobs for 'start on cloud-config'.
 - emit_upstart
 - ssh-import-id
 - locale
 - set-passwords
 - grub-dpkg
 - apt-pipelining
 - apt-configure
 - ntp
 - timezone
 - disable-ec2-metadata
 - runcmd
 - byobu

# The modules that run in the 'final' stage
cloud_final_modules:
 - package-update-upgrade-install
 - fan
 - puppet
 - chef
 - salt-minion
 - mcollective
 - rightscale_userdata
 - scripts-vendor
 - scripts-per-once
 - scripts-per-boot
 - scripts-per-instance
 - scripts-user
 - ssh-authkey-fingerprints
 - keys-to-console
 - phone-home
 - final-message
 - power-state-change

# System and/or distro specific settings
# (not accessible to handlers/transforms)
system_info:
   # This will affect which distro class gets used
   distro: debian
   # Default user name + that default users groups (if added/used)
   default_user:
     name: debian
     lock_passwd: True
     gecos: Debian
     groups: [adm, audio, cdrom, dialout, dip, floppy, netdev, plugdev, sudo, video]
     sudo: ["ALL=(ALL) NOPASSWD:ALL"]
     shell: /bin/bash
   # Other config here will be given to the distro class and/or path classes
   paths:
      cloud_dir: /var/lib/cloud/
      templates_dir: /etc/cloud/templates/
      upstart_dir: /etc/init/
   package_mirrors:
     - arches: [default]
       failsafe:
         primary: http://deb.debian.org/debian
         security: http://security.debian.org/
   ssh_svcname: ssh

One thing I've yet to try is whether Debian itself is the culprit, so I'm setting up an Ubuntu image to see if the problem persists there.


ddelnano commented Mar 6, 2023

In fact, what I posted there is the content of the cloud_config.tftpl file, but regarding your concern, I've tried also cloud config which only had something like

Thanks for confirming. I thought that was the case, but didn't want to assume.

My only other thought is that even though I've installed cloud-init on the template with apt update && apt install cloud-init cloud-utils cloud-initramfs-growpart, maybe the default config isn't correct (but then again, it worked with the manually created VM). I'll try one more time and post the /etc/cloud/cloud.cfg here.

I thought in both cases you were using a cloudinit ready VM image? Is that not the case?

Have you checked the /etc/cloud/cloud.cfg.d/ for any conflicting config that might be disabling what you want to use? I recently made a VM template and needed to purge one of the files in there to get cloudinit working. Unfortunately, I don't have access to my home network now so I can't confirm what specific file that was.
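
A rough way to scan that directory for suspects (the grep patterns here are only guesses at common culprits, not an exhaustive list, and the whole thing is guarded so it is safe on machines without cloud-init):

```shell
# Scan cloud.cfg.d for drop-ins that might disable a datasource,
# pin the hostname, or otherwise turn off the features being used
if [ -d /etc/cloud/cloud.cfg.d ]; then
  found=$(grep -rln -e datasource_list -e preserve_hostname -e disable /etc/cloud/cloud.cfg.d/ || true)
  echo "${found:-no suspicious drop-ins found}"
else
  found=""
  echo "/etc/cloud/cloud.cfg.d not present on this host"
fi
```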

Since we are still skeptical that cloud-init is ever running, can you try to identify what systemd unit (or similar construct) is launching cloud-init and find those logs? Looking at the cloud-init package for Ubuntu focal, it appears these systemd config files would be of interest

/lib/systemd/system-generators/cloud-init-generator
/lib/systemd/system/cloud-config.service
/lib/systemd/system/cloud-config.target
/lib/systemd/system/cloud-final.service
/lib/systemd/system/cloud-init-local.service
/lib/systemd/system/cloud-init.service
/lib/systemd/system/cloud-init.target


ddelnano commented Mar 6, 2023

/etc/cloud/cloud.cfg.d/99-installer.cfg was the file that was causing me trouble. This file prevents cloud-init from changing the instance's hostname and growing the root partition. This blog gives more details on this
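
When sealing a template, the offending drop-in can be purged with something like the following sketch (destructive, so it only makes sense inside the template VM; harmless if the file doesn't exist):

```shell
# Remove the installer drop-in that pins the hostname and disables
# growpart (file name from the comment above)
rm -f /etc/cloud/cloud.cfg.d/99-installer.cfg 2>/dev/null || true
if [ -f /etc/cloud/cloud.cfg.d/99-installer.cfg ]; then
  echo "99-installer.cfg still present (insufficient permissions?)"
else
  echo "99-installer.cfg absent"
fi
```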


sMteX commented Mar 6, 2023

I thought in both cases you were using a cloudinit ready VM image? Is that not the case?

I mean I was using a template that I created myself. It's a Debian 11.6 install with updated packages and the aforementioned packages installed; installing them creates the cloud.cfg I pasted earlier. Then I stopped the VM and converted it into the template I'm trying to use with Terraform.

Have you checked the /etc/cloud/cloud.cfg.d/ for any conflicting config that might be disabling what you want to use?

Inside the template, after installing cloud-init on Debian, there are no conflicting files (there's 05_logging.cfg and 00_debian.cfg, which change syslog_fix_perms and mount_default_fields). The file you mentioned isn't there (though I did find it while trying to configure Ubuntu, which I've yet to try).

All the /lib/systemd/* files you mentioned are present on the template. However, after creating a VM with Terraform with that template, they're no longer there. I'm not sure if that's what's expected or not.

EDIT: Just for reference, when I tried the Ubuntu template I made (just installed Ubuntu Server, which appears to already ship the cloud-init binary), I at least saw cloud-init output during boot, so I'm inclined to think the provider isn't at fault here. Still, it didn't install htop or change the hostname, which is weird to say the least.


ddelnano commented Mar 7, 2023

All the /lib/systemd/* files you mentioned are present on the template. However, after creating a VM with Terraform with that template, they're no longer there. I'm not sure if that's what's expected or not.

This doc explains more on the different stages of the boot process for cloud-init and systemd is a large part of that. So those missing unit files seem problematic.

Are the systemd generator and units mentioned in the docs above running when the instance is created through the XO UI? It would be interesting to see the kernel command line in the working and non working case as well (/proc/cmdline).

Cloud-init also tries to determine if the current boot is the first or a later reboot (docs). I would also check to see if your template has anything cached in it that is causing it to think it doesn't need to run.
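
Both checks above can be sketched as follows (the DRY_RUN guard is my addition so the snippet is safe to paste; `cloud-init clean` is the documented way to reset first-boot state):

```shell
# Compare the kernel command line between a working and a non-working VM
cat /proc/cmdline 2>/dev/null || echo "/proc/cmdline not readable"

# Reset cached instance state so the next boot counts as a first boot.
# DRY_RUN=1 by default; set DRY_RUN=0 inside the template VM to actually
# run the (destructive) clean before converting it to a template.
DRY_RUN=${DRY_RUN:-1}
if [ "$DRY_RUN" = 1 ]; then
  echo "would run: cloud-init clean --logs"
elif command -v cloud-init >/dev/null 2>&1; then
  cloud-init clean --logs
fi
```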

I'm also not sure that the terraform provider is the issue here. From my anecdotal experience, creating these templates has many moving parts and it's difficult to pinpoint the underlying cause at times.


sMteX commented Mar 14, 2023

@ddelnano

My apologies for the radio silence last week. We initially thought we'd found the reason (the host we were trying to spin the VMs up on had, for some reason, duplicates of the same network interfaces, and it worked on a different host), but after reinstalling the faulty machine the issue still persists.

(screenshot: 2023-03-10_12-26)

Currently the same procedure with cloud-init only seems to work on one host out of 3 (coincidentally, the pool's master), and we've got no clue what could be different between those hosts. Ultimately we've come to the (at least temporary) decision that it's not worth the effort to keep trying to get it working (as there isn't any obvious thing wrong), since we can accomplish more or less the same with Ansible; it's just not as automatic.

Thank you for the help, and if we ever find the cause, I'll try to remember this thread and reply for anyone else facing a similar problem.

@ddelnano

No worries, and it makes sense that it wasn't worth the investment to continue debugging. I wish we had been able to get to the bottom of this, but if you do find the solution and remember to follow up, that would be great.

For now I'm going to close this since there isn't any active lead to follow. If it becomes important for you again or someone else is interested in debugging this further, we can reopen this in the future.


TheiLLeniumStudios commented May 5, 2023

Hi,
I believe I'm running into a very similar issue as above, but while setting up a Talos cluster. I have 2 hosts in a pool, each with its own local SR. I've created a VM template for Talos using the nocloud image; the template is stored in a shared SR connected to both hosts in the pool.

When creating VMs, the user-data in the cloud config disk attached to VMs on the 1st host (master) is read correctly and the Talos node is provisioned accordingly. However, all the VMs deployed on the 2nd host fail to provision using cloud config. There also seems to be a difference in the size of the XO CloudConfigDrive connected to these VMs: the ones attached to the 1st host (master) have the correct files inside the exported XO CloudConfigDrive VHD. See the following images:

  1. Exported CloudConfigDrive from both VMs as VHD from XO (screenshot): the one containing (1) is from host 2 and the one with (2) is from host 1

  2. Contents of CloudConfigDrive exported from host 2 (screenshot)

  3. Contents of CloudConfigDrive exported from host 1 (screenshot)

Here is how I'm generating the cloud config using talos:

resource "talos_machine_secrets" "machine_secrets" {
  talos_version = var.talos_version
}

data "talos_machine_configuration" "controlplane" {
  count              = var.master_count
  cluster_name       = var.talos_cluster_name
  machine_type       = "controlplane"
  cluster_endpoint   = "https://${var.talos_vip}:6443"
  machine_secrets    = talos_machine_secrets.machine_secrets.machine_secrets
  talos_version      = var.talos_version
  kubernetes_version = var.kubernetes_version
  config_patches = [
    templatefile("${path.module}/templates/controlplanepatch.yaml.tmpl", {
      vip           = var.talos_vip
      hostname      = "${var.talos_cluster_name}-master-${count.index + 1}"
      ip            = var.master_ips[count.index]
      gateway       = var.gateway_ip
      nameserver    = var.nameserver
      talos_version = var.talos_version
    })
  ]
}

How I'm passing it to the VM:

data "xenorchestra_pool" "pool" {
  name_label = var.xo_pool
}

data "xenorchestra_hosts" "hosts" {
  pool_id = data.xenorchestra_pool.pool.id

  sort_by    = "name_label"
  sort_order = "asc"
}

data "xenorchestra_sr" "local_storage" {
  count      = length(data.xenorchestra_hosts.hosts.hosts)
  name_label = format("%s %s", split(".", data.xenorchestra_hosts.hosts.hosts[count.index].name_label)[0], var.xo_storage_tier)
  pool_id    = data.xenorchestra_pool.pool.id
}

data "xenorchestra_template" "template" {
  name_label = var.xo_vm_template
  pool_id    = data.xenorchestra_pool.pool.id
}

data "xenorchestra_network" "net" {
  name_label = var.xo_vm_network
  pool_id    = data.xenorchestra_pool.pool.id
}

resource "xenorchestra_vm" "controlplane" {
  count      = var.master_count
  memory_max = var.vm_memory * 1024 * 1024 * 1024
  cpus       = var.vm_cpu
  name_label = "${var.talos_cluster_name}-master-${count.index + 1}"
  template   = data.xenorchestra_template.template.id

  cloud_config = data.talos_machine_configuration.controlplane[count.index].machine_configuration

  affinity_host = data.xenorchestra_hosts.hosts.hosts[count.index % length(data.xenorchestra_hosts.hosts.hosts)].id

  network {
    network_id = data.xenorchestra_network.net.id
    # mac_address = var.master_macs[count.index]
  }

  disk {
    sr_id      = data.xenorchestra_sr.local_storage[count.index % length(data.xenorchestra_sr.local_storage)].id
    name_label = "${var.talos_cluster_name}-master-${count.index + 1}-disk1"
    size       = var.vm_disk * 1024 * 1024 * 1024
  }

  tags = [
    var.talos_cluster_name,
    "controlplane"
  ]
}

@ddelnano any help on this would be appreciated

@TheiLLeniumStudios

I'm basically allocating the host via the affinity_host parameter based on the index, so all the ones with the -2 suffix have the faulty cloud config drive.
(screenshot)

@TheiLLeniumStudios

I tried using the same VM template for Talos and created a VM manually with the desired cloud config via XO on host 2 (non-master), and Talos bootstrapped using the config just fine. That makes me think this is not a host issue but rather some sort of misconfiguration during the vm.create RPC call, where wrong parameters are somehow passed in when the affinity_host differs from the pool master, but I cannot confirm or validate that this is the case.


TheiLLeniumStudios commented May 5, 2023

Here is a diff of the master-2 and master-3 vm.create DEBUG logs: https://www.diffchecker.com/Cp6BSIeO/
master-3 (scheduled on host-1, which is the master) boots fine and detects the cloud config, while master-2 (scheduled on host-2, the 2nd host in the pool) fails to detect it.

@TheiLLeniumStudios

After some debugging I also found what is happening, though I have no idea why. This warning gets generated on XO when the CloudConfigDrive is being created (note that originalUrl points at the pool master, 10.0.0.12, while the final url targets 10.0.0.13):

2023-05-05T19:47:44.457Z xo:xapi WARN importVdiContent:  {
  error: Error: 404 Not Found
      at Object.assertSuccess (/home/node/xen-orchestra/node_modules/http-request-plus/index.js:138:19)
      at httpRequestPlus (/home/node/xen-orchestra/node_modules/http-request-plus/index.js:205:22)
      at Xapi.putResource (/home/node/xen-orchestra/packages/xen-api/src/index.js:508:22)
      at Xapi.importContent (/home/node/xen-orchestra/@xen-orchestra/xapi/vdi.js:138:7)
      at Xapi.createCloudInitConfigDrive (file:///home/node/xen-orchestra/packages/xo-server/src/xapi/index.mjs:1332:5)
      at Xo.<anonymous> (file:///home/node/xen-orchestra/packages/xo-server/src/api/vm.mjs:211:11)
      at Api.#callApiMethod (file:///home/node/xen-orchestra/packages/xo-server/src/xo-mixins/api.mjs:417:20) {
    originalUrl: 'https://10.0.0.12/import_raw_vdi/?format=raw&vdi=OpaqueRef%3A39c9699e-85a7-c6b1-60b8-27ad51c05d2e&session_id=OpaqueRef%3A1d0c35c4-1107-aea1-02c9-352d7f6a0a5c&task_id=OpaqueRef%3Ab6a7c99f-b882-98a4-2de7-9f77a53a26c3',
    url: 'https://10.0.0.13/import_raw_vdi/?format=raw&vdi=OpaqueRef:39c9699e-85a7-c6b1-60b8-27ad51c05d2e&session_id=OpaqueRef:1d0c35c4-1107-aea1-02c9-352d7f6a0a5c&task_id=OpaqueRef:b6a7c99f-b882-98a4-2de7-9f77a53a26c3',
    pool_master: host {
      uuid: 'd1e916dc-8f8a-4aec-98c1-206c4c8144b0',
      name_label: 'minisforum-hm80-01.servers.illenium.gg',
      name_description: 'Default install',
      memory_overhead: 1161101312,
      allowed_operations: [Array],
      current_operations: [Object],
      API_version_major: 2,
      API_version_minor: 20,
      API_version_vendor: 'XenSource',
      API_version_vendor_implementation: {},
      enabled: true,
      software_version: [Object],
      other_config: [Object],
      capabilities: [Array],
      cpu_configuration: {},
      sched_policy: 'credit',
      supported_bootloaders: [Array],
      resident_VMs: [Array],
      logging: {},
      PIFs: [Array],
      suspend_image_sr: 'OpaqueRef:b90d54ba-44e7-8181-a4a2-6593276e73cd',
      crash_dump_sr: 'OpaqueRef:b90d54ba-44e7-8181-a4a2-6593276e73cd',
      crashdumps: [],
      patches: [],
      updates: [],
      PBDs: [Array],
      host_CPUs: [Array],
      cpu_info: [Object],
      hostname: 'minisforum-hm80-01.servers.illenium.gg',
      address: '10.0.0.12',
      metrics: 'OpaqueRef:655a27bd-de06-75cb-d231-8460d949d70a',
      license_params: [Object],
      ha_statefiles: [],
      ha_network_peers: [],
      blobs: {},
      tags: [],
      external_auth_type: '',
      external_auth_service_name: '',
      external_auth_configuration: {},
      edition: 'xcp-ng',
      license_server: [Object],
      bios_strings: [Object],
      power_on_mode: '',
      power_on_config: {},
      local_cache_sr: 'OpaqueRef:b90d54ba-44e7-8181-a4a2-6593276e73cd',
      chipset_info: [Object],
      PCIs: [Array],
      PGPUs: [Array],
      PUSBs: [Array],
      ssl_legacy: false,
      guest_VCPUs_params: {},
      display: 'enabled',
      virtual_hardware_platform_versions: [Array],
      control_domain: 'OpaqueRef:58fb5c9b-18c4-6578-803a-d9b8db989ec9',
      updates_requiring_reboot: [],
      features: [],
      iscsi_iqn: 'iqn.2023-03.gg.illenium.servers:6e94cf19',
      multipathing: false,
      uefi_certificates: '',
      certificates: [Array],
      editions: [Array],
      pending_guidances: [],
      tls_verification_enabled: true,
      last_software_update: '19700101T00:00:00Z',
      https_only: false
    },
    SR: SR {
      uuid: '8c7e88d7-25b4-02e2-5a4b-6e1e3ead70cf',
      name_label: 'minisforum-hm80-02 SSD',
      name_description: '',
      allowed_operations: [Array],
      current_operations: {},
      VDIs: [Array],
      PBDs: [Array],
      virtual_allocation: 45573210112,
      physical_utilisation: 290025472,
      physical_size: 901115478016,
      type: 'ext',
      content_type: 'user',
      shared: false,
      other_config: [Object],
      tags: [],
      sm_config: [Object],
      blobs: {},
      local_cache_enabled: true,
      introduced_by: 'OpaqueRef:NULL',
      clustered: false,
      is_tools_sr: false
    },
    VDI: VDI {
      uuid: '681a20be-6684-4480-8b7b-abfdb32ccdfb',
      name_label: 'XO CloudConfigDrive',
      name_description: '',
      allowed_operations: [Array],
      current_operations: {},
      SR: 'OpaqueRef:13a829ed-3408-f42e-695e-f14adf469fb3',
      VBDs: [],
      crash_dumps: [],
      virtual_size: 10485760,
      physical_utilisation: 3584,
      type: 'user',
      sharable: false,
      read_only: false,
      other_config: {},
      storage_lock: false,
      location: '681a20be-6684-4480-8b7b-abfdb32ccdfb',
      managed: true,
      missing: false,
      parent: 'OpaqueRef:NULL',
      xenstore_data: {},
      sm_config: {},
      is_a_snapshot: false,
      snapshot_of: 'OpaqueRef:NULL',
      snapshots: [],
      snapshot_time: '19700101T00:00:00Z',
      tags: [],
      allow_caching: false,
      on_boot: 'persist',
      metadata_of_pool: '',
      metadata_latest: false,
      is_tools_iso: false,
      cbt_enabled: false
    }
  }
}

It seems to be coming from here: https://github.com/vatesfr/xen-orchestra/blob/master/packages/xo-server/src/xapi/index.mjs#L1332-L1334


ddelnano commented May 8, 2023

@TheiLLeniumStudios thanks for the extremely detailed report and glad to hear that the XO team is working on the fix!

@TheiLLeniumStudios

@ddelnano the problem has been fixed in this commit: vatesfr/xen-orchestra@01ba10f

I just tested it out and the cloud configs are created properly for all the VMs that are scheduled on the Slaves 🥳
