
Support multiple hypervisors on each virtualization node #3259

Open · 7 tasks

dann1 opened this issue Apr 23, 2019 · 4 comments

Comments

dann1 (Contributor) commented Apr 23, 2019

Description
A Linux host can run KVM and LXD simultaneously, acting as a virtualization node that deploys both containers and VMs. Currently, using a single OpenNebula node as both a KVM and an LXD host has several limitations:

  • You need to add the node twice, which is cumbersome (see the sketch after this list).
  • This method reports false datacenter metrics for the number of nodes and duplicates data unnecessarily, since the CPU and RAM usage of both "nodes" is measured on the same Linux host.
  • In large deployments this escalates quickly, since every node reports twice as much data.
  • The drivers in /var/tmp/one/ get overwritten when the node is added the second time, which can destroy any local changes an admin has made on the virtualization node.
  • The scheduler can mismatch deployments: it can place KVM VMs on LXD nodes and LXD containers on KVM nodes.
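
For illustration, a hedged sketch of the current double-registration workaround, using hypothetical host names that both resolve to the same physical machine (driver names vary by OpenNebula version):

onehost create node01-kvm --im kvm --vm kvm
onehost create node01-lxd --im lxd --vm lxd
# The second registration re-pushes the drivers to /var/tmp/one/ on the node,
# overwriting any local changes, and the same machine now appears twice in monitoring.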

Use case
Properly set up a single node as both a KVM and an LXD virtualization node.

Interface Changes
There could be many changes, since the vmm driver used when deploying a container is selected based on the destination node, not on whether the VM template declares the VM to be a container or a regular VM. Wild VMs would also need to be classified.
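
For context, each host currently carries exactly one monitoring and one virtualization driver, so the driver follows the host rather than the VM. An illustrative (not verbatim) query; the output layout varies between versions:

$ onehost show node01 | grep -E 'IM_MAD|VM_MAD'
IM_MAD : kvm
VM_MAD : kvm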

Additional Context
Proxmox treats its virtualization nodes this way, clearly differentiating a container from a VM. In the case of OpenNebula, it would mean marking the hypervisor setting in the VM template as a required field and selecting the vmm drivers based on it.
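
A minimal sketch of what that could look like in a VM template. The required HYPERVISOR attribute below is hypothetical (the proposal above); the SCHED_REQUIREMENTS line is the closest working approximation today, since hosts already report a HYPERVISOR attribute through monitoring:

# Proposed (hypothetical) required attribute driving vmm driver selection
HYPERVISOR = "kvm"

# Closest approximation today: pin the VM to hosts of one hypervisor type
SCHED_REQUIREMENTS = "HYPERVISOR=\"kvm\""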

Progress Status

  • Branch created
  • Code committed to development branch
  • Testing - QA
  • Documentation
  • Release notes - resolved issues, compatibility, known issues
  • Code committed to upstream release/hotfix branches
  • Documentation committed to upstream release/hotfix branches
@vholer vholer changed the title Unify LXD and KVM hipervisors as Linux hypervisor Unify LXD and KVM hypervisors as Linux hypervisor Feb 17, 2020
@rsmontero rsmontero changed the title Unify LXD and KVM hypervisors as Linux hypervisor Support multiple hipervisors on each virtualization node May 5, 2021
@rsmontero rsmontero added this to the Release 6.2 milestone May 5, 2021
@rsmontero rsmontero removed this from the Release 6.2 milestone Sep 6, 2021
@dann1 dann1 changed the title Support multiple hipervisors on each virtualization node Support multiple hypervisors on each virtualization node Apr 19, 2022

Franco-Sparrow commented May 20, 2023

@dann1 I would love to see this feature come true. It would be awesome to have multiple hypervisors converge on the same host without the problems you detailed above. I understand that, to prevent those problems, the team established these dependencies (conflicts) on the hypervisor packages so that multiple hypervisors cannot be installed on the same host, but this is something the competition has done, and I am sure that OpenNebula could do it as well. Having KVM, LXC and Firecracker on the same host in OpenNebula: I hope to see it at least for ON 7.0.0.

Keep up the hard work 💪

Githopp192 commented

I saw this good post:

https://opennebula.io/blog/experiences/using-lxd-and-kvm-on-the-same-host/

Then I tried installing Firecracker on a RHEL 8.9 system (an OpenNebula KVM node). Let's see:

rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm

Retrieving https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
warning: /var/tmp/rpm-tmp.Ioitr4: Header V4 RSA/SHA256 Signature, key ID 2f86d6a1: NOKEY

Verifying... ################################# [100%]
Preparing... ################################# [100%]
Updating / installing...
1:epel-release-8-19.el8 ################################# [100%]
Many EPEL packages require the CodeReady Builder (CRB) repository.
It is recommended that you run /usr/bin/crb enable to enable the CRB repository.
[root@nextcentos log]# /usr/bin/crb enable
Enabling CRB repo
Repository 'codeready-builder-for-rhel-8-x86_64-rpms' is enabled for this system.
CRB repo is enabled and named: codeready-builder-for-rhel-8-x86_64-rpms

dnf install opennebula-node-firecracker

Updating Subscription Management repositories.
Red Hat CodeReady Linux Builder for RHEL 8 x86_64 (RPMs) 1.2 MB/s | 8.8 MB 00:07
Last metadata expiration check: 0:00:07 ago on Tue 19 Dec 2023 07:04:03 PM CET.
Error:
Problem: problem with installed package opennebula-node-kvm-6.6.1.1-1.el8.noarch

  • package opennebula-node-kvm-6.6.1.1-1.el8.noarch from @System conflicts with opennebula-node-firecracker provided by opennebula-node-firecracker-6.6.1.1-1.el8.x86_64 from opennebula
  • package opennebula-node-firecracker-6.6.1.1-1.el8.x86_64 from opennebula conflicts with opennebula-node-kvm provided by opennebula-node-kvm-6.6.1.1-1.el8.noarch from @System
  • package opennebula-node-firecracker-6.6.1.1-1.el8.x86_64 from opennebula conflicts with opennebula-node-kvm provided by opennebula-node-kvm-6.6.1.1-1.el8.noarch from opennebula
  • package opennebula-node-kvm-6.6.1.1-1.el8.noarch from opennebula conflicts with opennebula-node-firecracker provided by opennebula-node-firecracker-6.6.1.1-1.el8.x86_64 from opennebula
  • conflicting requests
    (try to add '--allowerasing' to command line to replace conflicting packages)
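
The conflict above is declared in the packages' own metadata. A quick way to confirm this (standard rpm query; the exact list depends on the package version):

rpm -q --conflicts opennebula-node-kvm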


kitatek commented May 9, 2024

Hello,

I am aiming to launch KVM guests on ONE 6.8 LXC testbed hardware to complement the current ONE LXC limitations (unprivileged containers prevent a desktop container)...

It fails at the installation of opennebula-node-kvm:

$ sudo  apt-get -y install opennebula-node-kvm
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
  bindfs libarchive-tools libfuse2 liblxc-common liblxc1 libpam-cgfs libvncserver1 lxc lxc-utils lxcfs uidmap
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libnbd-bin libnbd0
The following packages will be REMOVED:
  opennebula-node-lxc
The following NEW packages will be installed:
  libnbd-bin libnbd0 opennebula-node-kvm
0 upgraded, 3 newly installed, 1 to remove and 15 not upgraded.
Need to get 136 kB of archives.
After this operation, 208 kB of additional disk space will be used.
Get:1 https://downloads.opennebula.io/repo/6.8/Ubuntu/22.04 stable/opennebula amd64 opennebula-node-kvm all 6.8.0-1 [11.9 kB]
Get:2 http://fr.archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
Get:3 http://fr.archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd-bin amd64 1.10.5-1 [52.8 kB]
Fetched 136 kB in 1s (211 kB/s)  
(Reading database ... 139271 files and directories currently installed.)
Removing opennebula-node-lxc (6.8.0-1) ...
rmdir: failed to remove '/var/lib/lxc-one': Directory not empty
dpkg: error processing package opennebula-node-lxc (--remove):
 installed opennebula-node-lxc package post-removal script subprocess returned error exit status 1
dpkg: too many errors, stopping
Errors were encountered while processing:
 opennebula-node-lxc
Processing was halted because there were too many errors.
E: Sub-process /usr/bin/dpkg returned an error code (1)
$ 
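
A hedged workaround sketch for this specific failure, assuming the leftovers under /var/lib/lxc-one are disposable on a testbed (destructive; inspect and back up first):

sudo ls -lA /var/lib/lxc-one   # inspect what the post-removal script could not rmdir
sudo rm -rf /var/lib/lxc-one   # assumption: nothing in it is still needed
sudo apt-get -f install        # let dpkg finish the interrupted removal and install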

A single-host testbed for ONE makes a lot of sense when trying out ONE with KVM, LXC, and Firecracker, for example.

It would greatly ease the evaluation work prior to ONE Open Cluster adoption. Open had better be ... open :)

I strongly support enabling this possibility.


kitatek commented May 9, 2024

Partial, simple support would be perfectly OK as a first stage, for evaluation only (e.g. requiring several names for the same IP address; see the sketch below).
A manual recipe would also work as an intermediate step to allow such testing and evaluation, until the solution eventually makes it in as a supported case.
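
A hedged sketch of the "several names for one IP" idea, with hypothetical names and addresses; each alias can then be registered as a separate OpenNebula host with its own im/vm drivers, as in the double-registration sketch earlier in this thread:

# /etc/hosts on the front-end
192.168.1.10  node01-kvm node01-lxc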
