Support Ceph disk (former Can't setup connect KVM server) #22

Open
steven3579 opened this issue Jun 6, 2024 · 11 comments
Labels
enhancement New feature or request

Comments

@steven3579

steven3579 commented Jun 6, 2024

I performed the setup to connect the KVM and CloudStack servers, but I'm currently encountering the error shown in the image. Has anyone else experienced a similar situation? Please guide me on how to fix this error. Thank you, Team
2024-06-06_10-47

  • Logs container core
    2024-06-06_11-47
@nabdoul

nabdoul commented Jun 6, 2024

Hi Steven,

Welcome to the project 😊

Could you please run the following command on your Backroll machine and send us the output?

sudo git log

Also, have you added the complete SSH key to your hypervisor (~/.ssh/authorized_keys)? To get the complete key, you need to scroll down in the Backroll SSH key view.

Best regards,
Navid

@steven3579
Author

steven3579 commented Jun 6, 2024

Hi Navid,
Thank you for your response. Here is the log from the command you mentioned. I tried SSH directly from the core container to the KVM server, and it worked. However, the UI still shows the same error as before.
Screenshot from 2024-06-06 18-01-07

@nabdoul

nabdoul commented Jun 7, 2024

Hi @steven3579 ,

I've managed to reproduce your error:

Here's the solution to resolve the issue:

In the hypervisor, you need to modify the file /etc/ssh/sshd_config.

Add or uncomment the following line:

PubkeyAcceptedAlgorithms +ssh-rsa

Then, restart the service:

sudo service ssh restart

I hope this solution will help you resolve the problem.

Best regards,
Navid

@steven3579
Author

Hi @nabdoul
It's working, thank you.

@steven3579 steven3579 reopened this Jun 9, 2024
@steven3579
Author

steven3579 commented Jun 9, 2024

Hi @nabdoul
After I connected to the KVM host and tried to run a backup, it didn't work. In my current setup, the VM disks are located on shared storage using Ceph. Here is the error image.
Additionally, I checked my CloudStack server and there is no option to enable this Backroll plugin. Could you provide me with any documentation on integrating CloudStack and Backroll?

Backup framework provider plugin: backroll
Backup plugin backroll config appname: name of the app used for the Backroll API
Backup plugin backroll config password: secret for the backroll_api, found in your OAuth provider
Backup plugin backroll config url: URL of your Backroll instance

Screenshot from 2024-06-09 21-45-22

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/celery/app/trace.py", line 451, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/celery/app/trace.py", line 734, in __protected_call__
    return self.run(*args, **kwargs)
  File "/usr/src/app/app/backup_tasks/single_backup.py", line 161, in single_vm_backup
    raise backup_error
  File "/usr/src/app/app/backup_tasks/single_backup.py", line 154, in single_vm_backup
    raise startbackup_error
  File "/usr/src/app/app/backup_tasks/single_backup.py", line 150, in single_vm_backup
    backup_result = backup_creation(virtual_machine_info)
  File "/usr/src/app/app/backup_tasks/single_backup.py", line 137, in backup_creation
    raise sequence_error
  File "/usr/src/app/app/backup_tasks/single_backup.py", line 133, in backup_creation
    return backup_sequence(info, host_info)
  File "/usr/src/app/app/backup_tasks/single_backup.py", line 43, in backup_sequence
    virtual_machine['storage'] = kvm_list_disk.getDisk(info, host_info)
  File "/usr/src/app/app/kvm/kvm_list_disk.py", line 45, in getDisk
    json.append({'device': device, 'source': source})
UnboundLocalError: local variable 'source' referenced before assignment
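For context on the final line of that traceback: in Python, a variable assigned only inside a conditional branch is still local to the whole function, so reading it when the branch never ran raises UnboundLocalError. A minimal, hypothetical illustration (not the project's code):

```python
# Hypothetical illustration of the UnboundLocalError above: 'source' is only
# assigned when a matching node is seen, so reading it afterwards fails when
# no matching node was found.
def parse(nodes):
    for node in nodes:
        if node == 'source':
            source = 'found'
    return source  # raises UnboundLocalError when 'source' was never assigned

try:
    parse(['target'])  # no 'source' node, like an RBD-backed disk
except UnboundLocalError as exc:
    print('parse failed:', exc)
```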

@JoffreyLuang
Contributor

Hi Steven,

The CloudStack plugin is not yet released.
You have a Backroll connector menu in Backroll which allows you to connect Backroll to your CloudStack. This way, Backroll can perform restores on CloudStack VMs and back up halted VMs. Since halted VMs are not available on hosts, we query CloudStack to retrieve the metadata.

As for the error you encountered, I am checking with the team and will get back to you shortly.

@steven3579
Author

Hi @JoffreyLuang
I will wait for information about the above error. Thank you

@steven3579
Author

Hi @JoffreyLuang
Could you please update me on the latest information regarding the issue I'm experiencing? I'm still waiting for your response. Thank you

@m-dhellin
Collaborator

Hello @steven3579,

Sorry for the late response. Here is where the error is raised:

def getDisk(virtual_machine, hypervisor):
    conn = kvm_connection.kvm_connection(hypervisor)
    json = []
    print(virtual_machine)
    dom = conn.lookupByName(virtual_machine['name'])
    raw_xml = dom.XMLDesc(0)
    xml = minidom.parseString(raw_xml)
    disk_types = xml.getElementsByTagName('disk')
    for disk_type in disk_types:
        if (disk_type.getAttribute('device') != 'cdrom'):
            disk_nodes = disk_type.childNodes
            for disk_node in disk_nodes:
                if disk_node.nodeName[0:1] != '#':
                    if (disk_node.nodeName == 'target'):
                        for attr in disk_node.attributes.keys():
                            if (disk_node.attributes[attr].name == 'dev'):
                                device = disk_node.attributes[attr].value
                    if (disk_node.nodeName == 'source'):
                        for attr in disk_node.attributes.keys():
                            if (disk_node.attributes[attr].name == 'file'):
                                source = disk_node.attributes[attr].value
            json.append({'device': device, 'source': source})
    conn.close()
    return json

Based on the error in the logs you provided, I understand that the XML description of the VM has no source node. Can you provide the XML description of your VM? The node may be missing, or we may have to update our parsing function.
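A minimal sketch of how the parsing could be extended (an assumption, not the project's actual fix): for network disks such as RBD, the source element carries a name attribute instead of file, so the loop above never assigns source. Checking both attributes avoids the UnboundLocalError:

```python
from xml.dom import minidom

# Hypothetical sketch (not the project's code): extract disk targets from a
# libvirt domain XML, handling both file-backed and network (e.g. RBD) sources.
def list_disks(raw_xml):
    disks = []
    xml = minidom.parseString(raw_xml)
    for disk in xml.getElementsByTagName('disk'):
        if disk.getAttribute('device') == 'cdrom':
            continue
        device = source = None
        for node in disk.childNodes:
            if node.nodeName == 'target':
                device = node.getAttribute('dev')
            elif node.nodeName == 'source':
                # file-backed disks use 'file'; RBD disks use 'name'
                source = node.getAttribute('file') or node.getAttribute('name')
        if device and source:
            disks.append({'device': device, 'source': source})
    return disks

sample = """<domain><devices>
<disk type='network' device='disk'>
  <source protocol='rbd' name='backup/9becc35c'/>
  <target dev='vda' bus='virtio'/>
</disk>
</devices></domain>"""
print(list_disks(sample))  # → [{'device': 'vda', 'source': 'backup/9becc35c'}]
```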

@steven3579
Author

steven3579 commented Aug 28, 2024

Hello @m-dhellin
This is the content of my XML VM file.
dumpxml --domain i-2-478-VM

<domain type='kvm' id='68'>
  <name>i-2-478-VM</name>
  <uuid>a64b5202-51f3-49a2-a721-3024463d2c6d</uuid>
  <description>Ubuntu 22.04 LTS</description>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <shares>642</shares>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Apache Software Foundation</entry>
      <entry name='product'>CloudStack KVM Hypervisor</entry>
      <entry name='uuid'>a64b5202-51f3-49a2-a721-3024463d2c6d</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-6.2'>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>qemu64</model>
    <feature policy='require' name='x2apic'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='lahf_lm'/>
    <feature policy='disable' name='svm'/>
  </cpu>
  <clock offset='utc'>
    <timer name='kvmclock'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <auth username='admin'>
        <secret type='ceph' uuid='2d26efa8-890b-3aa3-8d4c-2496d2d86b6a'/>
      </auth>
      <source protocol='rbd' name='backup/9becc35c-a6fa-4234-b1dd-6e74c27eb291' index='2'>
        <host name='x.x.x.89'/>
        <host name='x.x.x.90'/>
        <host name='x.x.x.91'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <serial>600dd17d19a54d9b97fd</serial>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='02:08:01:07:00:05'/>
      <source bridge='brens224-2908'/>
      <target dev='vnet93'/>
      <model type='virtio'/>
      <link state='up'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/5'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/5'>
      <source path='/dev/pts/5'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/i-2-478-VM.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5904' autoport='yes' listen='x.x.x.x'>
      <listen type='address' address='x.x.x.x'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <watchdog model='i6300esb' action='none'>
      <alias name='watchdog0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </watchdog>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+0</label>
    <imagelabel>+0:+0</imagelabel>
  </seclabel>
</domain>
If you have any documentation on Backroll integration when using KVM with Ceph disks, could you share it with me?
I hope to hear from you soon. Thank you, Team

@m-dhellin
Collaborator

m-dhellin commented Aug 28, 2024

Hello @steven3579,

Thank you for your quick response. Here is the part of the XML that is extracted by the function:

<disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <auth username='admin'>
        <secret type='ceph' uuid='2d26efa8-890b-3aa3-8d4c-2496d2d86b6a'/>
    </auth>
    <source protocol='rbd' name='backup/9becc35c-a6fa-4234-b1dd-6e74c27eb291' index='2'>
        <host name='x.x.x.89'/>
        <host name='x.x.x.90'/>
        <host name='x.x.x.91'/>
    </source>
    <target dev='vda' bus='virtio'/>
    <serial>600dd17d19a54d9b97fd</serial>
    <alias name='virtio-disk0'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>

Indeed, Ceph RBD storage is not supported by Backroll.

We could use the Python library for Ceph and RBD to connect Backroll to your Ceph cluster, but it seems we would need access to your secret for authentication.

I am thinking about another approach:

  • on our side:
    • parse the source name and look for this path in something like /mnt/backroll/ceph_fs/
    • provide a depth-independent restore
  • on your side:
    • mount the VM disk on the Backroll host using CephFS in /mnt/backroll/ceph_fs

Does this approach work for you? Can you manage to mount the disk on the Backroll host?
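To make the idea concrete, here is a tiny sketch of the mapping step (the mount point comes from the proposal above; the helper name is purely illustrative):

```python
import os

# Assumed mount point from the proposal above; purely illustrative.
CEPH_FS_MOUNT = '/mnt/backroll/ceph_fs'

def rbd_source_to_path(source_name, mount=CEPH_FS_MOUNT):
    # An RBD source like 'backup/9becc35c-...' ('pool/image') would map to
    # a path under the CephFS mount on the Backroll host.
    return os.path.join(mount, source_name.lstrip('/'))

path = rbd_source_to_path('backup/9becc35c-a6fa-4234-b1dd-6e74c27eb291')
print(path)  # → /mnt/backroll/ceph_fs/backup/9becc35c-a6fa-4234-b1dd-6e74c27eb291
```

Backroll would then check whether that path exists before backing up the disk, and ask the operator to mount the CephFS share if it does not.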

@m-dhellin m-dhellin added the enhancement New feature or request label Aug 28, 2024
@m-dhellin m-dhellin changed the title Can't setup connect KVM server Support Ceph disk (former Can't setup connect KVM server) Aug 28, 2024