VM not starting, get Hotplug.device.timeout #88

Closed
ddimension opened this issue Sep 11, 2018 · 9 comments

@ddimension

Hi,
I've done a fresh install of xcp-ng 7.5 and then installed RBDSR with the netinstall script. After that I ran the following commands:

xe sr-introduce name-label="CEPH RBD Storage" type=rbdsr uuid=bd939283-6d37-4659-ad43-bbc2a8f8eafb shared=true content-type=user
xe pbd-create sr-uuid=bd939283-6d37-4659-ad43-bbc2a8f8eafb host-uuid=902ed625-e84a-4769-885d-9c24f2ea90b9 device-config:cluster=ceph device-config:image-format=raw device-config:datapath=qdisk
xe pbd-plug uuid=46fc9df6-b355-9e3d-4f3c-fca3c7b3f082
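
A quick way to confirm the PBD actually attached (a sketch using standard xe syntax, with the uuid from the pbd-plug above):

xe pbd-list uuid=46fc9df6-b355-9e3d-4f3c-fca3c7b3f082 params=currently-attached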

But if I now start the VM, it hangs with the error above.
SMlog gives me the following:

Sep 11 20:40:07 blade12 SMAPIv3: [9084] - INFO - called as: ['/usr/libexec/xapi-storage-script/volume/org.xen.xapi.storage.rbdsr/Volume.stat', '--json']
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: Volume.stat: SR: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb Key: 42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: librbd.Volume.stat: SR: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb Key: 42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.connect: Cluster_name: ceph
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.connect: conf_file: /etc/ceph/ceph.conf
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.connect: librados version: 0.69.1
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.connect: will attempt to connect to: node1
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.connect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: meta.MetadataHandler.load: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: meta.RBDMetadataHandler._load: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.connect: Cluster_name: ceph
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.connect: conf_file: /etc/ceph/ceph.conf
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.connect: librados version: 0.69.1
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.connect: will attempt to connect to: node1
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.connect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:07 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: rbd_utils.retrieveImageMeta: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf Pool: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: meta.RBDMetadataHandler._load: Image: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1 Metadata: {u'uuid': '42178673-c749-4d0c-9315-293fe69265a1', u'read_write': True, u'qemu_qmp_sock': '/var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_qmp_log': '/var/run/qmp_log.42178673-c749-4d0c-9315-293fe69265a1', u'name': 'TestDisk', u'vdi_type': 'raw', u'active_on': '902ed625-e84a-4769-885d-9c24f2ea90b9', u'keys': {u'vdi-type': u'user'}, u'qemu_nbd_sock': '/var/run/qemu-nbd.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_pid': 7344, u'uri': [u'rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1'], u'physical_utilisation': 0, u'key': '42178673-c749-4d0c-9315-293fe69265a1', u'sharable': False, u'virtual_size': 21474836480, u'description': ' '}
Sep 11 20:40:08 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.disconnect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:08 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: rbd_utils.getPhysicalUtilisation: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf Name: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: librbd.Volume.stat: SR: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb Key: 42178673-c749-4d0c-9315-293fe69265a1 Metadata: {u'uuid': '42178673-c749-4d0c-9315-293fe69265a1', u'read_write': True, u'qemu_qmp_sock': '/var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_qmp_log': '/var/run/qmp_log.42178673-c749-4d0c-9315-293fe69265a1', u'name': 'TestDisk', u'vdi_type': 'raw', u'active_on': '902ed625-e84a-4769-885d-9c24f2ea90b9', u'keys': {u'vdi-type': u'user'}, u'qemu_nbd_sock': '/var/run/qemu-nbd.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_pid': 7344, u'uri': [u'rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1'], u'physical_utilisation': 21474836480L, u'key': '42178673-c749-4d0c-9315-293fe69265a1', u'sharable': False, u'virtual_size': 21474836480, u'description': ' '}
Sep 11 20:40:08 blade12 SMAPIv3: [9084] - DEBUG - dp_destroy: ceph_utils.disconnect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - INFO - called as: ['/usr/libexec/xapi-storage-script/datapath/rbd+raw+qdisk/Datapath.deactivate', '--json']
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: Datapath.deactivate: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1 domain: 0
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: librbd.Datapath.deactivate: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1 domain: 0
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: librbd.QdiskDatapath._deactivate: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1 domain: 0
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: librbd.QdiskDatapath._load_qemu_dp: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1 domain: 0
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: meta.MetadataHandler.load: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: meta.RBDMetadataHandler._load: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.connect: Cluster_name: ceph
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.connect: conf_file: /etc/ceph/ceph.conf
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.connect: librados version: 0.69.1
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.connect: will attempt to connect to: node1
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.connect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: rbd_utils.retrieveImageMeta: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf Pool: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: meta.RBDMetadataHandler._load: Image: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1 Metadata: {u'uuid': '42178673-c749-4d0c-9315-293fe69265a1', u'read_write': True, u'qemu_qmp_sock': '/var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_qmp_log': '/var/run/qmp_log.42178673-c749-4d0c-9315-293fe69265a1', u'name': 'TestDisk', u'vdi_type': 'raw', u'active_on': '902ed625-e84a-4769-885d-9c24f2ea90b9', u'keys': {u'vdi-type': u'user'}, u'qemu_nbd_sock': '/var/run/qemu-nbd.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_pid': 7344, u'uri': [u'rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1'], u'physical_utilisation': 0, u'key': '42178673-c749-4d0c-9315-293fe69265a1', u'sharable': False, u'virtual_size': 21474836480, u'description': ' '}
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.disconnect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: qemudisk.introduce: sr_uuid: bd939283-6d37-4659-ad43-bbc2a8f8eafb vdi_uuid: 42178673-c749-4d0c-9315-293fe69265a1 vdi_type: raw pid: 7344 qmp_sock: /var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1 nbd_sock: /var/run/qemu-nbd.42178673-c749-4d0c-9315-293fe69265a1 qmp_log: /var/run/qmp_log.42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: qemudisk.Qemudisk.__init__: sr_uuid: bd939283-6d37-4659-ad43-bbc2a8f8eafb vdi_uuid: 42178673-c749-4d0c-9315-293fe69265a1 vdi_type: raw pid: 7344 qmp_sock: /var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1 nbd_sock: /var/run/qemu-nbd.42178673-c749-4d0c-9315-293fe69265a1 qmp_log: /var/run/qmp_log.42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: qemudisk.Qemudisk.close: vdi_uuid 42178673-c749-4d0c-9315-293fe69265a1 pid 7344 qmp_sock /var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: Running cmd ['/usr/bin/xenstore-write', '/local/domain/2/device/vbd/768/state', '5']
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: meta.MetadataHandler.update: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: meta.RBDMetadataHandler._update_meta: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1 image_meta: {'active_on': None}
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.connect: Cluster_name: ceph
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.connect: conf_file: /etc/ceph/ceph.conf
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.connect: librados version: 0.69.1
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.connect: will attempt to connect to: node1
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.connect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: rbd_utils.updateMetadata: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf Name: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1 Metadata: {'active_on': None}
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: rbd_utils.updateMetadata: tag: active_on remove value
Sep 11 20:40:08 blade12 SMAPIv3: [9125] - DEBUG - dp_destroy: ceph_utils.disconnect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - INFO - called as: ['/usr/libexec/xapi-storage-script/volume/org.xen.xapi.storage.rbdsr/Volume.stat', '--json']
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: Volume.stat: SR: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb Key: 42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: librbd.Volume.stat: SR: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb Key: 42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.connect: Cluster_name: ceph
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.connect: conf_file: /etc/ceph/ceph.conf
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.connect: librados version: 0.69.1
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.connect: will attempt to connect to: node1
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.connect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: meta.MetadataHandler.load: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: meta.RBDMetadataHandler._load: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.connect: Cluster_name: ceph
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.connect: conf_file: /etc/ceph/ceph.conf
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.connect: librados version: 0.69.1
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.connect: will attempt to connect to: node1
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.connect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: rbd_utils.retrieveImageMeta: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf Pool: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: meta.RBDMetadataHandler._load: Image: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1 Metadata: {u'uuid': '42178673-c749-4d0c-9315-293fe69265a1', u'read_write': True, u'qemu_qmp_sock': '/var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_qmp_log': '/var/run/qmp_log.42178673-c749-4d0c-9315-293fe69265a1', u'description': ' ', u'vdi_type': 'raw', u'keys': {u'vdi-type': u'user'}, u'qemu_nbd_sock': '/var/run/qemu-nbd.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_pid': 7344, u'uri': [u'rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1'], u'physical_utilisation': 0, u'key': '42178673-c749-4d0c-9315-293fe69265a1', u'sharable': False, u'virtual_size': 21474836480, u'name': 'TestDisk'}
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.disconnect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: rbd_utils.getPhysicalUtilisation: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf Name: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: librbd.Volume.stat: SR: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb Key: 42178673-c749-4d0c-9315-293fe69265a1 Metadata: {u'uuid': '42178673-c749-4d0c-9315-293fe69265a1', u'read_write': True, u'qemu_qmp_sock': '/var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_qmp_log': '/var/run/qmp_log.42178673-c749-4d0c-9315-293fe69265a1', u'description': ' ', u'vdi_type': 'raw', u'keys': {u'vdi-type': u'user'}, u'qemu_nbd_sock': '/var/run/qemu-nbd.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_pid': 7344, u'uri': [u'rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1'], u'physical_utilisation': 21474836480L, u'key': '42178673-c749-4d0c-9315-293fe69265a1', u'sharable': False, u'virtual_size': 21474836480, u'name': 'TestDisk'}
Sep 11 20:40:08 blade12 SMAPIv3: [9163] - DEBUG - dp_destroy: ceph_utils.disconnect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:08 blade12 SMAPIv3: [9204] - INFO - called as: ['/usr/libexec/xapi-storage-script/datapath/rbd+raw+qdisk/Datapath.detach', '--json']
Sep 11 20:40:08 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: Datapath.detach: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1 domain: 0
Sep 11 20:40:08 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: librbd.Datapath.detach: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1 domain: 0
Sep 11 20:40:08 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: librbd.QdiskDatapath._detach: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1 domain: 0
Sep 11 20:40:08 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: librbd.QdiskDatapath._load_qemu_dp: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1 domain: 0
Sep 11 20:40:08 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: meta.MetadataHandler.load: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: meta.RBDMetadataHandler._load: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:08 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.connect: Cluster_name: ceph
Sep 11 20:40:08 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.connect: conf_file: /etc/ceph/ceph.conf
Sep 11 20:40:08 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.connect: librados version: 0.69.1
Sep 11 20:40:08 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.connect: will attempt to connect to: node1
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.connect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: rbd_utils.retrieveImageMeta: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf Pool: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: meta.RBDMetadataHandler._load: Image: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1 Metadata: {u'uuid': '42178673-c749-4d0c-9315-293fe69265a1', u'read_write': True, u'qemu_qmp_sock': '/var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_qmp_log': '/var/run/qmp_log.42178673-c749-4d0c-9315-293fe69265a1', u'description': ' ', u'vdi_type': 'raw', u'keys': {u'vdi-type': u'user'}, u'qemu_nbd_sock': '/var/run/qemu-nbd.42178673-c749-4d0c-9315-293fe69265a1', u'qemu_pid': 7344, u'uri': [u'rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1'], u'physical_utilisation': 0, u'key': '42178673-c749-4d0c-9315-293fe69265a1', u'sharable': False, u'virtual_size': 21474836480, u'name': 'TestDisk'}
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.disconnect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: qemudisk.introduce: sr_uuid: bd939283-6d37-4659-ad43-bbc2a8f8eafb vdi_uuid: 42178673-c749-4d0c-9315-293fe69265a1 vdi_type: raw pid: 7344 qmp_sock: /var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1 nbd_sock: /var/run/qemu-nbd.42178673-c749-4d0c-9315-293fe69265a1 qmp_log: /var/run/qmp_log.42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: qemudisk.Qemudisk.__init__: sr_uuid: bd939283-6d37-4659-ad43-bbc2a8f8eafb vdi_uuid: 42178673-c749-4d0c-9315-293fe69265a1 vdi_type: raw pid: 7344 qmp_sock: /var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1 nbd_sock: /var/run/qemu-nbd.42178673-c749-4d0c-9315-293fe69265a1 qmp_log: /var/run/qmp_log.42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: qemudisk.Qemudisk.quit: vdi_uuid 42178673-c749-4d0c-9315-293fe69265a1 pid 7344 qmp_sock /var/run/qmp_sock.42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: meta.MetadataHandler.update: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: meta.RBDMetadataHandler._update_meta: uri: rbd+raw+qdisk://ceph/bd939283-6d37-4659-ad43-bbc2a8f8eafb/42178673-c749-4d0c-9315-293fe69265a1 image_meta: {'qemu_qmp_sock': None, 'qemu_qmp_log': None, 'qemu_nbd_sock': None, 'qemu_pid': None}
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.connect: Cluster_name: ceph
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.connect: conf_file: /etc/ceph/ceph.conf
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.connect: librados version: 0.69.1
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.connect: will attempt to connect to: node1
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.connect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: rbd_utils.updateMetadata: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf Name: RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1 Metadata: {'qemu_qmp_sock': None, 'qemu_qmp_log': None, 'qemu_nbd_sock': None, 'qemu_pid': None}
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: rbd_utils.updateMetadata: tag: qemu_qmp_sock remove value
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: rbd_utils.updateMetadata: tag: qemu_qmp_log remove value
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: rbd_utils.updateMetadata: tag: qemu_nbd_sock remove value
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: rbd_utils.updateMetadata: tag: qemu_pid remove value
Sep 11 20:40:09 blade12 SMAPIv3: [9204] - DEBUG - dp_destroy: ceph_utils.disconnect: Cluster ID: 7a10ed6b-f688-4dba-b60c-d2dceda62faf
Sep 11 20:40:11 blade12 snapwatchd: [2486] XS-PATH -> /vss/a769891b-bc31-2928-f5cd-6e84f79488d8

Do you have any idea where the error is?
BTW, I could map the RBD image on one of the ceph servers and fill it with a working image without any problem (of course I needed to disable some image features first).
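
For reference, a sketch of that manual mapping (pool and image names taken from the SMlog above; the exact feature set to disable depends on the kernel rbd client):

rbd feature disable RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1 object-map fast-diff deep-flatten
rbd map RBD_XenStorage-bd939283-6d37-4659-ad43-bbc2a8f8eafb/RAW-42178673-c749-4d0c-9315-293fe69265a1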

Kind regards,

André

@rposudnevskiy
Owner

Hi André,
You need to install qemu-dp from the extras_testing repository.
With the default installation of xcp-ng 7.5, qemu-dp does not support rbd.
https://github.com/xcp-ng/xcp/wiki/Ceph-on-XCP-ng-7.5-or-later
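
A minimal sketch of that install, assuming the stock xcp-ng 7.5 repo definitions (the repo ids appear verbatim later in this thread):

yum --enablerepo=xcp-ng-extras_testing update qemu-dp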

Also, you can read this
https://xcp-ng.org/forum/topic/4/ceph-on-xcp-ng

@ddimension
Author

ddimension commented Sep 13, 2018

Hi!
Thanks for your quick answer. I wasn't clear about my installation steps: I enabled the repos like this:

[xcp-ng-base]
name=XCP-ng Base Repository
baseurl=https://updates.xcp-ng.org/7/7.5/base/x86_64/
enabled=1
gpgcheck=0

[xcp-ng-updates]
name=XCP-ng Updates Repository
baseurl=https://updates.xcp-ng.org/7/7.5/updates/x86_64/
enabled=1
gpgcheck=0

[xcp-ng-extras]
name=XCP-ng Extras Repository
baseurl=https://updates.xcp-ng.org/7/7.5/extras/x86_64/
enabled=0
gpgcheck=0

[xcp-ng-updates_testing]
name=XCP-ng Updates Testing Repository
baseurl=https://updates.xcp-ng.org/7/7.5/updates_testing/x86_64/
enabled=0
gpgcheck=0

[xcp-ng-extras_testing]
name=XCP-ng Extras Testing Repository
baseurl=https://updates.xcp-ng.org/7/7.5/extras_testing/x86_64/
enabled=1
gpgcheck=0

A yum update then pulled in the qemu-dp update. This is what I have on disk:

[root@blade12 ~]# rpm -ql qemu-dp
/usr/lib64/qemu-dp/bin
/usr/lib64/qemu-dp/bin/qemu-dp
/usr/lib64/qemu-dp/bin/qemu-img
/usr/lib64/qemu-dp/bin/qemu-io
/usr/lib64/qemu-dp/bin/qemu-nbd
[root@blade12 ~]# rpm -qa qemu-dp
qemu-dp-2.10.2-1.2.0.extras.x86_64

So I hope I'm not missing something.

Kind regards,
André

@ddimension
Author

Hi!
It seems qemu-dp is properly opening connections to the ceph mons and osds:
tcp 0 0 xcp-ng-cilatfwb:56718 node1.intern.marca:6800 ESTABLISHED 27282/qemu-dp
tcp 0 0 xcp-ng-cilatfwb:47710 node2.intern.marca:6800 ESTABLISHED 27282/qemu-dp
tcp 0 0 xcp-ng-cilatfwb:38272 admin1.intern.marc:6800 ESTABLISHED 27282/qemu-dp
tcp 0 0 xcp-ng-cilatfwb:49768 admin1.intern:smc-https ESTABLISHED 27282/qemu-dp
unix 2 [ ] DGRAM 984694 27282/qemu-dp
unix 3 [ ] STREAM CONNECTED 988594 27282/qemu-dp

But I still cannot understand why it hangs. I see this error in the strace output:
27282 bind(34, {sa_family=AF_LOCAL, sun_path="/var/run/qemu-nbd.3475cd94-8e0f-411f-a46b-611afc46a56c"}, 110) = 0
27282 listen(34, 1) = 0
27282 getpeername(34, 0x55951ad0dba0, [128]) = -1 ENOTCONN (Transport endpoint is not connected)
27282 getsockname(34, {sa_family=AF_LOCAL, sun_path="/var/run/qemu-nbd.3475cd94-8e0f-411f-a46b-611afc46a56c"}, [57]) = 0
.....
27282 write(2, "xen be: qdisk-768: ", 19) = 19
27282 write(2, "watching frontend path (/local/domain/2/device/vbd/768) failed\n", 63) = 63
27282 sendmsg(20, {msg_name(0)=NULL, msg_iov(1)=[{"{\"return\": {}}\r\n", 16}], msg_controllen=0, msg_flags=0}, 0) = 16
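
The frontend path from that failed watch can be inspected directly in dom0 (a diagnostic sketch; domain 2 and vbd 768 are taken from the trace above):

xenstore-ls /local/domain/2/device/vbd/768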

Perhaps you could send me logs of a working SMAPIv3 installation so I can compare them?

@rposudnevskiy
Owner

Hi,
Could you please check which version of glibc is installed?
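
For instance (one straightforward way to check):

rpm -qa glibc glibc-common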

@ddimension
Author

Hi!
Sorry for the delay, these are installed:
glibc-common-2.17-106.el7_2.4.x86_64
glibc-2.17-106.el7_2.4.x86_64

I've checked the yum.log and these are the packages that were installed after the base installation:
Sep 19 23:28:43 Installed: centos-release-storage-common-2-2.el7.centos.noarch
Sep 19 23:28:43 Installed: centos-release-ceph-luminous-1.1-2.el7.centos.noarch
Sep 19 23:29:27 Updated: libnl3-3.2.28-4.el7.x86_64
Sep 19 23:29:27 Installed: userspace-rcu-0.10.0-3.el7.x86_64
Sep 19 23:29:27 Installed: lttng-ust-2.10.0-1.el7.x86_64
Sep 19 23:29:28 Installed: rdma-core-15-6.el7.x86_64
Sep 19 23:29:28 Installed: libibverbs-15-6.el7.x86_64
Sep 19 23:29:29 Installed: 2:librados2-12.2.5-0.el7.x86_64
Sep 19 23:29:30 Installed: 2:librbd1-12.2.5-0.el7.x86_64
Sep 19 23:29:30 Installed: 2:python-rados-12.2.5-0.el7.x86_64
Sep 19 23:29:30 Installed: 2:python-rbd-12.2.5-0.el7.x86_64
Sep 19 23:29:30 Installed: 2:rbd-nbd-12.2.5-0.el7.x86_64
Sep 19 23:52:16 Updated: vhd-tool-0.20.0-4.3.xcp.el7.centos.x86_64
Sep 19 23:52:17 Updated: xen-hypervisor-4.7.5-5.5.1.xcp.x86_64
Sep 19 23:52:17 Updated: xen-libs-4.7.5-5.5.1.xcp.x86_64
Sep 19 23:52:17 Updated: xen-dom0-libs-4.7.5-5.5.1.xcp.x86_64
Sep 19 23:52:17 Updated: xen-tools-4.7.5-5.5.1.xcp.x86_64
Sep 19 23:52:18 Updated: xen-dom0-tools-4.7.5-5.5.1.xcp.x86_64
Sep 19 23:52:18 Updated: 2:qemu-dp-2.10.2-1.2.0.extras.x86_64
Sep 19 23:52:21 Updated: xapi-tests-1.90.6-1.x86_64
Sep 19 23:52:31 Updated: xapi-core-1.90.6-1.x86_64
Sep 19 23:52:39 Updated: QConvergeConsoleCLI-Citrix-2.0.00-24.3.xcp.x86_64
Sep 19 23:52:47 Updated: linux-firmware-20170622-3.2.noarch
Sep 19 23:52:48 Updated: xcp-ng-center-7.5.0.8-3.noarch
Sep 19 23:52:49 Updated: xapi-xe-1.90.6-1.x86_64
Sep 19 23:52:49 Updated: 2:microcode_ctl-2.1-26.xs1.x86_64
Sep 19 23:52:57 Updated: kernel-4.4.52-4.0.7.1.x86_64

Kind regards,
André

@ddimension
Author

Hi!

Thank you very much for the glibc hint. I just installed the newer build from CentOS. It is the same upstream version, so the API should also be the same.
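
A sketch of that update, assuming a CentOS base repo definition is configured on the host (the "From repo : base" line below shows where the package came from):

yum --enablerepo=base update glibc glibc-common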

@rposudnevskiy
Owner

Hi,
Did you update glibc to 2.17-222?
Does qemu-dp work now?

@ddimension
Author

Hi Roman,

Yes, I upgraded it to 2.17-222.el7:
Name : glibc
Arch : x86_64
Version : 2.17
Release : 222.el7
Size : 14 M
Repo : installed
From repo : base

And it works perfectly. I am still testing performance and availability.
BTW:
Perhaps you could update your readme.md so that it is clear how to set up a shared SR. If I understand it correctly, the storage repository is currently bound to a single host, not to the whole cluster.

@rposudnevskiy
Owner

To set up a shared SR you can use shared=true in the xe sr-create or xe sr-introduce command.
There is a typo in the current readme.md. I will fix it.
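
For example (a sketch mirroring the sr-introduce and pbd-create parameters from the top of this thread):

xe sr-create name-label="CEPH RBD Storage" type=rbdsr shared=true content-type=user device-config:cluster=ceph device-config:image-format=raw device-config:datapath=qdisk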

rposudnevskiy added a commit that referenced this issue Oct 12, 2018
- Update glibc to 2.17-222.el7