HA error with 7.5 #49

Closed
txsastre opened this issue Aug 14, 2018 · 47 comments
Comments

@txsastre

I already tested XCP-ng 7.4 and 7.4.1 with no issues.

I installed 7.5 from scratch and I cannot enable HA. I tried with NFS and FC, and also with GFS2 and LVM.

The log from /var/log/SMlog:

Aug 14 12:07:20 xcp-ng-1 SM: [31960]     if self._deactivate_locked(sr_uuid, vdi_uuid, caching_params):
Aug 14 12:07:20 xcp-ng-1 SM: [31960]   File "/opt/xensource/sm/blktap2.py", line 83, in wrapper
Aug 14 12:07:20 xcp-ng-1 SM: [31960]     ret = op(self, *args)
Aug 14 12:07:20 xcp-ng-1 SM: [31960]   File "/opt/xensource/sm/blktap2.py", line 1666, in _deactivate_locked
Aug 14 12:07:20 xcp-ng-1 SM: [31960]     self._remove_tag(vdi_uuid)
Aug 14 12:07:20 xcp-ng-1 SM: [31960]   File "/opt/xensource/sm/blktap2.py", line 1452, in _remove_tag
Aug 14 12:07:20 xcp-ng-1 SM: [31960]     assert sm_config.has_key(host_key)
Aug 14 12:07:20 xcp-ng-1 SM: [31960]
Aug 14 12:07:20 xcp-ng-1 SM: [31960] lock: closed /var/lock/sm/b7b8b6a2-6326-4402-9f95-96135bb1d2f7/vdi
Aug 14 12:07:20 xcp-ng-1 SM: [31960] lock: closed /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:51 xcp-ng-1 SM: [32437] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:08:51 xcp-ng-1 SM: [32437] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:08:51 xcp-ng-1 SM: [32437] lock: opening lock file /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:51 xcp-ng-1 SM: [32437] LVMCache created for VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb
Aug 14 12:08:51 xcp-ng-1 SM: [32437] ['/sbin/vgs', 'VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:08:51 xcp-ng-1 SM: [32437]   pread SUCCESS
Aug 14 12:08:51 xcp-ng-1 SM: [32437] lock: acquired /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:51 xcp-ng-1 SM: [32437] LVMCache: will initialize now
Aug 14 12:08:51 xcp-ng-1 SM: [32437] LVMCache: refreshing
Aug 14 12:08:51 xcp-ng-1 SM: [32437] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:08:51 xcp-ng-1 SM: [32437]   pread SUCCESS
Aug 14 12:08:51 xcp-ng-1 SM: [32437] lock: released /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:51 xcp-ng-1 SM: [32437] Entering _checkMetadataVolume
Aug 14 12:08:51 xcp-ng-1 SM: [32437] vdi_generate_config {'sr_uuid': 'e6c74423-9152-b3b9-f503-efccde0e3edb', 'subtask_of': 'OpaqueRef:5b5a6883-145d-404a-b9e3-e0715b227549', 'vdi_ref': 'OpaqueRef:94e45b2c-d99f-44e9-a4b5-e9660c99ecb4', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'ed1ae7c2-a5cb-4989-8bd2-7e351b77c835', 'host_ref': 'OpaqueRef:e4c37665-3b24-4b25-9da3-cf3567000e4b', 'session_ref': 'OpaqueRef:7f0ea971-e941-448f-a4a8-26af782c34a3', 'device_config': {'device': '/dev/disk/by-id/scsi-3600000e00d00000000012a6b00000000', 'SCSIid': '3600000e00d00000000012a6b00000000', 'SRmaster': 'true'}, 'command': 'vdi_generate_config', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:ddd6717d-987e-4618-9686-21af1f6bd7ff', 'vdi_uuid': 'ed1ae7c2-a5cb-4989-8bd2-7e351b77c835'}
Aug 14 12:08:51 xcp-ng-1 SM: [32437] LVHDoHBAVDI.generate_config
Aug 14 12:08:51 xcp-ng-1 SM: [32437] ['/sbin/lvdisplay', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb/LV-ed1ae7c2-a5cb-4989-8bd2-7e351b77c835']
Aug 14 12:08:51 xcp-ng-1 SM: [32437]   pread SUCCESS
Aug 14 12:08:51 xcp-ng-1 SM: [32437] lock: closed /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:52 xcp-ng-1 SM: [32462] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:08:52 xcp-ng-1 SM: [32462] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:08:52 xcp-ng-1 SM: [32462] Caught exception while looking up PBD for host  SR None: 'NoneType' object has no attribute 'xenapi'
Aug 14 12:08:52 xcp-ng-1 SM: [32462] lock: opening lock file /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:52 xcp-ng-1 SM: [32462] LVMCache created for VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb
Aug 14 12:08:52 xcp-ng-1 SM: [32462] ['/sbin/vgs', 'VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:08:52 xcp-ng-1 SM: [32462]   pread SUCCESS
Aug 14 12:08:52 xcp-ng-1 SM: [32462] LVMCache: will initialize now
Aug 14 12:08:52 xcp-ng-1 SM: [32462] LVMCache: refreshing
Aug 14 12:08:52 xcp-ng-1 SM: [32462] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:08:52 xcp-ng-1 SM: [32462]   pread SUCCESS
Aug 14 12:08:52 xcp-ng-1 SM: [32462] vdi_attach_from_config {'sr_uuid': 'e6c74423-9152-b3b9-f503-efccde0e3edb', 'device_config': {'device': '/dev/disk/by-id/scsi-3600000e00d00000000012a6b00000000', 'multipathing': 'false', 'SCSIid': '3600000e00d00000000012a6b00000000', 'SRmaster': 'true', 'multipathhandle': 'null'}, 'command': 'vdi_attach_from_config', 'vdi_uuid': 'ed1ae7c2-a5cb-4989-8bd2-7e351b77c835'}
Aug 14 12:08:52 xcp-ng-1 SM: [32462] LVHDoHBAVDI.attach_from_config
Aug 14 12:08:52 xcp-ng-1 SM: [32462] LVHDVDI.attach for ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:08:52 xcp-ng-1 SM: [32462] lock: opening lock file /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:08:52 xcp-ng-1 SM: [32462] lock: acquired /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:08:52 xcp-ng-1 SM: [32462] lock: released /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:08:52 xcp-ng-1 SM: [32462] lock: closed /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:08:52 xcp-ng-1 SM: [32462] lock: opening lock file /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:08:52 xcp-ng-1 SM: [32462] lock: acquired /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:08:52 xcp-ng-1 SM: [32462] Refcount for lvm-e6c74423-9152-b3b9-f503-efccde0e3edb:ed1ae7c2-a5cb-4989-8bd2-7e351b77c835 (0, 0) + (0, 1) => (0, 1)
Aug 14 12:08:52 xcp-ng-1 SM: [32462] Refcount for lvm-e6c74423-9152-b3b9-f503-efccde0e3edb:ed1ae7c2-a5cb-4989-8bd2-7e351b77c835 set => (0, 1b)
Aug 14 12:08:52 xcp-ng-1 SM: [32462] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb/LV-ed1ae7c2-a5cb-4989-8bd2-7e351b77c835']
Aug 14 12:08:52 xcp-ng-1 SM: [32462]   pread SUCCESS
Aug 14 12:08:52 xcp-ng-1 SM: [32462] lock: released /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:08:52 xcp-ng-1 SM: [32462] lock: closed /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:08:52 xcp-ng-1 SM: [32462] lock: closed /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:52 xcp-ng-1 SM: [32523] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:08:52 xcp-ng-1 SM: [32523] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:08:52 xcp-ng-1 SM: [32523] lock: opening lock file /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:52 xcp-ng-1 SM: [32523] LVMCache created for VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb
Aug 14 12:08:52 xcp-ng-1 SM: [32523] ['/sbin/vgs', 'VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:08:52 xcp-ng-1 SM: [32523]   pread SUCCESS
Aug 14 12:08:52 xcp-ng-1 SM: [32523] lock: acquired /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:52 xcp-ng-1 SM: [32523] LVMCache: will initialize now
Aug 14 12:08:52 xcp-ng-1 SM: [32523] LVMCache: refreshing
Aug 14 12:08:52 xcp-ng-1 SM: [32523] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:08:52 xcp-ng-1 SM: [32523]   pread SUCCESS
Aug 14 12:08:52 xcp-ng-1 SM: [32523] lock: released /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:52 xcp-ng-1 SM: [32523] Entering _checkMetadataVolume
Aug 14 12:08:52 xcp-ng-1 SM: [32523] vdi_generate_config {'sr_uuid': 'e6c74423-9152-b3b9-f503-efccde0e3edb', 'subtask_of': 'OpaqueRef:8d0fbda6-bf9c-47cc-a625-f4f2eb6af9b6', 'vdi_ref': 'OpaqueRef:b34971ea-0441-4266-8413-23a22ef5a226', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'b7b8b6a2-6326-4402-9f95-96135bb1d2f7', 'host_ref': 'OpaqueRef:e4c37665-3b24-4b25-9da3-cf3567000e4b', 'session_ref': 'OpaqueRef:e17028c1-a39d-4751-a809-84b2a25c6920', 'device_config': {'device': '/dev/disk/by-id/scsi-3600000e00d00000000012a6b00000000', 'SCSIid': '3600000e00d00000000012a6b00000000', 'SRmaster': 'true'}, 'command': 'vdi_generate_config', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:ddd6717d-987e-4618-9686-21af1f6bd7ff', 'vdi_uuid': 'b7b8b6a2-6326-4402-9f95-96135bb1d2f7'}
Aug 14 12:08:52 xcp-ng-1 SM: [32523] LVHDoHBAVDI.generate_config
Aug 14 12:08:52 xcp-ng-1 SM: [32523] ['/sbin/lvdisplay', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb/LV-b7b8b6a2-6326-4402-9f95-96135bb1d2f7']
Aug 14 12:08:52 xcp-ng-1 SM: [32523]   pread SUCCESS
Aug 14 12:08:52 xcp-ng-1 SM: [32523] lock: closed /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:53 xcp-ng-1 SM: [32553] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:08:53 xcp-ng-1 SM: [32553] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:08:53 xcp-ng-1 SM: [32553] Caught exception while looking up PBD for host  SR None: 'NoneType' object has no attribute 'xenapi'
Aug 14 12:08:53 xcp-ng-1 SM: [32553] lock: opening lock file /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:08:53 xcp-ng-1 SM: [32553] LVMCache created for VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb
Aug 14 12:08:53 xcp-ng-1 SM: [32553] ['/sbin/vgs', 'VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:08:53 xcp-ng-1 SM: [32553]   pread SUCCESS
Aug 14 12:08:53 xcp-ng-1 SM: [32553] LVMCache: will initialize now
Aug 14 12:08:53 xcp-ng-1 SM: [32553] LVMCache: refreshing
Aug 14 12:08:53 xcp-ng-1 SM: [32553] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:08:53 xcp-ng-1 SM: [32553]   pread SUCCESS
Aug 14 12:08:53 xcp-ng-1 SM: [32553] vdi_attach_from_config {'sr_uuid': 'e6c74423-9152-b3b9-f503-efccde0e3edb', 'device_config': {'device': '/dev/disk/by-id/scsi-3600000e00d00000000012a6b00000000', 'multipathing': 'false', 'SCSIid': '3600000e00d00000000012a6b00000000', 'SRmaster': 'true', 'multipathhandle': 'null'}, 'command': 'vdi_attach_from_config', 'vdi_uuid': 'b7b8b6a2-6326-4402-9f95-96135bb1d2f7'}
Aug 14 12:08:53 xcp-ng-1 SM: [32553] LVHDoHBAVDI.attach_from_config
Aug 14 12:08:53 xcp-ng-1 SM: [32553] LVHDVDI.attach for b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:08:53 xcp-ng-1 SM: [32553] lock: opening lock file /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:08:53 xcp-ng-1 SM: [32553] lock: acquired /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:08:53 xcp-ng-1 SM: [32553] lock: released /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:08:53 xcp-ng-1 SM: [32553] lock: closed /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:08:53 xcp-ng-1 SM: [32553] lock: opening lock file /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:08:53 xcp-ng-1 SM: [32553] lock: acquired /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:08:53 xcp-ng-1 SM: [32553] Refcount for lvm-e6c74423-9152-b3b9-f503-efccde0e3edb:b7b8b6a2-6326-4402-9f95-96135bb1d2f7 (0, 0) + (0, 1) => (0, 1)
Aug 14 12:08:53 xcp-ng-1 SM: [32553] Refcount for lvm-e6c74423-9152-b3b9-f503-efccde0e3edb:b7b8b6a2-6326-4402-9f95-96135bb1d2f7 set => (0, 1b)
Aug 14 12:08:53 xcp-ng-1 SM: [32553] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb/LV-b7b8b6a2-6326-4402-9f95-96135bb1d2f7']
Aug 14 12:08:53 xcp-ng-1 SM: [32553]   pread SUCCESS
Aug 14 12:08:53 xcp-ng-1 SM: [32553] lock: released /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:08:53 xcp-ng-1 SM: [32553] lock: closed /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:08:53 xcp-ng-1 SM: [32553] lock: closed /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:09:02 xcp-ng-1 SM: [32739] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:09:02 xcp-ng-1 SM: [32739] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: opening lock file /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:09:02 xcp-ng-1 SM: [32739] LVMCache created for VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb
Aug 14 12:09:02 xcp-ng-1 SM: [32739] ['/sbin/vgs', 'VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   pread SUCCESS
Aug 14 12:09:02 xcp-ng-1 SM: [32739] Entering _checkMetadataVolume
Aug 14 12:09:02 xcp-ng-1 SM: [32739] LVMCache: will initialize now
Aug 14 12:09:02 xcp-ng-1 SM: [32739] LVMCache: refreshing
Aug 14 12:09:02 xcp-ng-1 SM: [32739] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   pread SUCCESS
Aug 14 12:09:02 xcp-ng-1 SM: [32739] vdi_deactivate {'sr_uuid': 'e6c74423-9152-b3b9-f503-efccde0e3edb', 'subtask_of': 'OpaqueRef:57f4932a-adc1-421c-b94e-0af161d6cc3b', 'vdi_ref': 'OpaqueRef:94e45b2c-d99f-44e9-a4b5-e9660c99ecb4', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'ed1ae7c2-a5cb-4989-8bd2-7e351b77c835', 'host_ref': 'OpaqueRef:e4c37665-3b24-4b25-9da3-cf3567000e4b', 'session_ref': 'OpaqueRef:4163f1d4-23b4-4bfb-b381-8fa437e66ce8', 'device_config': {'device': '/dev/disk/by-id/scsi-3600000e00d00000000012a6b00000000', 'SCSIid': '3600000e00d00000000012a6b00000000', 'SRmaster': 'true'}, 'command': 'vdi_deactivate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:ddd6717d-987e-4618-9686-21af1f6bd7ff', 'vdi_uuid': 'ed1ae7c2-a5cb-4989-8bd2-7e351b77c835'}
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: opening lock file /var/lock/sm/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835/vdi
Aug 14 12:09:02 xcp-ng-1 SM: [32739] blktap2.deactivate
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: acquired /var/lock/sm/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835/vdi
Aug 14 12:09:02 xcp-ng-1 SM: [32739] Backend path /dev/sm/backend/e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835 does not exist
Aug 14 12:09:02 xcp-ng-1 SM: [32739] LVHDVDI.detach for ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: opening lock file /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: acquired /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: released /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: closed /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: opening lock file /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: acquired /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:09:02 xcp-ng-1 SM: [32739] Refcount for lvm-e6c74423-9152-b3b9-f503-efccde0e3edb:ed1ae7c2-a5cb-4989-8bd2-7e351b77c835 (0, 1) + (0, -1) => (0, 0)
Aug 14 12:09:02 xcp-ng-1 SM: [32739] Refcount for lvm-e6c74423-9152-b3b9-f503-efccde0e3edb:ed1ae7c2-a5cb-4989-8bd2-7e351b77c835 set => (0, 0b)
Aug 14 12:09:02 xcp-ng-1 SM: [32739] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb/LV-ed1ae7c2-a5cb-4989-8bd2-7e351b77c835']
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   pread SUCCESS
Aug 14 12:09:02 xcp-ng-1 SM: [32739] ['/sbin/dmsetup', 'status', 'VG_XenStorage--e6c74423--9152--b3b9--f503--efccde0e3edb-LV--ed1ae7c2--a5cb--4989--8bd2--7e351b77c835']
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   pread SUCCESS
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: released /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: closed /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835
Aug 14 12:09:02 xcp-ng-1 SM: [32739] ***** BLKTAP2:<function _deactivate_locked at 0x1356500>: EXCEPTION <type 'exceptions.AssertionError'>,
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/blktap2.py", line 83, in wrapper
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     ret = op(self, *args)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/blktap2.py", line 1666, in _deactivate_locked
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     self._remove_tag(vdi_uuid)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/blktap2.py", line 1452, in _remove_tag
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     assert sm_config.has_key(host_key)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: released /var/lock/sm/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835/vdi
Aug 14 12:09:02 xcp-ng-1 SM: [32739] ***** generic exception: vdi_deactivate: EXCEPTION <type 'exceptions.AssertionError'>,
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     return self._run_locked(sr)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     rv = self._run(sr, target)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/SRCommand.py", line 274, in _run
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     caching_params)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/blktap2.py", line 1647, in deactivate
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     if self._deactivate_locked(sr_uuid, vdi_uuid, caching_params):
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/blktap2.py", line 83, in wrapper
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     ret = op(self, *args)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/blktap2.py", line 1666, in _deactivate_locked
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     self._remove_tag(vdi_uuid)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/blktap2.py", line 1452, in _remove_tag
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     assert sm_config.has_key(host_key)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]
Aug 14 12:09:02 xcp-ng-1 SM: [32739] ***** LVHD over FC: EXCEPTION <type 'exceptions.AssertionError'>,
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/SRCommand.py", line 372, in run
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     ret = cmd.run(sr)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     return self._run_locked(sr)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     rv = self._run(sr, target)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/SRCommand.py", line 274, in _run
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     caching_params)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/blktap2.py", line 1647, in deactivate
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     if self._deactivate_locked(sr_uuid, vdi_uuid, caching_params):
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/blktap2.py", line 83, in wrapper
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     ret = op(self, *args)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/blktap2.py", line 1666, in _deactivate_locked
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     self._remove_tag(vdi_uuid)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]   File "/opt/xensource/sm/blktap2.py", line 1452, in _remove_tag
Aug 14 12:09:02 xcp-ng-1 SM: [32739]     assert sm_config.has_key(host_key)
Aug 14 12:09:02 xcp-ng-1 SM: [32739]
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: closed /var/lock/sm/ed1ae7c2-a5cb-4989-8bd2-7e351b77c835/vdi
Aug 14 12:09:02 xcp-ng-1 SM: [32739] lock: closed /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:09:03 xcp-ng-1 SM: [330] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:09:03 xcp-ng-1 SM: [330] Setting LVM_DEVICE to /dev/disk/by-scsid/3600000e00d00000000012a6b00000000
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: opening lock file /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
Aug 14 12:09:03 xcp-ng-1 SM: [330] LVMCache created for VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb
Aug 14 12:09:03 xcp-ng-1 SM: [330] ['/sbin/vgs', 'VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:09:03 xcp-ng-1 SM: [330]   pread SUCCESS
Aug 14 12:09:03 xcp-ng-1 SM: [330] Entering _checkMetadataVolume
Aug 14 12:09:03 xcp-ng-1 SM: [330] LVMCache: will initialize now
Aug 14 12:09:03 xcp-ng-1 SM: [330] LVMCache: refreshing
Aug 14 12:09:03 xcp-ng-1 SM: [330] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb']
Aug 14 12:09:03 xcp-ng-1 SM: [330]   pread SUCCESS
Aug 14 12:09:03 xcp-ng-1 SM: [330] vdi_deactivate {'sr_uuid': 'e6c74423-9152-b3b9-f503-efccde0e3edb', 'subtask_of': 'OpaqueRef:57f4932a-adc1-421c-b94e-0af161d6cc3b', 'vdi_ref': 'OpaqueRef:b34971ea-0441-4266-8413-23a22ef5a226', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'b7b8b6a2-6326-4402-9f95-96135bb1d2f7', 'host_ref': 'OpaqueRef:e4c37665-3b24-4b25-9da3-cf3567000e4b', 'session_ref': 'OpaqueRef:2561aac2-7579-4ad7-8e9e-232bd6b64efa', 'device_config': {'device': '/dev/disk/by-id/scsi-3600000e00d00000000012a6b00000000', 'SCSIid': '3600000e00d00000000012a6b00000000', 'SRmaster': 'true'}, 'command': 'vdi_deactivate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:ddd6717d-987e-4618-9686-21af1f6bd7ff', 'vdi_uuid': 'b7b8b6a2-6326-4402-9f95-96135bb1d2f7'}
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: opening lock file /var/lock/sm/b7b8b6a2-6326-4402-9f95-96135bb1d2f7/vdi
Aug 14 12:09:03 xcp-ng-1 SM: [330] blktap2.deactivate
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: acquired /var/lock/sm/b7b8b6a2-6326-4402-9f95-96135bb1d2f7/vdi
Aug 14 12:09:03 xcp-ng-1 SM: [330] Backend path /dev/sm/backend/e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7 does not exist
Aug 14 12:09:03 xcp-ng-1 SM: [330] LVHDVDI.detach for b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: opening lock file /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: acquired /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: released /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: closed /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: opening lock file /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: acquired /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:09:03 xcp-ng-1 SM: [330] Refcount for lvm-e6c74423-9152-b3b9-f503-efccde0e3edb:b7b8b6a2-6326-4402-9f95-96135bb1d2f7 (0, 1) + (0, -1) => (0, 0)
Aug 14 12:09:03 xcp-ng-1 SM: [330] Refcount for lvm-e6c74423-9152-b3b9-f503-efccde0e3edb:b7b8b6a2-6326-4402-9f95-96135bb1d2f7 set => (0, 0b)
Aug 14 12:09:03 xcp-ng-1 SM: [330] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-e6c74423-9152-b3b9-f503-efccde0e3edb/LV-b7b8b6a2-6326-4402-9f95-96135bb1d2f7']
Aug 14 12:09:03 xcp-ng-1 SM: [330]   pread SUCCESS
Aug 14 12:09:03 xcp-ng-1 SM: [330] ['/sbin/dmsetup', 'status', 'VG_XenStorage--e6c74423--9152--b3b9--f503--efccde0e3edb-LV--b7b8b6a2--6326--4402--9f95--96135bb1d2f7']
Aug 14 12:09:03 xcp-ng-1 SM: [330]   pread SUCCESS
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: released /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: closed /var/lock/sm/lvm-e6c74423-9152-b3b9-f503-efccde0e3edb/b7b8b6a2-6326-4402-9f95-96135bb1d2f7
Aug 14 12:09:03 xcp-ng-1 SM: [330] ***** BLKTAP2:<function _deactivate_locked at 0x16cc500>: EXCEPTION <type 'exceptions.AssertionError'>,
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/blktap2.py", line 83, in wrapper
Aug 14 12:09:03 xcp-ng-1 SM: [330]     ret = op(self, *args)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/blktap2.py", line 1666, in _deactivate_locked
Aug 14 12:09:03 xcp-ng-1 SM: [330]     self._remove_tag(vdi_uuid)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/blktap2.py", line 1452, in _remove_tag
Aug 14 12:09:03 xcp-ng-1 SM: [330]     assert sm_config.has_key(host_key)
Aug 14 12:09:03 xcp-ng-1 SM: [330]
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: released /var/lock/sm/b7b8b6a2-6326-4402-9f95-96135bb1d2f7/vdi
Aug 14 12:09:03 xcp-ng-1 SM: [330] ***** generic exception: vdi_deactivate: EXCEPTION <type 'exceptions.AssertionError'>,
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Aug 14 12:09:03 xcp-ng-1 SM: [330]     return self._run_locked(sr)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Aug 14 12:09:03 xcp-ng-1 SM: [330]     rv = self._run(sr, target)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/SRCommand.py", line 274, in _run
Aug 14 12:09:03 xcp-ng-1 SM: [330]     caching_params)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/blktap2.py", line 1647, in deactivate
Aug 14 12:09:03 xcp-ng-1 SM: [330]     if self._deactivate_locked(sr_uuid, vdi_uuid, caching_params):
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/blktap2.py", line 83, in wrapper
Aug 14 12:09:03 xcp-ng-1 SM: [330]     ret = op(self, *args)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/blktap2.py", line 1666, in _deactivate_locked
Aug 14 12:09:03 xcp-ng-1 SM: [330]     self._remove_tag(vdi_uuid)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/blktap2.py", line 1452, in _remove_tag
Aug 14 12:09:03 xcp-ng-1 SM: [330]     assert sm_config.has_key(host_key)
Aug 14 12:09:03 xcp-ng-1 SM: [330]
Aug 14 12:09:03 xcp-ng-1 SM: [330] ***** LVHD over FC: EXCEPTION <type 'exceptions.AssertionError'>,
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/SRCommand.py", line 372, in run
Aug 14 12:09:03 xcp-ng-1 SM: [330]     ret = cmd.run(sr)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Aug 14 12:09:03 xcp-ng-1 SM: [330]     return self._run_locked(sr)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Aug 14 12:09:03 xcp-ng-1 SM: [330]     rv = self._run(sr, target)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/SRCommand.py", line 274, in _run
Aug 14 12:09:03 xcp-ng-1 SM: [330]     caching_params)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/blktap2.py", line 1647, in deactivate
Aug 14 12:09:03 xcp-ng-1 SM: [330]     if self._deactivate_locked(sr_uuid, vdi_uuid, caching_params):
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/blktap2.py", line 83, in wrapper
Aug 14 12:09:03 xcp-ng-1 SM: [330]     ret = op(self, *args)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/blktap2.py", line 1666, in _deactivate_locked
Aug 14 12:09:03 xcp-ng-1 SM: [330]     self._remove_tag(vdi_uuid)
Aug 14 12:09:03 xcp-ng-1 SM: [330]   File "/opt/xensource/sm/blktap2.py", line 1452, in _remove_tag
Aug 14 12:09:03 xcp-ng-1 SM: [330]     assert sm_config.has_key(host_key)
Aug 14 12:09:03 xcp-ng-1 SM: [330]
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: closed /var/lock/sm/b7b8b6a2-6326-4402-9f95-96135bb1d2f7/vdi
Aug 14 12:09:03 xcp-ng-1 SM: [330] lock: closed /var/lock/sm/e6c74423-9152-b3b9-f503-efccde0e3edb/sr
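
For context, the assertion that keeps firing in these traces lives in _remove_tag() in /opt/xensource/sm/blktap2.py. A simplified sketch of what that check amounts to (illustrative only; the exact key format and the way the real code reaches sm-config are assumptions):

    # Rough sketch of the failing check from blktap2.py _remove_tag (illustrative, not the upstream code).
    def _remove_tag(session, vdi_ref, host_ref):
        # Activation is expected to have left a per-host marker in the VDI's sm-config
        # (key format assumed here to be "host_<host ref>").
        host_key = "host_%s" % host_ref
        sm_config = session.xenapi.VDI.get_sm_config(vdi_ref)
        # This is the 'assert sm_config.has_key(host_key)' from the traceback:
        # deactivation expects the marker to still be there, but it is already gone.
        assert host_key in sm_config
        session.xenapi.VDI.remove_from_sm_config(vdi_ref, host_key)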

@olivierlambert
Member

Maybe xensource.log is more useful here?

@txsastre
Author

txsastre commented Aug 14, 2018

Here you are. I can see an error in the log at "Aug 14 12:18:16":



Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320040 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VDI.get_by_uuid D:871f4c04d078 created by task R:c954089f20f0
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320041 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:host.get_by_uuid D:fd76cacb507f created by task R:c954089f20f0
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320042 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VDI.get_sm_config D:11598e0a6e5e created by task R:c954089f20f0
Aug 14 12:18:16 xcp-ng-1 xapi: [ info|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|sm_exec D:1e3a01602069|xapi] Session.destroy trackid=753bf9fe9dd2b223f311d2c4c3ff652d
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|backtrace] sm_exec D:1e3a01602069 failed with exception Storage_interface.Backend_error(_)
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|backtrace] Raised Storage_interface.Backend_error(_)
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|backtrace] 1/8 xapi @ xcp-ng-1 Raised at file ocaml/xapi/sm_exec.ml, line 216
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|backtrace] 2/8 xapi @ xcp-ng-1 Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 22
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|backtrace] 3/8 xapi @ xcp-ng-1 Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 26
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|backtrace] 4/8 xapi @ xcp-ng-1 Called from file ocaml/xapi/server_helpers.ml, line 73
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|backtrace] 5/8 xapi @ xcp-ng-1 Called from file ocaml/xapi/server_helpers.ml, line 91
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|backtrace] 6/8 xapi @ xcp-ng-1 Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 22
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|backtrace] 7/8 xapi @ xcp-ng-1 Called from file map.ml, line 122
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|backtrace] 8/8 xapi @ xcp-ng-1 Called from file src0/sexp_conv.ml, line 150
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|backtrace]
Aug 14 12:18:16 xcp-ng-1 xapi: [ warn|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|xapi] Ignoring exception calling SM vdi_deactivate for VDI uuid 29057119-5c02-43da-ba37-a201b7f4a7c9: INTERNAL_ERROR: [ Storage_interface.Backend_error(_) ] (possibly VDI has been deleted while we were offline
Aug 14 12:18:16 xcp-ng-1 xapi: [ info|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|xapi] permanent_vdi_detach: vdi-uuid = 29057119-5c02-43da-ba37-a201b7f4a7c9
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|319418 INET :::80||dummytaskhelper] task dispatch:VDI.get_by_uuid D:c6b990b882e6 created by task R:076ddacd8e13
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|309215 INET :::80||dummytaskhelper] task dispatch:host.get_by_uuid D:bcb659e1c700 created by task R:076ddacd8e13
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|319316 INET :::80||dummytaskhelper] task dispatch:VDI.get_by_uuid D:0d3a54623940 created by task R:b2ad6437fcc0
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|319418 INET :::80||dummytaskhelper] task dispatch:VDI.get_sm_config D:b0917d6497e3 created by task R:076ddacd8e13
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|319383 INET :::80||dummytaskhelper] task dispatch:host.get_by_uuid D:ade83445ddfd created by task R:b2ad6437fcc0
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|319316 INET :::80||dummytaskhelper] task dispatch:VDI.get_sm_config D:78a1febaf7f2 created by task R:b2ad6437fcc0
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|helpers] /opt/xensource/bin/static-vdis detach 29057119-5c02-43da-ba37-a201b7f4a7c9 succeeded [ output = '' ]
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320015 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:c954089f20f0|helpers] /opt/xensource/bin/static-vdis del 29057119-5c02-43da-ba37-a201b7f4a7c9 succeeded [ output = '' ]
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320013 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:076ddacd8e13|xmlrpc_client] stunnel pid: 4330 (cached = true) returned stunnel to cache
Aug 14 12:18:16 xcp-ng-1 xapi: [ info|xcp-ng-1|320013 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:076ddacd8e13|xapi] Session.destroy trackid=abf832f01daed97e28ba831f1560e0ec
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320013 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:076ddacd8e13|taskhelper] the status of R:076ddacd8e13 is: success; cannot set it to `success
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320014 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:b2ad6437fcc0|xmlrpc_client] stunnel pid: 4391 (cached = true) returned stunnel to cache
Aug 14 12:18:16 xcp-ng-1 xapi: [ info|xcp-ng-1|320014 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:b2ad6437fcc0|xapi] Session.destroy trackid=0c7828f469231ededba5efdaea58bf01
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320014 UNIX /var/lib/xcp/xapi|host.ha_release_resources R:b2ad6437fcc0|taskhelper] the status of R:b2ad6437fcc0 is: success; cannot set it to `success
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|319868 |Async.pool.enable_ha R:015db4ab2358|mscgen] xapi=>xapi [label="session.logout"];
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320043 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:0efda1a56f16 created by task R:015db4ab2358
Aug 14 12:18:16 xcp-ng-1 xapi: [ info|xcp-ng-1|320043 UNIX /var/lib/xcp/xapi|session.logout D:3b02a41d6c3f|xapi] Session.destroy trackid=6c82904184f6569c0d42642f5eb58952
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|319868 |Async.pool.enable_ha R:015db4ab2358|mscgen] xapi=>xapi [label="session.logout"];
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320044 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:1eff289e9760 created by task R:015db4ab2358
Aug 14 12:18:16 xcp-ng-1 xapi: [ info|xcp-ng-1|320044 UNIX /var/lib/xcp/xapi|session.logout D:381e299c17e9|xapi] Session.destroy trackid=814553616e2e60c977fd718a5ee5a8b6
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|319868 |Async.pool.enable_ha R:015db4ab2358|xapi_ha] Caught exception while enabling HA: INTERNAL_ERROR: [ Xha_scripts.Xha_error(4) ]
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|319868 ||backtrace] Async.pool.enable_ha R:015db4ab2358 failed with exception Server_error(INTERNAL_ERROR, [ Xha_scripts.Xha_error(4) ])
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|319868 ||backtrace] Raised Server_error(INTERNAL_ERROR, [ Xha_scripts.Xha_error(4) ])
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|319868 ||backtrace] 1/1 xapi @ xcp-ng-1 Raised at file (Thread 319868 has no backtrace table. Was with_backtraces called?, line 0
Aug 14 12:18:16 xcp-ng-1 xapi: [error|xcp-ng-1|319868 ||backtrace]
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|58 ||mscgen] xapi=>xapi [label="session.logout"];
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320045 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:7c1b42e84a68 created by task D:f8fe220f864c
Aug 14 12:18:16 xcp-ng-1 xapi: [ info|xcp-ng-1|320045 UNIX /var/lib/xcp/xapi|session.logout D:e3779fca00fe|xapi] Session.destroy trackid=5b63a07846b308ee27b45f657d03142a
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|58 ||mscgen] xapi=>xapi [label="session.slave_login"];
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320046 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:b672350efe06 created by task D:f8fe220f864c
Aug 14 12:18:16 xcp-ng-1 xapi: [ info|xcp-ng-1|320046 UNIX /var/lib/xcp/xapi|session.slave_login D:97cdfb9e06e2|xapi] Session.create trackid=8ca6a77faae414fa201d452cfae9b949 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320046 UNIX /var/lib/xcp/xapi|session.slave_login D:97cdfb9e06e2|mscgen] xapi=>xapi [label="pool.get_all"];
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320047 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:e0cdc72b9ac2 created by task D:97cdfb9e06e2
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|58 ||mscgen] xapi=>xapi [label="event.from"];
Aug 14 12:18:16 xcp-ng-1 xapi: [debug|xcp-ng-1|320048 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:event.from D:873e112c40da created by task D:f8fe220f864c
Aug 14 12:18:20 xcp-ng-1 xcp-networkd: [ info|xcp-ng-1|1 |monitor_thread|network_utils] /usr/bin/ovs-appctl bond/show bond0
Aug 14 12:18:20 xcp-ng-1 xcp-networkd: [ info|xcp-ng-1|1 |monitor_thread|network_utils] /usr/bin/ovs-vsctl --timeout=20 get port bond0 bond_mode
Aug 14 12:18:21 xcp-ng-1 xapi: [debug|xcp-ng-1|320051 ||mscgen] xapi=>xapi [label="event.from"];
Aug 14 12:18:21 xcp-ng-1 xapi: [debug|xcp-ng-1|320052 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:event.from D:2c20c749dd77 created by task D:4f88a9a905b6
Aug 14 12:18:25 xcp-ng-1 xcp-networkd: [ info|xcp-ng-1|1 |monitor_thread|network_utils] /usr/bin/ovs-appctl bond/show bond0
Aug 14 12:18:25 xcp-ng-1 xcp-networkd: [ info|xcp-ng-1|1 |monitor_thread|network_utils] /usr/bin/ovs-vsctl --timeout=20 get port bond0 bond_mode
Aug 14 12:18:30 xcp-ng-1 xcp-networkd: [ info|xcp-ng-1|1 |monitor_thread|network_utils] /usr/bin/ovs-appctl bond/show bond0
Aug 14 12:18:30 xcp-ng-1 xcp-networkd: [ info|xcp-ng-1|1 |monitor_thread|network_utils] /usr/bin/ovs-vsctl --timeout=20 get port bond0 bond_mode
Aug 14 12:18:30 xcp-ng-1 xapi: [debug|xcp-ng-1|319383 INET :::80||dummytaskhelper] task dispatch:event.from D:b4b3286dc096 created by task D:bbc09d91d00b

@olivierlambert
Member

Can you trim the output down to the part you think is interesting? That huge pile of text is hard to read as-is. Thanks!

@olivierlambert
Member

Installing hosts in VMs to see if I can reproduce the issue.

@olivierlambert
Member

olivierlambert commented Aug 14, 2018

Okay, I can't reproduce. I had 3 freshly installed XCP-ng 7.5 VMs (VM1, VM2 and VM3).

  1. Added VM2 and VM3 into VM1's pool
  2. Attached a shared NFS storage to the pool
  3. Enabled HA with xe pool-ha-enable heartbeat-sr-uuids=<NFS_SR_UUID> ha-config:timeout=60 (spelled out as commands below)
  4. Waited a bit; HA was enabled without any error.

So it seems there is no issue with NFS. Note that GFS2 is not supported because the XenServer code for it is not open source.
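
Roughly, steps 2 and 3 boil down to something like this (the SR name is a placeholder; the timeout value just mirrors the one above):

    # find the UUID of the shared NFS SR attached to the pool
    xe sr-list name-label="<your NFS SR>" --minimal

    # enable HA using that SR for the heartbeat/statefile
    xe pool-ha-enable heartbeat-sr-uuids=<NFS_SR_UUID> ha-config:timeout=60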

@txsastre
Author

Hi, thanks for your effort.
Well, I'm going to try installing 7.4.1 again and upgrading from there instead of doing a fresh install...

So, a question: when XCP-ng asks me which provisioning method I want, does it matter which one I choose?

[screenshot: seleccio_031]

@olivierlambert
Member

It matters. GFS2 is not supported.

@txsastre
Author

But if I choose GFS2 it asks to create a cluster to manage that... 🤔 And it does create it.

@olivierlambert
Member

But some GFS2 packages aren't open source, hence not included in XCP-ng, so there is a good chance it won't work in the end.

@txsastre
Author

So if I have block-based storage (FC, iSCSI), I cannot do thin provisioning on it? Only on file-based storage such as NFS?

@olivierlambert
Member

That's correct.

@txsastre
Author

Thanks

@wranders

I have to second this. I tried NFS and iSCSI (LVM only, based on the dialog above) and got the same errors.
Attempting to manually enable HA via the console command xe pool-ha-enable heartbeat-sr-uuids=<SR_UUID> results in the same error.
HA doesn't currently seem to be working. All three servers are fresh installs and up-to-date as of 15Sep@0030CDT.

@olivierlambert
Member

I can't reproduce the issue here in the lab. Do you have NTP correctly set on all your hosts? (this is vital to get HA working)
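
A quick sanity check for that on each host (standard ntpd tooling in dom0; nothing XCP-ng specific):

    # one peer should be marked with '*' (the selected sync source)
    ntpq -p
    # short summary of sync state and estimated offset (if ntpstat is installed)
    ntpstat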

@wranders

My firewall is the NTP server, NFS permissions verified, and both NFS and iSCSI retried after manually resyncing the servers to the firewall (all were around 0.0004s off).
Four-NIC bond, 2GB SR. I'm not sure what else to try.

@olivierlambert
Member

olivierlambert commented Sep 15, 2018

How many hosts?

edit: 3, my bad, didn't see that

edit2: tried again in the lab, worked perfectly 🤔

@wranders

wranders commented Sep 15, 2018

Figured it out. Clustering enabled on the pool is preventing HA from being set up.
Clustering can't be enabled while HA is active, but HA should be able to be started while clustering is active, at least according to Citrix's documentation.
Is this deviation expected?

EDIT: Removed follow-up comment. PEBCAK on that one.
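
For anyone else hitting this: assuming the xe clustering commands in this release match what was documented later for Citrix Hypervisor (the exact command names are an assumption, double-check against xe help), the workaround looks like:

    # see whether a cluster object exists on the pool
    xe cluster-list
    # remove it, then HA can be enabled again
    xe cluster-pool-destroy cluster-uuid=<cluster_uuid>
    xe pool-ha-enable heartbeat-sr-uuids=<SR_UUID>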

@olivierlambert
Member

That's good to know!

But do not expect GFS2 to work in the next XCP-ng release: it's NOT open source. To enjoy thin provisioning, use NFS. We'll probably work on a solution in the future to put a filesystem on top of iSCSI; until then, NFS is the best choice.

@MrMEEE

MrMEEE commented Jul 11, 2019

@olivierlambert What exactly in GFS2 is not open source? Am I mistaken in thinking it's the same GFS2 that is in the Linux kernel? Or is it some tooling around it that is missing?

@stormi
Member

stormi commented Jul 11, 2019

GFS2 support in XenServer is not open source. GFS2 itself is.

@MrMEEE

MrMEEE commented Jul 11, 2019

So it's basically the module/utility that creates and mounts the filesystem on the disks?

@stormi
Member

stormi commented Jul 11, 2019

Several packages are proprietary: xapi-clusterd, xapi-storage-plugins*...

@MrMEEE

MrMEEE commented Jul 11, 2019

Hmm... xapi-clusterd is probably just some integration with LVM's cluster daemon. xapi-storage-plugins I don't know. I think I'm going to look into this; I've been doing a lot of LVM/GFS2 setups, so maybe I can reuse some of that knowledge to implement this.

@MrMEEE

MrMEEE commented Jul 11, 2019

If someone has a complete list of the proprietary packages, that would help a lot.

@stormi
Member

stormi commented Jul 11, 2019

Related to GFS2 support: xapi-clusterd xapi-storage-plugins-datapaths xapi-storage-plugins-gfs2 xapi-storage-plugins-libs and maybe sm-transport-lib

Other non-free stuff: v6d-citrix emu-manager livepatch-utils vgpu xs-clipboardd citrix-crypto-module security-tools plus some packages from vendors such as QConvergeConsoleCLI-Citrix QCS-CLI elxocmcore elxocmcorelibs hbaapiwrapper...
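
To check which of those are actually present on a given host, a plain RPM query is enough, e.g.:

    # prints the version for installed packages, "is not installed" otherwise
    rpm -q xapi-clusterd xapi-storage-plugins-gfs2 xapi-storage-plugins-datapaths \
        xapi-storage-plugins-libs sm-transport-lib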

@MrMEEE

MrMEEE commented Jul 11, 2019

OK, I will look into it. Thanks.

@MrMEEE

MrMEEE commented Jul 11, 2019

Any idea where the xapi packages in this repo come from?
ftp://mirror.schlundtech.de/xcp-ng/beta/

@stormi
Member

stormi commented Jul 11, 2019

Probably from an early version of the XCP-ng 7.4 installation ISO, or from the final 7.4 ISO itself.

@MrMEEE

MrMEEE commented Jul 11, 2019

Do we have the source for those packages, or were they just copied from XenServer?

@stormi
Member

stormi commented Jul 11, 2019

They hadn't split parts of xapi into closed-source components yet at that time, so those are free. The source RPMs are the ones from the XenServer 7.4 source ISO.

@MrMEEE

MrMEEE commented Jul 11, 2019

Ahh... could we use those? They might not be as optimized, but should be good enough?

@stormi
Member

stormi commented Jul 11, 2019

The code base has probably evolved a lot (clustering was an experimental feature and is being developed across several versions), so I foresee a lot of work to adapt it.

@MrMEEE

MrMEEE commented Jul 11, 2019

Hmmm... you might be right. Maybe we could just use clvm and thin-provisioned LVs... that has been stable for years.

@MrMEEE

MrMEEE commented Jul 11, 2019

Sorry, not clvm: lvmlockd, which supports thin-provisioned LVs.

@MrMEEE

MrMEEE commented Jul 11, 2019

I will try to set up a POC.

@nagilum99

I really wonder why it has never been implemented. I mean, the LVM stuff overall works okay. The biggest problem is the size of snapshots, but from what I've read you can set the size rather small and tell it to grow as it fills up (see the lvm.conf settings below), so a configurable variable for the maximum initial size of snapshots should be pretty doable.
It doesn't fix the 2 TB limit, but even that is not an LVM problem itself, it's just how it's handled. QCOW2 should also work with LVM as a replacement for VHD. Otherwise: is QCOW2 any better than VHDX, which is what MS Hyper-V is based on? (Not sure what VMware is actually using; I guess also VHDX.)
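
For what it's worth, plain LVM already has that small-snapshot-plus-autogrow behaviour built in; it is controlled by two standard lvm.conf settings for classic (non-thin) snapshots:

    # /etc/lvm/lvm.conf, activation section
    activation {
        snapshot_autoextend_threshold = 70   # start growing once the snapshot is 70% full
        snapshot_autoextend_percent   = 20   # grow it by 20% of its size each time
    }

With that in place a snapshot can be created deliberately small (lvcreate -s -L 1G ...) and dmeventd extends it as it fills up.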

@MrMEEE

MrMEEE commented Jul 12, 2019

Actually... I remember that there WAS iSCSI LVM thin provisioning at one time (around XenServer 5), and then it changed to GFS2?

@stormi or others: I need the softdog.ko kernel module built. What is the easiest way to build a kernel module for XCP-ng?

@stormi
Member

stormi commented Jul 12, 2019

There has been discussion on the forum about that kind of thing. The issue is clustering. LVM thin provisioning on a single host is easy. It is not when you have several hosts that need to synchronize. Hence distributed systems such as GFS2.

To build a kernel module, see https://github.com/xcp-ng/xcp-ng-build-env

Pinging @Wescoeur who might want to elaborate about how we see the future of storage in XCP-ng.

@MrMEEE

MrMEEE commented Jul 12, 2019

Clustering with lvmlockd instead of clvmd actually gives you features like thin provisioning.

lvmlockd is now the default in SUSE 15 and has made it into RHEL 8, so I would consider it pretty stable.

As I see it, GFS2 would only be needed for shared volumes.

I have set up a POC; I just need to compile the softdog module to see how it works (rough command outline below).
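
On the pure LVM side, such a POC boils down to standard lvmlockd/dlm commands like the following (a sketch on a vanilla distro, assuming use_lvmlockd=1 in lvm.conf and a configured corosync/dlm stack; integrating it with XAPI/SM is the real open question, and thin pools in a shared VG are still activated on one host at a time):

    # on every host: lock manager plus LVM locking daemon
    systemctl start corosync dlm lvmlockd

    # once, from any host: create a shared VG on the common LUN
    vgcreate --shared vg_shared /dev/mapper/<wwid>

    # on every host: start the VG's lockspace before using it
    vgchange --lock-start vg_shared

    # thin pool and thin LVs inside the shared VG
    lvcreate -L 100G -T vg_shared/pool0
    lvcreate -V 40G -T vg_shared/pool0 -n vm_disk1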

@olivierlambert
Member

olivierlambert commented Jul 12, 2019

They decided to switch away from block-based storage (all the LVM-ish solutions: shared iSCSI, local LVM, HBA) for multiple reasons:

  • an active disk can't be thin provisioned in LVM (if you share an LVM volume group between multiple hosts, thin volumes get corrupted; they tried that in the past and failed). This is the main drawback of LVM in XS/XCP-ng.
  • file-based volumes (NFS, GFS2) are far easier to manage because they use plain files in whatever format you like (VHD historically, then qcow2 on SMAPIv3)
  • I'm not sure they tested lvmlockd, but even if it can be done as a POC for SMAPIv1, I'm not sure it will be easy to do for SMAPIv3

@MrMEEE

MrMEEE commented Jul 12, 2019

sanlock is definitely not the way to go. I will try to do a setup with dlm/corosync.

@olivierlambert
Member

corosync will be a pain to integrate properly (it's already only partially integrated by XAPI, and the rest is closed source). I foresee a lot of pain if you try to do it. Good luck!

@MrMEEE

MrMEEE commented Jul 13, 2019

Maybe pacemaker is a better alternative?

I'm thinking of adopting something like this:

https://www.suse.com/documentation/sle-ha-15/book_sleha_guide/data/sec_ha_clvm_config.html

@olivierlambert
Member

It's very complicated because you'll probably need to integrate your work into XAPI. This is why it's not just a matter of grabbing a technology, but of integrating it. However, as I said, contributions/PoCs are VERY welcome!

@benapetr

Hello, I have a similar problem: after I disabled HA, did some HW maintenance, rebooted a couple of hosts and reassembled them back into the pool, I am no longer able to re-enable HA, and I get exactly the same error.

I tried changing the HA SR from iSCSI to NFS, etc.; it's always the same error.

@benapetr

These are the logs from xensource.log. Pretty useless to be honest: "Not_found" with no context whatsoever.

Oct 17 12:45:22 xen3 xapi: [ info|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|host.ha_join_liveset R:a25adf0879c7|xapi] Session.destroy trackid=e023a6526b03e23845bf70438a5f25d4
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] host.ha_join_liveset R:a25adf0879c7 failed with exception Server_error(INTERNAL_ERROR, [ Not_found ])
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] Raised Server_error(INTERNAL_ERROR, [ Not_found ])
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] 1/10 xapi @ xen3.insw.cz Raised at file ocaml/xapi-client/client.ml, line 6
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] 2/10 xapi @ xen3.insw.cz Called from file ocaml/xapi-client/client.ml, line 18
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] 3/10 xapi @ xen3.insw.cz Called from file ocaml/xapi-client/client.ml, line 8381
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] 4/10 xapi @ xen3.insw.cz Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] 5/10 xapi @ xen3.insw.cz Called from file ocaml/xapi/rbac.ml, line 236
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] 6/10 xapi @ xen3.insw.cz Called from file ocaml/xapi/server_helpers.ml, line 83
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] 7/10 xapi @ xen3.insw.cz Called from file ocaml/xapi/server_helpers.ml, line 99
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] 8/10 xapi @ xen3.insw.cz Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] 9/10 xapi @ xen3.insw.cz Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace] 10/10 xapi @ xen3.insw.cz Called from file lib/backtrace.ml, line 177
Oct 17 12:45:22 xen3 xapi: [error|xen3.insw.cz|1333230 UNIX /var/lib/xcp/xapi|dispatch:host.ha_join_liveset D:805cd1a43f89|backtrace]
Oct 17 12:45:22 xen3 xapi: [debug|xen3.insw.cz|1333229 UNIX /var/lib/xcp/xapi|host.ha_join_liveset R:1c61685647e6|helpers] /usr/libexec/xapi/cluster-stack/xhad/ha_start_daemon  succeeded [ output = '' ]
Oct 17 12:45:22 xen3 xapi: [ info|xen3.insw.cz|1333229 UNIX /var/lib/xcp/xapi|host.ha_join_liveset R:1c61685647e6|xapi_ha] Local flag ha_armed <- true

@stormi
Member

stormi commented Nov 30, 2020

So if I re-read correctly, this issue was related to "clustering" being enabled. This is documented in our official docs now, so I'm closing this issue. Feel free to reopen, or better yet create a new one (that would be more readable) if needed.

@stormi closed this as completed Nov 30, 2020