
kvm: qemu-img convert -s to backup snapshot is deprecated #4094

Closed

Conversation

rohityadavcloud
Member

This fixes the qemu-img convert command to use the supported way to create a qcow2 file from a snapshot of a qcow2 volume, using `-l snapshot.name=<name>`, as `-s <snapshot name>` is no longer supported in newer qemu-img.

Tested on Ubuntu 20.04; needs testing on CentOS 8 (cc @davidjumani @shwstppr) and regression testing on CentOS 7 and Ubuntu 16.04/18.04 (cc @wido @GabrielBrascher).
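
For illustration, a minimal sketch of the old and new invocations (file paths and snapshot name are hypothetical; only the `-s` vs `-l snapshot.name=` flag change comes from this PR):

# Old form, no longer accepted by newer qemu-img (paths/name are hypothetical):
qemu-img convert -f qcow2 -O qcow2 -s mysnap /mnt/primary/volume.qcow2 /mnt/secondary/backup.qcow2

# Supported form used by this fix:
qemu-img convert -f qcow2 -O qcow2 -l snapshot.name=mysnap /mnt/primary/volume.qcow2 /mnt/secondary/backup.qcow2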

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)

This fixes the qemu-img convert command to use the supported way to create a
qcow2 file from a snapshot of a qcow2 volume, using `-l
snapshot.name=<name>`, as `-s <snapshot name>` is no longer supported
in newer qemu-img.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
@rohityadavcloud rohityadavcloud added this to the 4.15.0.0 milestone May 20, 2020
@rohityadavcloud
Member Author

@blueorangutan package

@blueorangutan

@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.

@@ -153,7 +153,7 @@ destroy_snapshot() {
 lvm lvremove -f "${vg}/${snapshotname}-cow"
 elif [ -f $disk ]; then
 #delete all the existing snapshots
-$qemu_img snapshot -l $disk |tail -n +3|awk '{print $1}'|xargs -I {} $qemu_img snapshot -d {} $disk >&2
+$qemu_img snapshot -l $disk |tail -n +3|awk '{print $2}'|xargs -I {} $qemu_img snapshot -d {} $disk >&2
Member Author


@shwstppr @davidjumani can either of you advise whether the second column of qemu-img snapshot -l is the snapshot name or the ID? qemu-img snapshot -d in Ubuntu 20.04 (and possibly CentOS 8) only accepts the snapshot name, not the ID.

Contributor


@rhtyd from the qemu wiki:

$ qemu-img snapshot -l xxtest.qcow2
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1                                1.5M 2010-07-26 16:51:52   00:00:08.599
2                                1.5M 2010-07-26 16:51:53   00:00:09.719
3                                1.5M 2010-07-26 17:26:49   00:00:13.245
4                                1.5M 2010-07-26 19:01:00   00:00:46.763

no name to be found.

Member Author


@DaanHoogland I think we'll have to check the command against our own snapshots; when we create them we generally pass a UUID-looking name. @shwstppr @davidjumani can you test and confirm (CentOS 7 vs CentOS 8) with this PR?
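
For reference, a minimal sketch of the relevant qemu-img commands (the disk path and snapshot name below are hypothetical; the subcommands themselves are standard qemu-img):

disk=/var/lib/libvirt/images/example.qcow2   # hypothetical path
qemu_img=qemu-img

# A snapshot created with an explicit name carries that name in the TAG column of the listing:
$qemu_img snapshot -c 45f44e0b-915a-4439-a7c8-7b5c31e0f7f1 "$disk"

# In `qemu-img snapshot -l` output, column 1 is the numeric ID and column 2 is the TAG (name).
# The cleanup in this PR skips the two header lines and deletes each snapshot by TAG, since
# newer qemu-img (e.g. on Ubuntu 20.04) reportedly accepts only the name for `snapshot -d`:
$qemu_img snapshot -l "$disk" | tail -n +3 | awk '{print $2}' | xargs -I {} $qemu_img snapshot -d {} "$disk"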

@blueorangutan

Packaging result: ✔centos7 ✔debian. JID-1237

@rohityadavcloud
Member Author

@blueorangutan test

@rohityadavcloud
Member Author

@blueorangutan test

@blueorangutan

@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

Trillian test result (tid-1555)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 40110 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr4094-t1555-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_list_ids_parameter.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_usage.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Smoke tests completed. 80 look OK, 3 have error(s)
Only failed tests results shown below:

Test Result Time (s) Test File
ContextSuite context=TestListIdsParams>:setup Error 0.00 test_list_ids_parameter.py
test_01_snapshot_root_disk Error 3.17 test_snapshots.py
test_02_list_snapshots_with_removed_data_store Error 12.83 test_snapshots.py
test_01_snapshot_usage Error 3.16 test_usage.py

@rohityadavcloud
Member Author

@blueorangutan test

@blueorangutan

@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

Trillian test result (tid-1556)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 37194 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr4094-t1556-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_list_ids_parameter.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_usage.py
Smoke tests completed. 80 look OK, 3 have error(s)
Only failed tests results shown below:

Test Result Time (s) Test File
ContextSuite context=TestListIdsParams>:setup Error 0.00 test_list_ids_parameter.py
test_01_snapshot_root_disk Error 4.18 test_snapshots.py
test_02_list_snapshots_with_removed_data_store Error 14.95 test_snapshots.py
test_01_snapshot_usage Error 4.16 test_usage.py

@rohityadavcloud
Member Author

@blueorangutan package

@blueorangutan

@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.

@blueorangutan

Packaging result: ✔centos7 ✔debian. JID-1243

@rohityadavcloud
Member Author

@blueorangutan test

@blueorangutan

@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

Trillian test result (tid-1566)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 34460 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr4094-t1566-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_list_ids_parameter.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_usage.py
Smoke tests completed. 79 look OK, 4 have error(s)
Only failed tests results shown below:

Test Result Time (s) Test File
ContextSuite context=TestListIdsParams>:setup Error 0.00 test_list_ids_parameter.py
test_04_rvpc_privategw_static_routes Failure 882.18 test_privategw_acl.py
test_01_snapshot_root_disk Error 3.15 test_snapshots.py
test_02_list_snapshots_with_removed_data_store Error 13.88 test_snapshots.py
test_01_snapshot_usage Error 2.12 test_usage.py

@rohityadavcloud
Member Author

Looks like this change breaks snapshots on older distros (CentOS 7) :(

@rohityadavcloud
Member Author

ping @shwstppr @davidjumani - let me know whether this issue has been handled in the other PR (CentOS 8 / Ubuntu 20.04); if so, I'll close this PR.

@shwstppr
Contributor

shwstppr commented Jun 5, 2020

@rhtyd Checked with CentOS 8, with this PR branch applied on top of #4068. No regression in snapshot behavior.

  • Works for stopped VM
  • Does not work for running VM
(localcloud) 🐱 > create snapshot volumeid=3b2ba1c3-acc5-4ab3-ab93-9c391c13532e name=testcmk
🙈 Error: (HTTP 530, error code 4250) KVM Snapshot is not supported: 1
(localcloud) 🐱 > stop virtualmachine id=8dfa8795-d5d3-472e-836b-87d4d0b50c32 
{
  "virtualmachine": {
    "account": "admin",
    "affinitygroup": [],
    "cpunumber": 1,
    "cpuspeed": 500,
    "cpuused": "6.12%",
    "created": "2020-06-05T02:25:14-0400",
    "details": {
      "Message.ReservedCapacityFreed.Flag": "false"
    },
    "diskioread": 3252,
    "diskiowrite": 1299,
    "diskkbsread": 126593,
    "diskkbswrite": 13906,
    "displayname": "VM-8dfa8795-d5d3-472e-836b-87d4d0b50c32",
    "displayvm": true,
    "domain": "ROOT",
    "domainid": "8c14d697-a6f3-11ea-b8d1-5254000bd6e5",
    "guestosid": "8c99acec-a6f3-11ea-b8d1-5254000bd6e5",
    "haenable": false,
    "hypervisor": "KVM",
    "id": "8dfa8795-d5d3-472e-836b-87d4d0b50c32",
    "instancename": "i-2-3-VM",
    "isdynamicallyscalable": false,
    "jobid": "8fde136a-5738-4fab-bf5f-ba5f7d8cf919",
    "jobstatus": 0,
    "memory": 512,
    "memoryintfreekbs": 295804,
    "memorykbs": 524288,
    "memorytargetkbs": 524288,
    "name": "VM-8dfa8795-d5d3-472e-836b-87d4d0b50c32",
    "networkkbsread": 3,
    "networkkbswrite": 8,
    "nic": [
      {
        "extradhcpoption": [],
        "gateway": "10.1.1.1",
        "id": "6d7efdca-cb23-4329-aad3-dac5154d1517",
        "ipaddress": "10.1.1.175",
        "isdefault": true,
        "macaddress": "02:00:4e:8d:00:01",
        "netmask": "255.255.255.0",
        "networkid": "81be6827-8f15-407e-8f55-f0efe042545a",
        "networkname": "test",
        "secondaryip": [],
        "traffictype": "Guest",
        "type": "Isolated"
      }
    ],
    "ostypeid": "8c99acec-a6f3-11ea-b8d1-5254000bd6e5",
    "passwordenabled": false,
    "rootdeviceid": 0,
    "rootdevicetype": "ROOT",
    "securitygroup": [],
    "serviceofferingid": "dda5ffb1-1d1b-46ff-b496-8d8758042572",
    "serviceofferingname": "Small Instance",
    "state": "Stopped",
    "tags": [],
    "templatedisplaytext": "CentOS 5.5(64-bit) no GUI (KVM)",
    "templateid": "8c206fe5-a6f3-11ea-b8d1-5254000bd6e5",
    "templatename": "CentOS 5.5(64-bit) no GUI (KVM)",
    "userid": "c332deea-a6f3-11ea-b8d1-5254000bd6e5",
    "username": "admin",
    "zoneid": "8f610c3f-a5a2-4775-b10d-1c293e82f64d",
    "zonename": "z1"
  }
}
(localcloud) 🐱 > create snapshot volumeid=3b2ba1c3-acc5-4ab3-ab93-9c391c13532e name=testcmk
{
  "snapshot": {
    "account": "admin",
    "created": "2020-06-05T02:31:27-0400",
    "domain": "ROOT",
    "domainid": "8c14d697-a6f3-11ea-b8d1-5254000bd6e5",
    "id": "900b2067-be46-4d55-b423-e75894300b78",
    "intervaltype": "MANUAL",
    "name": "testcmk",
    "osdisplayname": "CentOS 5.5 (64-bit)",
    "ostypeid": "8c99acec-a6f3-11ea-b8d1-5254000bd6e5",
    "physicalsize": 1510604800,
    "revertable": true,
    "snapshottype": "MANUAL",
    "state": "BackedUp",
    "tags": [],
    "virtualsize": 8589934592,
    "volumeid": "3b2ba1c3-acc5-4ab3-ab93-9c391c13532e",
    "volumename": "ROOT-3",
    "volumetype": "ROOT",
    "zoneid": "8f610c3f-a5a2-4775-b10d-1c293e82f64d"
  }
}
(localcloud) 🐱 > exit
[root@srvr1 ~]# hostnamectl
   Static hostname: localhost.localdomain
Transient hostname: srvr1.cloud.priv
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 5319303eb7f24b7b9ff80b20b150cc0c
           Boot ID: 0fdccec7d6354a588b3dd13b3161e7d5
    Virtualization: kvm
  Operating System: CentOS Linux 8 (Core)
       CPE OS Name: cpe:/o:centos:centos:8
            Kernel: Linux 4.18.0-147.8.1.el8_1.x86_64
      Architecture: x86-64

@andrijapanicsb
Contributor

Keep in mind that volume snapshots taken while the VM is running (kvm.snapshot.enabled=true) and while the VM is stopped go through completely different code paths - i.e. both should be tested.

@shwstppr
Contributor

shwstppr commented Jun 5, 2020

@andrijapanicsb will check and report back after changing that setting cc @rhtyd

@shwstppr
Contributor

shwstppr commented Jun 5, 2020

After setting kvm.snapshot.enabled to true:

  • Stopped VM snapshot working fine
  • Running VM snapshot failed:
(localcloud) 🐱 > update configuration name="kvm.snapshot.enabled" value=true
{
  "configuration": {
    "category": "Snapshots",
    "description": "whether snapshot is enabled for KVM hosts",
    "isdynamic": false,
    "name": "kvm.snapshot.enabled",
    "value": "true"
  }
}
(localcloud) 🐱 > create snapshot volumeid=3b2ba1c3-acc5-4ab3-ab93-9c391c13532e name=testcmk1
{
  "accountid": "c331c341-a6f3-11ea-b8d1-5254000bd6e5",
  "cmd": "org.apache.cloudstack.api.command.user.snapshot.CreateSnapshotCmd",
  "completed": "2020-06-05T07:23:58-0400",
  "created": "2020-06-05T07:23:55-0400",
  "jobid": "7d030c32-8398-4e47-98cc-ab19e5e4d8cc",
  "jobinstanceid": "d9796b43-14e6-47b4-a263-ca077dd4c46c",
  "jobinstancetype": "Snapshot",
  "jobprocstatus": 0,
  "jobresult": {
    "errorcode": 530,
    "errortext": "Failed to create snapshot due to an internal error creating snapshot for volume 3b2ba1c3-acc5-4ab3-ab93-9c391c13532e"
  },
  "jobresultcode": 530,
  "jobresulttype": "object",
  "jobstatus": 2,
  "userid": "c332deea-a6f3-11ea-b8d1-5254000bd6e5"
}
🙈 Error: async API failed for job 7d030c32-8398-4e47-98cc-ab19e5e4d8cc
2020-06-05 07:23:58,637 DEBUG [c.c.a.t.Request] (Work-Job-Executor-11:ctx-470eb62a job-43/job-44 ctx-676bb70e) (logid:7d030c32) Seq 1-3130001741022494770: Sending  { Cmd , MgmtId: 90520731506405, via: 1(localhost.localdomain), Ver: v1, Flags: 100011, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e/43c6b30d-3c03-4970-9b34-63a7b24c5d87","volume":{"uuid":"3b2ba1c3-acc5-4ab3-ab93-9c391c13532e","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b4b078fe-1969-395e-b69d-68f1adfbf620","id":1,"poolType":"NetworkFilesystem","host":"172.20.1.22","path":"/export/primary","port":2049,"url":"NetworkFilesystem://172.20.1.22/export/primary/?ROLE=Primary&STOREUUID=b4b078fe-1969-395e-b69d-68f1adfbf620","isManaged":false}},"name":"ROOT-3","size":8589934592,"path":"3b2ba1c3-acc5-4ab3-ab93-9c391c13532e","volumeId":3,"vmName":"i-2-3-VM","accountId":2,"format":"QCOW2","provisioningType":"THIN","id":3,"deviceId":0,"hypervisorType":"KVM","directDownload":false},"dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b4b078fe-1969-395e-b69d-68f1adfbf620","id":1,"poolType":"NetworkFilesystem","host":"172.20.1.22","path":"/export/primary","port":2049,"url":"NetworkFilesystem://172.20.1.22/export/primary/?ROLE=Primary&STOREUUID=b4b078fe-1969-395e-b69d-68f1adfbf620","isManaged":false}},"vmName":"i-2-3-VM","name":"testcmk1","hypervisorType":"KVM","id":3,"quiescevm":false,"physicalSize":0}},"destTO":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"snapshots/2/3","volume":{"uuid":"3b2ba1c3-acc5-4ab3-ab93-9c391c13532e","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b4b078fe-1969-395e-b69d-68f1adfbf620","id":1,"poolType":"NetworkFilesystem","host":"172.20.1.22","path":"/export/primary","port":2049,"url":"NetworkFilesystem://172.20.1.22/export/primary/?ROLE=Primary&STOREUUID=b4b078fe-1969-395e-b69d-68f1adfbf620","isManaged":false}},"name":"ROOT-3","size":8589934592,"path":"3b2ba1c3-acc5-4ab3-ab93-9c391c13532e","volumeId":3,"vmName":"i-2-3-VM","accountId":2,"format":"QCOW2","provisioningType":"THIN","id":3,"deviceId":0,"hypervisorType":"KVM","directDownload":false},"dataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.20.1.22/export/secondary","_role":"Image"}},"vmName":"i-2-3-VM","name":"testcmk1","hypervisorType":"KVM","id":3,"quiescevm":false,"physicalSize":0}},"executeInSequence":false,"options":{"fullSnapshot":"true"},"options2":{},"wait":21600}}] }
2020-06-05 07:23:58,833 DEBUG [c.c.a.t.Request] (AgentManager-Handler-13:null) (logid:) Seq 1-3130001741022494770: Processing:  { Ans: , MgmtId: 90520731506405, via: 1, Ver: v1, Flags: 10, [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"qemu-img: Could not open '/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e': Failed to get shared \"write\" lockIs another process using the image [/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e]?there is no 43c6b30d-3c03-4970-9b34-63a7b24c5d87 on disk /mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e","wait":0}}] }
2020-06-05 07:23:58,833 DEBUG [c.c.a.t.Request] (Work-Job-Executor-11:ctx-470eb62a job-43/job-44 ctx-676bb70e) (logid:7d030c32) Seq 1-3130001741022494770: Received:  { Ans: , MgmtId: 90520731506405, via: 1(localhost.localdomain), Ver: v1, Flags: 10, { CopyCmdAnswer } }
2020-06-05 07:23:58,876 DEBUG [c.c.s.s.SnapshotManagerImpl] (Work-Job-Executor-11:ctx-470eb62a job-43/job-44 ctx-676bb70e) (logid:7d030c32) Failed to create snapshotqemu-img: Could not open '/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e': Failed to get shared "write" lockIs another process using the image [/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e]?there is no 43c6b30d-3c03-4970-9b34-63a7b24c5d87 on disk /mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e
2020-06-05 07:23:58,876 DEBUG [c.c.r.ResourceLimitManagerImpl] (Work-Job-Executor-11:ctx-470eb62a job-43/job-44 ctx-676bb70e) (logid:7d030c32) Updating resource Type = snapshot count for Account = 2 Operation = decreasing Amount = 1
2020-06-05 07:23:58,884 DEBUG [c.c.r.ResourceLimitManagerImpl] (Work-Job-Executor-11:ctx-470eb62a job-43/job-44 ctx-676bb70e) (logid:7d030c32) Updating resource Type = secondary_storage count for Account = 2 Operation = decreasing Amount = 8589934592
2020-06-05 07:23:58,901 ERROR [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-11:ctx-470eb62a job-43/job-44 ctx-676bb70e) (logid:7d030c32) Take snapshot: 3 failed
com.cloud.utils.exception.CloudRuntimeException: qemu-img: Could not open '/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e': Failed to get shared "write" lockIs another process using the image [/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e]?there is no 43c6b30d-3c03-4970-9b34-63a7b24c5d87 on disk /mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e
	at org.apache.cloudstack.storage.snapshot.SnapshotServiceImpl.backupSnapshot(SnapshotServiceImpl.java:301)
	at org.apache.cloudstack.storage.snapshot.DefaultSnapshotStrategy.backupSnapshot(DefaultSnapshotStrategy.java:171)
	at com.cloud.storage.snapshot.SnapshotManagerImpl.backupSnapshotToSecondary(SnapshotManagerImpl.java:1213)
	at com.cloud.storage.snapshot.SnapshotManagerImpl.takeSnapshot(SnapshotManagerImpl.java:1164)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:95)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
	at com.sun.proxy.$Proxy196.takeSnapshot(Unknown Source)
	at org.apache.cloudstack.storage.volume.VolumeServiceImpl.takeSnapshot(VolumeServiceImpl.java:2073)
	at com.cloud.storage.VolumeApiServiceImpl.orchestrateTakeVolumeSnapshot(VolumeApiServiceImpl.java:2541)
	at com.cloud.storage.VolumeApiServiceImpl.orchestrateTakeVolumeSnapshot(VolumeApiServiceImpl.java:3465)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
	at com.cloud.storage.VolumeApiServiceImpl.handleVmWorkJob(VolumeApiServiceImpl.java:3471)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:95)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
	at com.sun.proxy.$Proxy204.handleVmWorkJob(Unknown Source)
	at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
	at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:603)
	at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
	at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
	at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
	at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
	at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
	at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:551)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
2020-06-05 07:23:58,902 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-11:ctx-470eb62a job-43/job-44 ctx-676bb70e) (logid:7d030c32) Invocation exception, caused by: com.cloud.utils.exception.CloudRuntimeException: qemu-img: Could not open '/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e': Failed to get shared "write" lockIs another process using the image [/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e]?there is no 43c6b30d-3c03-4970-9b34-63a7b24c5d87 on disk /mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e
2020-06-05 07:23:58,902 INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-11:ctx-470eb62a job-43/job-44 ctx-676bb70e) (logid:7d030c32) Rethrow exception com.cloud.utils.exception.CloudRuntimeException: qemu-img: Could not open '/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e': Failed to get shared "write" lockIs another process using the image [/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e]?there is no 43c6b30d-3c03-4970-9b34-63a7b24c5d87 on disk /mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e
2020-06-05 07:23:58,902 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-11:ctx-470eb62a job-43/job-44) (logid:7d030c32) Done with run of VM work job: com.cloud.vm.VmWorkTakeVolumeSnapshot for VM 3, job origin: 43
2020-06-05 07:23:58,902 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-11:ctx-470eb62a job-43/job-44) (logid:7d030c32) Unable to complete AsyncJobVO {id:44, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkTakeVolumeSnapshot, cmdInfo: rO0ABXNyACVjb20uY2xvdWQudm0uVm1Xb3JrVGFrZVZvbHVtZVNuYXBzaG90BL5gG4Li1c8CAAZaAAthc3luY0JhY2t1cFoACXF1aWVzY2VWbUwADGxvY2F0aW9uVHlwZXQAKUxjb20vY2xvdWQvc3RvcmFnZS9TbmFwc2hvdCRMb2NhdGlvblR5cGU7TAAIcG9saWN5SWR0ABBMamF2YS9sYW5nL0xvbmc7TAAKc25hcHNob3RJZHEAfgACTAAIdm9sdW1lSWRxAH4AAnhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2NvdW50SWRKAAZ1c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cmluZzt4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAAA3QAFFZvbHVtZUFwaVNlcnZpY2VJbXBsAABwc3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhyABBqYXZhLmxhbmcuTnVtYmVyhqyVHQuU4IsCAAB4cAAAAAAAAAAAc3EAfgAHAAAAAAAAAANxAH4ACg, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 90520731506405, completeMsid: null, lastUpdated: null, lastPolled: null, created: Fri Jun 05 07:23:55 EDT 2020, removed: null}, job origin:43
com.cloud.utils.exception.CloudRuntimeException: qemu-img: Could not open '/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e': Failed to get shared "write" lockIs another process using the image [/mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e]?there is no 43c6b30d-3c03-4970-9b34-63a7b24c5d87 on disk /mnt/b4b078fe-1969-395e-b69d-68f1adfbf620/3b2ba1c3-acc5-4ab3-ab93-9c391c13532e
	at org.apache.cloudstack.storage.snapshot.SnapshotServiceImpl.backupSnapshot(SnapshotServiceImpl.java:301)
	at org.apache.cloudstack.storage.snapshot.DefaultSnapshotStrategy.backupSnapshot(DefaultSnapshotStrategy.java:171)
	at com.cloud.storage.snapshot.SnapshotManagerImpl.backupSnapshotToSecondary(SnapshotManagerImpl.java:1213)
	at com.cloud.storage.snapshot.SnapshotManagerImpl.takeSnapshot(SnapshotManagerImpl.java:1164)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:95)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
	at com.sun.proxy.$Proxy196.takeSnapshot(Unknown Source)
	at org.apache.cloudstack.storage.volume.VolumeServiceImpl.takeSnapshot(VolumeServiceImpl.java:2073)
	at com.cloud.storage.VolumeApiServiceImpl.orchestrateTakeVolumeSnapshot(VolumeApiServiceImpl.java:2541)
	at com.cloud.storage.VolumeApiServiceImpl.orchestrateTakeVolumeSnapshot(VolumeApiServiceImpl.java:3465)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
	at com.cloud.storage.VolumeApiServiceImpl.handleVmWorkJob(VolumeApiServiceImpl.java:3471)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:95)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
	at com.sun.proxy.$Proxy204.handleVmWorkJob(Unknown Source)
	at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
	at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:603)
	at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
	at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
	at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
	at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
	at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
	at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:551)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)

@rohityadavcloud
Member Author

I'll close this; please refer to this PR and port the changes to #4068 instead @shwstppr @davidjumani. Thanks.
