CLOUDSTACK-8826: XenServer - Use device id passed as part of attach v… #792
Conversation
Note that I had to undo changes from PR #773 in my local setup to run the tests.
cloudstack-pull-rats #552 SUCCESS
cloudstack-pull-analysis #487 ABORTED
code LGTM
Can anyone review this?
        }
    }
} catch (final Exception e) {
    s_logger.debug("destroyUnattachedVBD failed due to " + e.toString());
Maybe add the exception and/or catch more specific exceptions.
@DaanHoogland Are you suggesting to log the full exception? Currently only the message is getting logged. destroyUnattachedVBD is a best-effort call to clean up VBDs.
@koushik-das I would catch only specific exceptions, give them specific messages, and report on results in a finally block. The present state is that any number-format, out-of-memory, stack-overflow or divide-by-zero error will be caught here, where it is not relevant. My initial remark was to make sure we at least know what kind of exception is caught. This is not always clear from the message, which may be null.
@DaanHoogland Now logging the full exception
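A minimal sketch of the shape this exchange points at, logging the exception object so the class and stack trace survive even when the message is null; the class, method and Runnable parameter below are hypothetical stand-ins, not the actual CloudStack XenServer resource code:

import org.apache.log4j.Logger;

// Illustrative only: VbdCleanupSketch and destroyAction are invented names.
public class VbdCleanupSketch {
    private static final Logger s_logger = Logger.getLogger(VbdCleanupSketch.class);

    // Best-effort cleanup: failures are logged but never propagated.
    void destroyUnattachedVbd(final Runnable destroyAction) {
        try {
            destroyAction.run();
        } catch (final RuntimeException e) {
            // Passing the exception object (not just e.toString()) records the
            // exception class and stack trace, even when the message is null.
            s_logger.debug("destroyUnattachedVBD failed", e);
        }
    }
}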
One remark, otherwise LGTM.
…olume API properly
If the device id is passed as part of the API and is available then use it, otherwise fall back on XS to automatically assign one. For ISO the device id used is 3 and it is processed before any other entry to avoid conflict.
Force-pushed from 1dd27bd to f5b9a96
cloudstack-pull-rats #679 SUCCESS
cloudstack-pull-analysis #629 SUCCESS
Merging as 2 LGTMs.
CLOUDSTACK-8826: XenServer - Use device id passed as part of attach volume API properly. If the device id is passed as part of the API and is available then use it, otherwise fall back on XS to automatically assign one. For ISO the device id used is 3 and it is processed before any other entry to avoid conflict. Signed-off-by: Koushik Das <koushik@apache.org>
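A rough sketch of the selection rule the commit message describes, under the assumption that a free requested id is honoured and the ISO slot is fixed at device 3; the class, method and parameter names below are illustrative, not the actual CloudStack/XenServer resource code:

import java.util.Set;

// Illustrative only: DeviceIdSketch and chooseDeviceId are invented names.
public class DeviceIdSketch {
    static final String ISO_DEVICE_ID = "3"; // ISO always attaches at device 3

    // Returns the device id to request from XenServer, or null to let XS auto-assign one.
    static String chooseDeviceId(final String requestedDeviceId, final boolean isIso,
                                 final Set<String> usedDeviceIds) {
        if (isIso) {
            // The ISO entry is processed before any other entry so that
            // nothing else can claim device 3 first.
            return ISO_DEVICE_ID;
        }
        if (requestedDeviceId != null && !ISO_DEVICE_ID.equals(requestedDeviceId)
                && !usedDeviceIds.contains(requestedDeviceId)) {
            // The id passed via the API is available, so honour it.
            return requestedDeviceId;
        }
        // Otherwise fall back on XenServer to assign a device id automatically.
        return null;
    }
}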
@koushik-das did this fix get tested with an HVM VM having more than 4 VDIs? We are experiencing an issue where an HVM VM on XenServer 6.5 with 4 VDIs (1 root + 3 data disks) fails to start after a shutdown, and it seems to be related to this part of the code. This is currently working on the 4.4.x version of ACS.
I confirm what @pdion891 described in the previous comment.
@pdion891 This wasn't tested with HVM VMs.
…olume API properly
If the device id is passed as part of the API and is available then use it, otherwise fall back on XS to automatically assign one.
For ISO the device id used is 3 and it is processed before any other entry to avoid conflict.
This is only specific to XS, ran the following tests:
BVT: test/integration/smoke/test_volumes.py
Test Volume creation for all Disk Offerings (incl. custom) ... === TestName: test_01_create_volume | Status : SUCCESS ===
ok
Attach a created Volume to a Running VM ... === TestName: test_02_attach_volume | Status : SUCCESS ===
ok
Download a Volume attached to a VM ... === TestName: test_03_download_attached_volume | Status : SUCCESS ===
ok
Delete a Volume attached to a VM ... === TestName: test_04_delete_attached_volume | Status : SUCCESS ===
ok
Detach a Volume attached to a VM ... === TestName: test_05_detach_volume | Status : SUCCESS ===
ok
Download a Volume unattached to an VM ... === TestName: test_06_download_detached_volume | Status : SUCCESS ===
ok
Test resize (negative) non-existent volume ... === TestName: test_07_resize_fail | Status : SUCCESS ===
ok
Test resize a volume ... === TestName: test_08_resize_volume | Status : SUCCESS ===
ok
Delete a Volume unattached to an VM ... === TestName: test_09_delete_detached_volume | Status : SUCCESS ===
ok
Ran 9 tests in 1116.972s
OK
Regression: test/integration/component/test_volumes.py
Test Volume attach/detach to VM (5 data volumes) ... === TestName: test_01_volume_attach_detach | Status : SUCCESS ===
ok
Test Attach volumes (max capacity) ... === TestName: test_01_volume_attach | Status : SUCCESS ===
ok
Test attach volumes (more than max) to an instance ... === TestName: test_02_volume_attach_max | Status : SUCCESS ===
ok
Test Volumes and ISO attach ... === TestName: test_01_volume_iso_attach | Status : SUCCESS ===
ok
Test custom disk sizes beyond range ... === TestName: test_deployVmWithCustomDisk | Status : SUCCESS ===
ok
@desc:Volume is not retaining same uuid when migrating from one ... SKIP: No suitable storage pools found for volume migration. Skipping
Attach a created Volume to a Running VM ... === TestName: test_01_attach_volume | Status : SUCCESS ===
ok
Detach a Volume attached to a VM ... === TestName: test_02_detach_volume | Status : SUCCESS ===
ok
Delete a Volume unattached to an VM ... === TestName: test_03_delete_detached_volume | Status : SUCCESS ===
ok
Create a volume under a non-root domain as non-root-domain user ... === TestName: test_create_volume_under_domain | Status : SUCCESS ===
ok
Ran 10 tests in 1750.686s
OK (SKIP=1)