CLOUDSTACK-8826: XenServer - Use device id passed as part of attach volume API properly #792

Merged: 1 commit merged into apache:master on Sep 22, 2015

Conversation

koushik-das
Contributor

CLOUDSTACK-8826: XenServer - Use device id passed as part of attach volume API properly

If a device id is passed as part of the API call and that slot is available, use it; otherwise fall back on XS to automatically assign one.
For ISOs the device id used is 3, and the ISO entry is processed before any other entry to avoid conflicts.
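For illustration, here is a minimal Java sketch of that selection logic. It is not the actual CloudStack code; the class, method and variable names (DeviceIdSelector, pickDeviceId, requestedDeviceId, usedDeviceIds) and the "autodetect" sentinel are assumptions made only for this sketch.

    import java.util.Set;

    public class DeviceIdSelector {
        // XenServer convention: device 3 is the CD/ISO drive, so ISOs always use it
        // and the ISO entry is handled before the data volumes to avoid a clash.
        static final String ISO_DEVICE_ID = "3";
        // Sentinel meaning "let XenServer pick the next free slot itself" (assumed here).
        static final String AUTODETECT = "autodetect";

        // Returns the userdevice value to use for a data-volume VBD.
        static String pickDeviceId(final Long requestedDeviceId, final Set<Long> usedDeviceIds) {
            if (requestedDeviceId != null && !usedDeviceIds.contains(requestedDeviceId)) {
                // The attach volume API supplied an id and that slot is free: honour it.
                return String.valueOf(requestedDeviceId);
            }
            // Otherwise fall back on XenServer to assign a device id automatically.
            return AUTODETECT;
        }
    }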

This is specific to XS only; I ran the following tests:

BVT: test/integration/smoke/test_volumes.py

Test Volume creation for all Disk Offerings (incl. custom) ... === TestName: test_01_create_volume | Status : SUCCESS ===
ok
Attach a created Volume to a Running VM ... === TestName: test_02_attach_volume | Status : SUCCESS ===
ok
Download a Volume attached to a VM ... === TestName: test_03_download_attached_volume | Status : SUCCESS ===
ok
Delete a Volume attached to a VM ... === TestName: test_04_delete_attached_volume | Status : SUCCESS ===
ok
Detach a Volume attached to a VM ... === TestName: test_05_detach_volume | Status : SUCCESS ===
ok
Download a Volume unattached to an VM ... === TestName: test_06_download_detached_volume | Status : SUCCESS ===
ok
Test resize (negative) non-existent volume ... === TestName: test_07_resize_fail | Status : SUCCESS ===
ok
Test resize a volume ... === TestName: test_08_resize_volume | Status : SUCCESS ===
ok
Delete a Volume unattached to an VM ... === TestName: test_09_delete_detached_volume | Status : SUCCESS ===
ok


Ran 9 tests in 1116.972s

OK

Regression: test/integration/component/test_volumes.py

Test Volume attach/detach to VM (5 data volumes) ... === TestName: test_01_volume_attach_detach | Status : SUCCESS ===
ok
Test Attach volumes (max capacity) ... === TestName: test_01_volume_attach | Status : SUCCESS ===
ok
Test attach volumes (more than max) to an instance ... === TestName: test_02_volume_attach_max | Status : SUCCESS ===
ok
Test Volumes and ISO attach ... === TestName: test_01_volume_iso_attach | Status : SUCCESS ===
ok
Test custom disk sizes beyond range ... === TestName: test_deployVmWithCustomDisk | Status : SUCCESS ===
ok
@desc:Volume is not retaining same uuid when migrating from one ... SKIP: No suitable storage pools found for volume migration. Skipping
Attach a created Volume to a Running VM ... === TestName: test_01_attach_volume | Status : SUCCESS ===
ok
Detach a Volume attached to a VM ... === TestName: test_02_detach_volume | Status : SUCCESS ===
ok
Delete a Volume unattached to an VM ... === TestName: test_03_delete_detached_volume | Status : SUCCESS ===
ok
Create a volume under a non-root domain as non-root-domain user ... === TestName: test_create_volume_under_domain | Status : SUCCESS ===
ok


Ran 10 tests in 1750.686s

OK (SKIP=1)

@koushik-das
Contributor Author

Note that I had to undo changes from PR #773 in my local setup to run the tests.

@asfbot

asfbot commented Sep 9, 2015

cloudstack-pull-rats #552 SUCCESS
This pull request looks good

@asfbot

asfbot commented Sep 9, 2015

cloudstack-pull-analysis #487 ABORTED

@rohityadavcloud
Member

code LGTM

@koushik-das
Contributor Author

Can anyone review this?

Inline review comment on the following code:

                }
            }
        } catch (final Exception e) {
            s_logger.debug("destroyUnattachedVBD failed due to " + e.toString());
@DaanHoogland
Contributor

Maybe add the exception and/or catch more specific exceptions.

@koushik-das
Contributor Author

@DaanHoogland Are you suggesting logging the full exception? Currently only the message is logged. destroyUnattachedVBD is a best-effort call to clean up VBDs.

@DaanHoogland
Contributor

@koushik-das I would catch only specific exceptions, give them specific messages, and report on the result in a finally block. In the present state, any number-format, out-of-memory, stack-overflow or divide-by-zero error would be caught here, where it is not relevant. My initial remark was to make sure we at least know what kind of exception is caught; that is not always clear from the message, which may be null.
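A rough sketch of that pattern, assuming the XAPI exception types commonly seen in the XenServer plugin (XenAPIException, XmlRpcException) and eliding the actual cleanup body; this is an illustration, not the code from the commit:

    try {
        // ... walk the VM's VBDs and destroy the unattached ones (elided) ...
    } catch (final XenAPIException e) {
        // Catch only the XAPI failures expected from VBD cleanup and log them in full.
        s_logger.debug("destroyUnattachedVBD failed due to XenAPI error", e);
    } catch (final XmlRpcException e) {
        s_logger.debug("destroyUnattachedVBD failed due to XML-RPC error", e);
    } finally {
        // Report on the result of the cleanup here, whether it succeeded or not.
    }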

@koushik-das
Contributor Author

@DaanHoogland Now logging the full exception
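For reference, passing the exception itself to the logger (instead of only e.toString()) also records the stack trace; a minimal sketch of that change, not necessarily the exact line in the commit:

    } catch (final Exception e) {
        // Supplying the Throwable as the second argument logs the full stack trace.
        s_logger.debug("destroyUnattachedVBD failed due to ", e);
    }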

@DaanHoogland
Contributor

One remark, otherwise LGTM.

CLOUDSTACK-8826: XenServer - Use device id passed as part of attach volume API properly

If a device id is passed as part of the API call and that slot is available, use it; otherwise fall back on XS to automatically assign one.
For ISOs the device id used is 3, and the ISO entry is processed before any other entry to avoid conflicts.
@asfbot

asfbot commented Sep 21, 2015

cloudstack-pull-rats #679 SUCCESS
This pull request looks good

@asfbot

asfbot commented Sep 21, 2015

cloudstack-pull-analysis #629 SUCCESS
This pull request looks good

@koushik-das
Contributor Author

Merging, as there are 2 LGTMs.

asfgit merged commit f5b9a96 into apache:master on Sep 22, 2015
asfgit pushed a commit that referenced this pull request Sep 22, 2015
CLOUDSTACK-8826: XenServer - Use device id passed as part of attach volume API properly

If a device id is passed as part of the API call and that slot is available, use it; otherwise fall back on XS to automatically assign one.
For ISOs the device id used is 3, and the ISO entry is processed before any other entry to avoid conflicts.

Signed-off-by: Koushik Das <koushik@apache.org>
@pdion891
Contributor

@koushik-das Did this fix get tested with an HVM VM having more than 4 VDIs? We are experiencing an issue where an HVM VM on XenServer 6.5 with 4 VDIs (1 root + 3 data disks) fails to start after a shutdown, and it seems to be related to this part of the code. This currently works on the 4.4.x version of ACS.

@simongodard

I confirm what @pdion891 described in the previous comment.
http://markmail.org/thread/4nmyra6aofxtu3o2

@koushik-das
Contributor Author

@pdion891 This wasn't tested with HVM VMs.
