[CLOUDSTACK-10240] ACS cannot migrate a local volume to shared storage #2425

Merged

merged 3 commits into apache:master on Mar 7, 2018

Conversation

@rafaelweingartner
Member

rafaelweingartner commented Jan 23, 2018

CloudStack logically restricts the migration of volumes between local and shared storage. This restriction is a purely logical one and can be removed for XenServer deployments. Therefore, we will enable migration of volumes between local and shared storage in XenServer independently of their service offering. This works as an override mechanism for the disk offering used by volumes. If administrators want to migrate local volumes to shared storage, they should be able to do so (the hypervisor already allows that), and the same applies the other way around.

Extra information for reviewers:
This PR introduces an override mechanism for volume placement. We provide a way for root administrators to override service/disk offering definitions by allocating volumes in storage pools where they could not have been allocated before if we followed the volumes' service/disk offerings.
Therefore, it does not change the type of the disk/service offerings when migrating volumes between local/shared pools. We simply migrate them as requested by the root admin. Furthermore, volumes can only be migrated to "suitable" storage pools, meaning storage pools that have the same tags as the disk/service offering.

In summary, we override placement with respect to the location (shared/local) of storage pools. However, we still consider storage tags to decide which storage pools are suitable.
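The suitability rule described above (scope no longer filters candidate pools, but storage tags still must match) can be sketched as follows. This is a hypothetical illustration, not the actual ACS allocator code; the class and field names are invented for the example.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of the placement override: the pool's scope (local vs.
// shared) no longer filters candidates, but the storage tags required by the
// disk/service offering still must be present on the pool.
public class PoolSuitability {

    public record Pool(String name, boolean shared, Set<String> tags) {}

    // A pool is "suitable" when it carries every tag the offering requires,
    // regardless of whether it is local or shared storage.
    public static List<Pool> suitablePools(List<Pool> candidates, Set<String> offeringTags) {
        return candidates.stream()
                .filter(p -> p.tags().containsAll(offeringTags))
                .collect(Collectors.toList());
    }

    // Small demonstration: an "ssd"-tagged offering matches both a local and a
    // shared pool; the pool without the tag is filtered out.
    public static List<Pool> demo() {
        List<Pool> pools = List.of(
                new Pool("local-1", false, Set.of("ssd")),
                new Pool("shared-1", true, Set.of("ssd", "fast")),
                new Pool("shared-2", true, Set.of("hdd")));
        return suitablePools(pools, Set.of("ssd"));
    }
}
```

Note how `demo()` returns both a local and a shared pool for the same offering: that is the override in a nutshell, with tags as the only remaining constraint.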

@DaanHoogland

A lot of good and welcome refactorings. Too much to review in one commit, IMNSHO. Next time, please separate the reformats, the refactorings, and the logic changes for easier review.

One small change, please: the logger name.

public class FindStoragePoolsForMigrationCmd extends BaseListCmd {
public static final Logger s_logger = Logger.getLogger(FindStoragePoolsForMigrationCmd.class.getName());
public static final Logger s_logger = Logger.getLogger(FindStoragePoolsForMigrationCmdTest.class.getName());

@DaanHoogland

DaanHoogland Jan 24, 2018

Contributor

this is not the test class!

@rafaelweingartner

rafaelweingartner Jan 25, 2018

Member

Thanks! Fixed.

@DaanHoogland

DaanHoogland Jan 25, 2018

Contributor

it is still there, did you push?

@rafaelweingartner

rafaelweingartner Jan 25, 2018

Member

I forgot to push :(
my bad!

@rafaelweingartner

Member

rafaelweingartner commented Jan 27, 2018

@daan, thanks for the feedback here!
I am starting to doubt that we are all using the same code formatter (I am using this one [1]). I can remove the code-formatting changes and leave only the methods I created, if you prefer.

[1] https://cwiki.apache.org/confluence/download/attachments/29687985/ApacheCloudStack.xml?version=1&modificationDate=1382960469000&api=v2

@DaanHoogland

Contributor

DaanHoogland commented Jan 27, 2018

@rafaelweingartner that is the only true formatter for CloudStack, isn't it? Still, I'd like to have reformattings in a separate commit. The mindset for reviewing reformatting is a bit different from that for reviewing semantic/logic changes.

@rafaelweingartner

Member

rafaelweingartner commented Jan 27, 2018

I would say that there is only one formatting style for ACS. However, I have seen so many PRs like this one, with a lot of formatting changes, that I am starting to doubt whether we are all using the same one.

I agree with you. My intention here was never to do code formatting. My Eclipse is configured to format only edited lines, but for some reason it formatted all of the Java files I touched (maybe I pressed Ctrl+Shift+F without noticing). Would you like me to remove the formatting and leave only the real changes I am introducing here?

@DaanHoogland

Contributor

DaanHoogland commented Jan 27, 2018

no, never mind. (next time better!)

@rafaelweingartner

Member

rafaelweingartner commented Jan 27, 2018

ok @DaanHoogland.
Thanks!

[CLOUDSTACK-10240] ACS cannot migrate a volume from local to shared storage.

CloudStack logically restricts the migration of volumes between local and shared storage. This restriction is a purely logical one and can be removed for XenServer deployments. Therefore, we will enable migration of volumes between local and shared storage in XenServer independently of their service offering. This works as an override mechanism for the disk offering used by volumes. If administrators want to migrate local volumes to shared storage, they should be able to do so (the hypervisor already allows that), and the same applies the other way around.
@rafaelweingartner

Member

rafaelweingartner commented Feb 15, 2018

@DaanHoogland I separated the code changes from the formatting ones, as you suggested. I think it is way better now.

rafaelweingartner added some commits Feb 15, 2018

Fix test case test_03_migrate_options_storage_tags
The changes applied were:
- When loading hypervisor capabilities, we must use "default" instead of null
- "Enable" storage migration for the simulator hypervisor
- Remove the restriction in "ClusterScopeStoragePoolAllocator" to find shared pools
@DaanHoogland

Contributor

DaanHoogland commented Feb 19, 2018

@blueorangutan package

@blueorangutan

blueorangutan commented Feb 19, 2018

@DaanHoogland a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.

@blueorangutan

blueorangutan commented Feb 19, 2018

Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1726

@DaanHoogland

Contributor

DaanHoogland commented Feb 27, 2018

@blueorangutan

blueorangutan commented Feb 27, 2018

@DaanHoogland a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

blueorangutan commented Feb 27, 2018

Trillian test result (tid-2285)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 34459 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2425-t2285-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_router_dhcphosts.py
Intermittent failure detected: /marvin/tests/smoke/test_host_maintenance.py
Intermittent failure detected: /marvin/tests/smoke/test_hostha_kvm.py
Smoke tests completed. 64 look OK, 3 have error(s)
Only failed tests results shown below:

Test Result Time (s) Test File
test_04_rvpc_privategw_static_routes Failure 370.30 test_privategw_acl.py
test_02_cancel_host_maintenace_with_migration_jobs Error 153.71 test_host_maintenance.py
test_hostha_enable_ha_when_host_in_maintenance Error 0.87 test_hostha_kvm.py
@rafaelweingartner

Member

rafaelweingartner commented Feb 27, 2018

These failures are the same ones that appeared in another PR that changes different things. How much can we trust these errors? Or, an even better question: can we trust a successful execution?

@DaanHoogland

Contributor

DaanHoogland commented Feb 28, 2018

These are nested environments in which we test, so they might be false positives, as you imply by your question. I think we still need to check the errors. Which PR shows these? I saw failures today, but nowhere the exact same bunch, @rafaelweingartner?

@rafaelweingartner

Member

rafaelweingartner commented Feb 28, 2018

The one I remember right now is the HA test, which I have seen failing in PRs 2438 and 2469.
My doubt is the following: as long as we have false negatives (showing that there is a problem when there isn't), can't we have false positives (showing that everything is OK when it is not) as well?

@DaanHoogland

Contributor

DaanHoogland commented Feb 28, 2018

@rafaelweingartner, first to clarify: my understanding of false positive and false negative is reversed from yours;
a false positive: a bug is reported but it isn't there
a false negative: a bug is not reported in spite of being there
kind of like in the doping-control mafia; positive means you're busted. Maybe a bit counter-intuitive, but I'll run with it (pun as happy accident).

So, more to the point: a false positive might be due to a timing issue that occurs because of the nested nature of the test environment and isn't a real issue. I can think of no false-negative counterpart for that.
Maybe we can think of more examples of the reverse, but the point is that we need to assess, for each failure, whether it is noteworthy, and we need to keep doing code reviews.

@rafaelweingartner

Member

rafaelweingartner commented Feb 28, 2018

@DaanHoogland thanks for the explanations. Yes, I was considering false positive/negative differently, but as long as we are able to explain them further, as you did, we can understand each other.

So it seems that most of these failures are due to runtime issues, normally timeouts. Moreover, as far as we know, it is not possible for the tests to return a "success" message when there is a problem. Unless, of course, there is a problem/case that the current tests do not cover.

@DaanHoogland

Contributor

DaanHoogland commented Mar 2, 2018

YES @rafaelweingartner, that is always the big question. I think we assume some things there, and to be sure we need to 'prove' each individual test. As soon as we've proven them, we will start assuming again, unfortunately. How often do we need to revisit?

@rafaelweingartner

Member

rafaelweingartner commented Mar 2, 2018

That is a good question. For me, it is always best to start solving a bug/issue/new feature with unit tests, because they cost less to run than these integration/smoke tests (this is what I have been doing so far). However, this requires a level of commitment (code has to be designed to be unit testable).

Regarding revisiting tests, I would check them every time they fail. Otherwise, there is too much work for too few hands.

@DaanHoogland

Contributor

DaanHoogland commented Mar 2, 2018

@blueorangutan

blueorangutan commented Mar 2, 2018

@DaanHoogland a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

blueorangutan commented Mar 2, 2018

Trillian test result (tid-2303)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 29005 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2425-t2303-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_primary_storage.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vm_life_cycle.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Intermittent failure detected: /marvin/tests/smoke/test_host_maintenance.py
Intermittent failure detected: /marvin/tests/smoke/test_hostha_kvm.py
Smoke tests completed. 60 look OK, 7 have error(s)
Only failed tests results shown below:

Test Result Time (s) Test File
test_01_add_primary_storage_disabled_host Error 1.06 test_primary_storage.py
test_01_primary_storage_nfs Error 0.17 test_primary_storage.py
ContextSuite context=TestStorageTags>:setup Error 0.29 test_primary_storage.py
test_04_rvpc_privategw_static_routes Failure 346.33 test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store Error 1.26 test_snapshots.py
test_08_migrate_vm Error 125.28 test_vm_life_cycle.py
test_04_rvpc_network_garbage_collector_nics Failure 289.85 test_vpc_redundant.py
test_01_cancel_host_maintenace_with_no_migration_jobs Failure 0.19 test_host_maintenance.py
test_02_cancel_host_maintenace_with_migration_jobs Error 3.50 test_host_maintenance.py
test_hostha_enable_ha_when_host_in_maintenance Error 3.14 test_hostha_kvm.py
@rafaelweingartner

Member

rafaelweingartner commented Mar 2, 2018

Now the failures have increased... :)

@DaanHoogland

Contributor

DaanHoogland commented Mar 5, 2018

:( Jenkins had crashed due to a full disk; that should not be related, and it should not be acting insane:
@blueorangutan test

@blueorangutan

blueorangutan commented Mar 5, 2018

@DaanHoogland a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

blueorangutan commented Mar 5, 2018

Trillian test result (tid-2312)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 28936 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2425-t2312-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_deploy_virtio_scsi_vm.py
Intermittent failure detected: /marvin/tests/smoke/test_outofbandmanagement.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_vpn.py
Intermittent failure detected: /marvin/tests/smoke/test_hostha_kvm.py
Smoke tests completed. 66 look OK, 1 have error(s)
Only failed tests results shown below:

Test Result Time (s) Test File
test_hostha_enable_ha_when_host_in_maintenance Error 1.86 test_hostha_kvm.py
@rafaelweingartner

Member

rafaelweingartner commented Mar 6, 2018

@DaanHoogland this test_hostha_enable_ha_when_host_in_maintenance error seems to be persistent in other PRs as well. Can I consider that the tests finished successfully?

@rafaelweingartner

Member

rafaelweingartner commented Mar 6, 2018

@khos2ow would you mind reviewing this PR?

@DaanHoogland

Contributor

DaanHoogland commented Mar 7, 2018

@rafaelweingartner, yes you can. It seems to be a timing issue: the host does not have the needed state in some occurrences, and does have it in other runs.

@rafaelweingartner

Member

rafaelweingartner commented Mar 7, 2018

Thanks @DaanHoogland!

@khos2ow

khos2ow approved these changes Mar 7, 2018

I haven't had the chance to manually test this myself, but the code LGTM. +1.

@rafaelweingartner

Member

rafaelweingartner commented Mar 7, 2018

If there are no objections, I will be merging this one later today.

@rafaelweingartner rafaelweingartner merged commit f2efbce into apache:master Mar 7, 2018

2 checks passed

Jenkins This pull request looks good
Details
continuous-integration/travis-ci/pr The Travis CI build passed
Details

rafaelweingartner added a commit to rafaelweingartner/cloudstack that referenced this pull request Mar 13, 2018

[CLOUDSTACK-10323] Allow changing disk offering during volume migration
This is a continuation of the work developed in PR apache#2425 (CLOUDSTACK-10240), which provided root admins an override mechanism to move volumes between storage system types (local/shared) even when the disk offering would not allow such an operation. To complete the work, we now provide a way for administrators to enter a new disk offering that reflects the new placement of the volume. We will add an extra parameter to allow the root admin to specify a new disk offering for the volume. Therefore, when the volume is being migrated, it will be possible to replace the disk offering to reflect the new placement of the volume.

The API method will have the following parameters:

* storageid (required)
* volumeid (required)
* livemigrate (optional)
* newdiskofferingid (optional) – this is the new parameter

The expected behavior is the following:

* If "newdiskofferingid" is not provided, the current behavior is maintained. The override mechanism will also keep working as we have seen so far.
* If the "newdiskofferingid" is provided by the admin, we will execute the following checks:
** the new disk offering mode (local/shared) must match the target storage mode. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** we will check if the new disk offering tags match the target storage tags. If they do not match, an exception will be thrown and the operator will receive a message indicating the problem.
** check if the target storage has capacity for the new volume. If it does not have enough space, an exception is thrown and the operator will receive a message indicating the problem.
** check if the size of the volume is the same as the size of the new disk offering. If it is not, we will ALLOW the change of the service offering, and a warning message will be logged.

We execute the change of the disk offering as soon as the migration of the volume finishes. Therefore, if an error happens during the migration and the volume remains on the original storage system, the disk offering will keep reflecting this situation.
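The four checks listed in the commit message (hard failures for mode, tags, and capacity; a warning for a size mismatch) can be sketched as below. This is a hypothetical illustration under invented names, not the actual CloudStack API.

```java
import java.util.Set;

// Hypothetical sketch of the checks described above for the new
// "newdiskofferingid" parameter; class and field names are illustrative.
public class NewDiskOfferingChecks {

    public record Offering(boolean localStorage, Set<String> tags, long sizeGb) {}
    public record TargetPool(boolean localStorage, Set<String> tags, long freeGb) {}

    // Returns true when a size-mismatch warning would be logged; throws on the
    // hard failures (mode mismatch, tag mismatch, insufficient capacity).
    public static boolean validate(Offering offering, TargetPool target, long volumeSizeGb) {
        // 1. The offering mode (local/shared) must match the target storage mode.
        if (offering.localStorage() != target.localStorage()) {
            throw new IllegalArgumentException("disk offering mode does not match target storage mode");
        }
        // 2. The offering tags must match the target storage tags.
        if (!target.tags().containsAll(offering.tags())) {
            throw new IllegalArgumentException("disk offering tags do not match target storage tags");
        }
        // 3. The target storage must have enough capacity for the volume.
        if (target.freeGb() < volumeSizeGb) {
            throw new IllegalArgumentException("target storage does not have enough capacity");
        }
        // 4. A size mismatch is ALLOWED; it only warrants a warning.
        return offering.sizeGb() != volumeSizeGb;
    }

    // Demonstration helper: a shared offering against a shared pool with
    // matching tags passes all hard checks; only the size differs, so the
    // method reports that a warning would be logged.
    public static boolean demoWarns() {
        Offering offering = new Offering(false, Set.of("ssd"), 50);
        TargetPool shared = new TargetPool(false, Set.of("ssd", "fast"), 500);
        return validate(offering, shared, 100);
    }
}
```

Note the ordering: the hard checks throw before the soft size check runs, matching the commit message's intent that the offering change only proceeds (with a logged warning at most) when mode, tags, and capacity are all satisfied.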

rafaelweingartner added a commit to rafaelweingartner/cloudstack that referenced this pull request Mar 13, 2018

[CLOUDSTACK-10323] Allow changing disk offering during volume migration
This is a continuation of work developed on PR apache#2425 (CLOUDSTACK-10240), which provided root admins an override mechanism to move volumes between storage systems types (local/shared) even when the disk offering would not allow such operation. To complete the work, we will now provide a way for administrators to enter a new disk offering that can reflect the new placement of the volume. We will add an extra parameter to allow the root admin inform a new disk offering for the volume. Therefore, when the volume is being migrated, it will be possible to replace the disk offering to reflect the new placement of the volume.

The API method will have the following parameters:

* storageid (required)
* volumeid (required)
* livemigrate(optional)
* newdiskofferingid (optional) – this is the new parameter

The expected behavior is the following:

* If “newdiskofferingid” is not provided the current behavior is maintained. Override mechanism will also keep working as we have seen so far.
* If the “newdiskofferingid” is provided by the admin, we will execute the following checks
** new disk offering mode (local/shared) must match the target storage mode. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** we will check if the new disk offering tags match the target storage tags. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** check if the target storage has the capacity for the new volume. If it does not have enough space, then an exception is thrown and the operator will receive a message indicating the problem.
** check if the size of the volume is the same as the size of the new disk offering. If it is not the same, we will ALLOW the change of the service offering, and a warning message will be logged.

We execute the change of the Disk offering as soon as the migration of the volume finishes. Therefore, if an error happens during the migration and the volume remains in the original storage system, the disk offering will keep reflecting this situation.

rafaelweingartner added a commit to rafaelweingartner/cloudstack that referenced this pull request Mar 13, 2018

[CLOUDSTACK-10323] Allow changing disk offering during volume migration
This is a continuation of work developed on PR apache#2425 (CLOUDSTACK-10240), which provided root admins an override mechanism to move volumes between storage systems types (local/shared) even when the disk offering would not allow such operation. To complete the work, we will now provide a way for administrators to enter a new disk offering that can reflect the new placement of the volume. We will add an extra parameter to allow the root admin inform a new disk offering for the volume. Therefore, when the volume is being migrated, it will be possible to replace the disk offering to reflect the new placement of the volume.

The API method will have the following parameters:

* storageid (required)
* volumeid (required)
* livemigrate(optional)
* newdiskofferingid (optional) – this is the new parameter

The expected behavior is the following:

* If “newdiskofferingid” is not provided the current behavior is maintained. Override mechanism will also keep working as we have seen so far.
* If the “newdiskofferingid” is provided by the admin, we will execute the following checks
** new disk offering mode (local/shared) must match the target storage mode. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** we will check if the new disk offering tags match the target storage tags. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** check if the target storage has the capacity for the new volume. If it does not have enough space, then an exception is thrown and the operator will receive a message indicating the problem.
** check if the size of the volume is the same as the size of the new disk offering. If it is not the same, we will ALLOW the change of the service offering, and a warning message will be logged.

We execute the change of the Disk offering as soon as the migration of the volume finishes. Therefore, if an error happens during the migration and the volume remains in the original storage system, the disk offering will keep reflecting this situation.

rafaelweingartner added a commit to rafaelweingartner/cloudstack that referenced this pull request Mar 13, 2018

[CLOUDSTACK-10323] Allow changing disk offering during volume migration
This is a continuation of work developed on PR apache#2425 (CLOUDSTACK-10240), which provided root admins an override mechanism to move volumes between storage systems types (local/shared) even when the disk offering would not allow such operation. To complete the work, we will now provide a way for administrators to enter a new disk offering that can reflect the new placement of the volume. We will add an extra parameter to allow the root admin inform a new disk offering for the volume. Therefore, when the volume is being migrated, it will be possible to replace the disk offering to reflect the new placement of the volume.

The API method will have the following parameters:

* storageid (required)
* volumeid (required)
* livemigrate(optional)
* newdiskofferingid (optional) – this is the new parameter

The expected behavior is the following:

* If “newdiskofferingid” is not provided the current behavior is maintained. Override mechanism will also keep working as we have seen so far.
* If the “newdiskofferingid” is provided by the admin, we will execute the following checks
** new disk offering mode (local/shared) must match the target storage mode. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** we will check if the new disk offering tags match the target storage tags. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** check if the target storage has the capacity for the new volume. If it does not have enough space, then an exception is thrown and the operator will receive a message indicating the problem.
** check if the size of the volume is the same as the size of the new disk offering. If it is not the same, we will ALLOW the change of the service offering, and a warning message will be logged.

We execute the change of the Disk offering as soon as the migration of the volume finishes. Therefore, if an error happens during the migration and the volume remains in the original storage system, the disk offering will keep reflecting this situation.

rafaelweingartner added a commit to rafaelweingartner/cloudstack that referenced this pull request Mar 13, 2018

[CLOUDSTACK-10323] Allow changing disk offering during volume migration
This is a continuation of work developed on PR apache#2425 (CLOUDSTACK-10240), which provided root admins an override mechanism to move volumes between storage systems types (local/shared) even when the disk offering would not allow such operation. To complete the work, we will now provide a way for administrators to enter a new disk offering that can reflect the new placement of the volume. We will add an extra parameter to allow the root admin inform a new disk offering for the volume. Therefore, when the volume is being migrated, it will be possible to replace the disk offering to reflect the new placement of the volume.

The API method will have the following parameters:

* storageid (required)
* volumeid (required)
* livemigrate(optional)
* newdiskofferingid (optional) – this is the new parameter

The expected behavior is the following:

* If “newdiskofferingid” is not provided the current behavior is maintained. Override mechanism will also keep working as we have seen so far.
* If the “newdiskofferingid” is provided by the admin, we will execute the following checks
** new disk offering mode (local/shared) must match the target storage mode. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** we will check if the new disk offering tags match the target storage tags. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** check if the target storage has the capacity for the new volume. If it does not have enough space, then an exception is thrown and the operator will receive a message indicating the problem.
** check if the size of the volume is the same as the size of the new disk offering. If it is not the same, we will ALLOW the change of the service offering, and a warning message will be logged.

We execute the change of the Disk offering as soon as the migration of the volume finishes. Therefore, if an error happens during the migration and the volume remains in the original storage system, the disk offering will keep reflecting this situation.

rafaelweingartner added a commit to rafaelweingartner/cloudstack that referenced this pull request Mar 13, 2018

[CLOUDSTACK-10323] Allow changing disk offering during volume migration
This is a continuation of work developed on PR apache#2425 (CLOUDSTACK-10240), which provided root admins an override mechanism to move volumes between storage systems types (local/shared) even when the disk offering would not allow such operation. To complete the work, we will now provide a way for administrators to enter a new disk offering that can reflect the new placement of the volume. We will add an extra parameter to allow the root admin inform a new disk offering for the volume. Therefore, when the volume is being migrated, it will be possible to replace the disk offering to reflect the new placement of the volume.

The API method will have the following parameters:

* storageid (required)
* volumeid (required)
* livemigrate(optional)
* newdiskofferingid (optional) – this is the new parameter

The expected behavior is the following:

* If “newdiskofferingid” is not provided the current behavior is maintained. Override mechanism will also keep working as we have seen so far.
* If the “newdiskofferingid” is provided by the admin, we will execute the following checks
** new disk offering mode (local/shared) must match the target storage mode. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** we will check if the new disk offering tags match the target storage tags. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** check if the target storage has the capacity for the new volume. If it does not have enough space, then an exception is thrown and the operator will receive a message indicating the problem.
** check if the size of the volume is the same as the size of the new disk offering. If it is not the same, we will ALLOW the change of the service offering, and a warning message will be logged.

We execute the change of the Disk offering as soon as the migration of the volume finishes. Therefore, if an error happens during the migration and the volume remains in the original storage system, the disk offering will keep reflecting this situation.

rafaelweingartner added a commit to rafaelweingartner/cloudstack that referenced this pull request Mar 14, 2018

[CLOUDSTACK-10323] Allow changing disk offering during volume migration
This is a continuation of work developed on PR apache#2425 (CLOUDSTACK-10240), which provided root admins an override mechanism to move volumes between storage systems types (local/shared) even when the disk offering would not allow such operation. To complete the work, we will now provide a way for administrators to enter a new disk offering that can reflect the new placement of the volume. We will add an extra parameter to allow the root admin inform a new disk offering for the volume. Therefore, when the volume is being migrated, it will be possible to replace the disk offering to reflect the new placement of the volume.

The API method will have the following parameters:

* storageid (required)
* volumeid (required)
* livemigrate(optional)
* newdiskofferingid (optional) – this is the new parameter

The expected behavior is the following:

* If “newdiskofferingid” is not provided the current behavior is maintained. Override mechanism will also keep working as we have seen so far.
* If the “newdiskofferingid” is provided by the admin, we will execute the following checks
** new disk offering mode (local/shared) must match the target storage mode. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** we will check if the new disk offering tags match the target storage tags. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** check if the target storage has the capacity for the new volume. If it does not have enough space, then an exception is thrown and the operator will receive a message indicating the problem.
** check if the size of the volume is the same as the size of the new disk offering. If it is not the same, we will ALLOW the change of the service offering, and a warning message will be logged.

We execute the change of the Disk offering as soon as the migration of the volume finishes. Therefore, if an error happens during the migration and the volume remains in the original storage system, the disk offering will keep reflecting this situation.

rafaelweingartner added a commit to rafaelweingartner/cloudstack that referenced this pull request Mar 16, 2018

[CLOUDSTACK-10323] Allow changing disk offering during volume migration

rafaelweingartner added a commit that referenced this pull request Apr 26, 2018

[CLOUDSTACK-10323] Allow changing disk offering during volume migration (#2486)

* [CLOUDSTACK-10323] Allow changing disk offering during volume migration

* Code formatting

* Adding a test to cover migration with new disk offering (#4)

* Adding a test to cover migration with new disk offering

* Update test_volumes.py

* Update test_volumes.py

* fix test_11_migrate_volume_and_change_offering

* Fix typo in Java doc