
Migration of volume replicas from one pool to another pool #100

Open
mittachaitu opened this issue Mar 6, 2020 · 2 comments
Labels
enhancement New feature or request

Comments

@mittachaitu

Description:
I have a CStorPoolCluster (CSPC) resource created on top of 3 nodes (which in turn creates the CSPI resources), and I deployed CSI volumes on top of the above CSPC.
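For context, a minimal CSPC of the kind described above might look like the following. This is only a sketch, assuming the cstor.openebs.io/v1 schema, with placeholder hostname and block device names rather than the exact manifest used here:

apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-sparse-cspc
  namespace: openebs
spec:
  pools:
  # one pool spec entry per node; three entries in total for this setup
  - nodeSelector:
      kubernetes.io/hostname: "node-1"            # placeholder hostname
    dataRaidGroups:
    - blockDevices:
      - blockDeviceName: "blockdevice-on-node-1"  # placeholder block device
    poolConfig:
      dataRaidGroupType: "stripe"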

Now my setup looks like:

kubectl get nodes
NAME                                        STATUS   ROLES    AGE    VERSION
gke-sai-test-cluster-pool-1-8d7defe8-37rr   Ready    <none>   102m   v1.14.8-gke.33
gke-sai-test-cluster-pool-1-8d7defe8-8471   Ready    <none>   102m   v1.14.8-gke.33
gke-sai-test-cluster-pool-1-8d7defe8-8nt0   Ready    <none>   102m   v1.14.8-gke.33
gke-sai-test-cluster-pool-1-8d7defe8-chws   Ready    <none>   102m   v1.14.8-gke.33
kubectl get cspi -n openebs
NAME                     HOSTNAME                                    ALLOCATED   FREE    CAPACITY   STATUS   AGE
cstor-sparse-cspc-kdrs   gke-sai-test-cluster-pool-1-8d7defe8-8471   154K        9.94G   9.94G      ONLINE   13m
cstor-sparse-cspc-nb99   gke-sai-test-cluster-pool-1-8d7defe8-8nt0   158K        9.94G   9.94G      ONLINE   13m
cstor-sparse-cspc-twjx   gke-sai-test-cluster-pool-1-8d7defe8-chws   312K        9.94G   9.94G      ONLINE   13m
kubectl get cvr -n openebs
NAME                                                              USED   ALLOCATED   STATUS    AGE
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-kdrs   6K     6K          Healthy   105s
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-nb99   6K     6K          Healthy   105s
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-twjx   6K     6K          Healthy   105s

Now I performed a horizontal scale-up of the CSPC, which created a CSPI on a new node:

kubectl get cspi -n openebs
NAME                     HOSTNAME                                    ALLOCATED   FREE    CAPACITY   STATUS   AGE
cstor-sparse-cspc-kdrs   gke-sai-test-cluster-pool-1-8d7defe8-8471   161K        9.94G   9.94G      ONLINE   15m
cstor-sparse-cspc-kmt7   gke-sai-test-cluster-pool-1-8d7defe8-37rr   50K         9.94G   9.94G      ONLINE   42s
cstor-sparse-cspc-nb99   gke-sai-test-cluster-pool-1-8d7defe8-8nt0   161K        9.94G   9.94G      ONLINE   15m
cstor-sparse-cspc-twjx   gke-sai-test-cluster-pool-1-8d7defe8-chws   161K        9.94G   9.94G      ONLINE   15m
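
For reference, the horizontal scale-up amounts to editing the CSPC and appending one more pool spec entry for the new node. A sketch, again with a placeholder block device name:

kubectl edit cspc cstor-sparse-cspc -n openebs
# ...then append under spec.pools:
  - nodeSelector:
      kubernetes.io/hostname: "gke-sai-test-cluster-pool-1-8d7defe8-37rr"
    dataRaidGroups:
    - blockDevices:
      - blockDeviceName: "blockdevice-on-new-node"  # placeholder block device
    poolConfig:
      dataRaidGroupType: "stripe"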

Scenario:
I want to remove the node gke-sai-test-cluster-pool-1-8d7defe8-chws from my cluster. I plan to horizontally scale down the pool (i.e. remove that node's pool spec from the CSPC), but before scaling down the pool on that node I want to move the volume replicas on that pool to a different pool (the newly created one, i.e. cstor-sparse-cspc-kmt7). How can I achieve that without many manual steps?

I want the volume replicas on the pools below:

kubectl get cvr -n openebs
NAME                                                              USED   ALLOCATED   STATUS    AGE
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-kdrs   6K     6K          Healthy   105s
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-nb99   6K     6K          Healthy   105s
pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-kmt7   6K     6K          Healthy   105s

In the above, the volume replica was migrated from pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-twjx to pvc-3f83cac1-5f80-11ea-85dd-42010a800121-cstor-sparse-cspc-kmt7.
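
For reference, the replicas sitting on the pool that is about to be retired can be listed with a label selector. This assumes the CVRs carry a cstorpoolinstance.openebs.io/name label; treat the label key as an assumption:

# list the CVRs hosted on the pool being removed
kubectl get cvr -n openebs -l cstorpoolinstance.openebs.io/name=cstor-sparse-cspc-twjx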

@AmitKumarDas

@mittachaitu can you provide the exact steps as well? In addition, can you please use some readable / dummy names in your steps? It becomes difficult to follow when we mention the actual volume names with UIDs and so on.

@AmitKumarDas AmitKumarDas added the enhancement New feature or request label Mar 17, 2020

mittachaitu commented Mar 17, 2020

I have the following pool configuration on a 4-node cluster:

kubectl get cspi -n openebs
NAME    HOSTNAME   ALLOCATED   FREE    CAPACITY   STATUS   AGE
pool1   node-1     161K        9.94G   9.94G      ONLINE   15m
pool2   node-2     50K         9.94G   9.94G      ONLINE   42s
pool3   node-3     161K        9.94G   9.94G      ONLINE   15m
pool4   node-4     161K        9.94G   9.94G      ONLINE   15m

I created a volume with three replicas on top of the above pools:

kubectl get cvr -n openebs
NAME         USED   ALLOCATED   STATUS    AGE
vol1-pool1   6K     6K          Healthy   105s
vol1-pool2   6K     6K          Healthy   105s
vol1-pool3   6K     6K          Healthy   105s

Now I am scaling down my cluster from 4 nodes to 3. To achieve that I am bringing down node-3, so the data on pool3 should migrate to pool4 (node-4) before node-3 is scaled down.
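
One possible manual sequence is sketched below. It assumes the volume's CStorVolumeConfig (CVC) exposes the replica pool list under spec.policy.replicaPoolInfo, as later cstor-operators releases do; the exact field names should be treated as assumptions:

kubectl edit cvc vol1 -n openebs
# ...under spec.policy:
  replicaPoolInfo:
  - poolName: pool1
  - poolName: pool2
  - poolName: pool3
  - poolName: pool4   # step 1: add pool4; a new CVR (vol1-pool4) is created and rebuilt
# step 2: once vol1-pool4 is Healthy, remove the pool3 entry so that vol1-pool3
# is scaled down, then remove node-3's pool spec from the CSPC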

OpenEBS supports achieving this via manual steps; see the associated PR.
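
Watching the hand-off with the dummy names above might look like this (illustrative):

# wait until the new replica (vol1-pool4) reports Healthy before removing
# vol1-pool3 and, finally, the pool3 entry from the CSPC
kubectl get cvr -n openebs -w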
