
Persistent Volume Claim migration strategy #312

Closed
adam-resdiary opened this issue Apr 19, 2018 · 22 comments

Comments

@adam-resdiary

adam-resdiary commented Apr 19, 2018

Apologies in advance if this isn't the right place to put this question, but I'm just looking for some advice before I go too far with AKS.

When you automatically provision storage using a PVC, the storage gets created in a resource group that AKS creates automatically (MC_xyz...). What I'm wondering is what the strategy would be if I needed to remove an existing AKS cluster and create a new one, but still wanted access to the same data.

Deleting the container service will cause the MC_ resource group to be removed, along with the data, so is there a way to migrate this to another cluster, or should I just avoid using PVCs for any data that might need to live longer than the cluster itself?

This is possibly more of an issue just now because of the lack of multiple node pools (i.e. I might get into a situation where the node size I've chosen isn't suitable anymore, and I currently need to create a new cluster in order to switch to a different size of nodes).

@c-mccutcheon

Haven't tried, but I'd be surprised if you couldn't create a persistent storage account (as an example) yourself in another resource group, create the persistent volume, then create the persistent volume claim pointing to that volume and use that in the container.

We did something similar when creating our azure-disk (we're going to switch to azure-file) for storing persistent Grafana data we didn't want to lose. We put the disk in the same resource group, but I may try putting it outside the cluster's resource group to avoid what you're describing above when deleting the entire cluster.

@andyzhangx
Contributor

@adam-resdiary Azure disk dynamic provisioning will only create storage inside the same resource group as the k8s cluster. In your case, if you already have a dynamically provisioned azure disk, you could copy the disk out of that resource group and use static provisioning; the pod template could look like this:
https://github.com/andyzhangx/demo/blob/master/linux/azuredisk/nginx-pod-azuredisk-static-mgrdisk.yaml
Or you could use static provisioning from the beginning and put all the disks in a resource group that you will never delete. Step-by-step static provisioning examples can be found here.
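As a rough illustration of what static provisioning looks like (a minimal sketch; the names, size, and the subscription/resource group in the diskURI are placeholders for a pre-created managed disk), you declare a PV for the existing disk and a PVC that binds to it:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azuredisk-static
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the disk even after the PV/PVC is deleted
  azureDisk:
    kind: Managed
    diskName: mydata                      # hypothetical disk in a long-lived resource group
    diskURI: /subscriptions/<sub-id>/resourceGroups/<data-group>/providers/Microsoft.Compute/disks/mydata
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk-static
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""                    # empty class so the default provisioner doesn't kick in
  volumeName: pv-azuredisk-static         # bind to the pre-created PV above
  resources:
    requests:
      storage: 100Gi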

@adam-resdiary
Author

Thanks for the replies - it sounds like static provisioning might make a bit more sense for our situation. It means we can make sure the data isn't tied to the lifetime of the cluster. It also then becomes a bit simpler to migrate the data from an existing VM to the AKS cluster.

The only thing that's blocking me now is that the cluster doesn't seem to have permission to access disks in other resource groups in our subscription. I'll keep looking - I'm guessing I just need to grant it permission somehow.

@adam-resdiary
Author

@omgsarge This might not be relevant in your situation, but I tried using an Azure Files volume to store the data for a TeamCity server. At first it all looked good, but after the server had been running for a short period of time, it became completely unable to get the latest changes from GitHub. I don't know for sure what the problem was, but I'm assuming some kind of performance problem with Azure Files. Switching to a managed premium disk seems to have sorted the issue, although I can't be 100% sure just yet.

Something to watch out for anyway.

@andyzhangx
Contributor

@adam-resdiary Current azure file perf is not good. I would propose a new azure storage plugin: the blobfuse flexvolume driver, which also supports ReadWriteMany and has better perf than azure file. This feature is in preview now.

@andyzhangx
Contributor

andyzhangx commented Apr 20, 2018

@adam-resdiary For the permission issue, I think you could create a service principal in advance, grant it full permission on the subscription, and then use it to create the AKS cluster. You can follow: https://docs.microsoft.com/en-us/azure/aks/kubernetes-service-principal#use-an-existing-sp-1
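Roughly (a sketch only, reusing the names from this thread; the role and scope are placeholders, so tighten them to whatever your disks actually need):

# Pre-create the service principal, grant it access where the data disks live,
# then hand it to AKS at cluster creation time.
az ad sp create-for-rbac --name cluster-sp --skip-assignment

# Grant the SP access to the data resource group (or the whole subscription, if your policy allows it).
az role assignment create \
  --assignee <appId> \
  --role Contributor \
  --scope /subscriptions/<sub-id>/resourceGroups/cluster-group-data

# Create the cluster with the existing SP so the nodes can attach disks from that group.
az aks create \
  --resource-group cluster-group \
  --name cluster-name \
  --service-principal "<appId>" \
  --client-secret "<password>"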

@c-mccutcheon

@adam-resdiary - cool, thanks for the heads up. I was wondering about Azure Files performance; I was potentially going to use it as the persistent volume for a cross-region MQ setup, but I'll keep an eye on that and change accordingly.

@adam-resdiary
Author

@andyzhangx great - thanks for that. That sounds like exactly what I need.

@adam-resdiary
Author

@andyzhangx I've run into a couple of issues while trying to use a separate resource group:

  • I had to grant my service principal permissions on the specific disk I wanted to use, rather than at the resource group / subscription level. Otherwise I just got permission errors in the logs.
  • More worryingly, when attaching an existing disk to a pod, the disk seems to be getting formatted.

The first issue is more of an annoyance than anything else, but the second is a bit more of a problem. Here's a quick description of what I'm doing, in case I'm just doing something stupid:

Creating two resource groups - one for the cluster, one for the data:

az group create --name cluster-group
az group create --name cluster-group-data

Creating the service principal:

az ad sp create-for-rbac --name cluster-sp --skip-assignment

Creating the AKS cluster:

az aks create --resource-group cluster-group --name cluster-name --node-vm-size Standard_D2s_v3 --node-count 1 --ssh-key-value ~\.ssh\key.pub --kubernetes-version 1.8.7 --service-principal "<appId>" --client-secret "<password>"

Creating a new disk based on an existing disk that I want to migrate:

az disk create --name data --resource-group cluster-group-data --size-gb 100 --location westeurope --source "https://xyz.blob.core.windows.net/vhds/abcd123.vhd"

Granting the service principal access to the disk:

az role assignment create --assignee <id> --role Contributor --scope <diskId>

At this point I then tried to create a new pod using the following definition:

kind: Pod
apiVersion: v1
metadata:
  name: data-mig
  labels:
    app: data-mig
spec:
  volumes:
    - name: data
      azureDisk:
        kind: Managed
        cachingMode: ReadWrite
        diskName: data
        diskURI: /subscriptions/<sub-id>/resourceGroups/cluster-group-data/providers/Microsoft.Compute/disks/data
  containers:
    - name: data-mig
      image: ubuntu
      volumeMounts:
      - name: data
        mountPath: /data
      command: ["/bin/bash", "-ecx", "while :; do printf '.'; sleep 5 ; done"]

If I then connect to the pod using kubectl exec and take a look at the folder, the only thing in it is lost+found.

I've manually attached and mounted the file system on one of the AKS nodes before doing any of this to check that it definitely had the data on it, so I know that the disk wasn't empty before attaching it to a pod.

Any idea what's going on here? I'm happy to open a separate issue as well if that makes sense, I just figure that maybe I'm doing something stupid.

@adam-resdiary
Author

Just to add to that, I've found something in the kubelet logs that maybe explains what's going on:

{"log":"I0424 16:04:50.869263    7374 azure_common_linux.go:194] azureDisk - Disk \"/dev/disk/azure/scsi1/lun0\" appears to be unformatted, attempting to format as type: \"ext4\" with options: [-E lazy_itable_init=0,lazy_journal_init=0 -F /dev/disk/azure/scsi1/lun0]\n","stream":"stderr","time":"2018-04-24T16:04:50.869561569Z"}

Any idea what would make it think the disk isn't formatted? I've been able to manually mount it no problem.

@andyzhangx
Contributor

@adam-resdiary what's your k8s version? I may try that later.

@adamconnelly

1.8.7. I'll try upgrading the cluster to a newer version today to see whether it makes any difference.

@andyzhangx
Contributor

@adam-resdiary I have tried mounting an existing azure disk by reusing the same PVC, and the disk won't be formatted on the second mount. I think the issue is in this step:

az disk create --name data --resource-group cluster-group-data --size-gb 100 --location westeurope --source "https://xyz.blob.core.windows.net/vhds/abcd123.vhd"

In k8s, lsblk -nd -o FSTYPE /dev/disk/azure/scsi1/lun0 is used to check whether the disk is formatted or not, and after the above disk create command, it regards that disk as unformatted. I will dig into this later.

@andyzhangx
Contributor

andyzhangx commented Apr 26, 2018

@adam-resdiary I just followed your way to duplicate the disk, and it works well; k8s won't format the duplicated disk. Here is my way:

  • duplicate disk
az disk create --name dupdisk --resource-group andy-mg110 --size-gb 5 --location westus2 --source /subscriptions/4be8920b-2978-43d7-ab14-xxx/resourceGroups/andy-mg110/providers/Microsoft.Compute/disks/andy-mg110-dynamic-pvc-d1d7d240-4948-11e8-b535-000d3af9f967
  • create pod using duplicated disk
apiVersion: v1
kind: Pod
metadata:
 name: nginx-azuredisk
spec:
 containers:
  - image: nginx
    name: nginx-azuredisk
    volumeMounts:
      - name: azure
        mountPath: /mnt/disk
 volumes:
      - name: azure
        azureDisk:
          kind: Managed
          diskName: dupdisk
          diskURI: /subscriptions/4be8920b-2978-43d7-ab14-xxx/resourceGroups/andy-mg110/providers/Microsoft.Compute/disks/dupdisk

The only difference is that I used a managed disk as the source when duplicating the disk.
Could you make sure https://xyz.blob.core.windows.net/vhds/abcd123.vhd is already in use and formatted?

@adam-resdiary
Author

@andyzhangx I've recreated the disk and attached it to a VM, then run the lsblk command. Here's what I'm getting:

azureuser@aks-nodepool1-28371372-0:/$ ls -l /dev/disk/azure/scsi1/
total 0
lrwxrwxrwx 1 root root 12 Apr 27 08:04 lun0 -> ../../../sdc
lrwxrwxrwx 1 root root 13 Apr 27 08:04 lun0-part1 -> ../../../sdc1
azureuser@aks-nodepool1-28371372-0:/$ lsblk -d -no FSTYPE /dev/disk/azure/scsi1/lun0

azureuser@aks-nodepool1-28371372-0:/$ lsblk -d -no FSTYPE /dev/disk/azure/scsi1/lun0-part1
ext4

I'll do some more digging when I get a chance - my knowledge of linux filesystems isn't good enough to know what I'm seeing here, but I'm kind of assuming based on your comment that for this to work lun0 needs to contain the filesystem.

@andyzhangx
Contributor

@adam-resdiary
In your case, you are mounting a disk with already-formatted partitions. k8s does not support this right now; the correct behavior is to return an error. Details can be found in kubernetes/kubernetes#63235
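To illustrate what the check sees (illustrative only; the lun paths are the same ones from your output above):

# On a disk formatted on the whole device, the device itself reports a filesystem
# type, so it gets mounted as-is:
lsblk -nd -o FSTYPE /dev/disk/azure/scsi1/lun0
# On a partitioned disk, that same command prints nothing (only the partition has a
# FSTYPE), so the azure provider concludes the disk is unformatted:
lsblk -nd -o FSTYPE /dev/disk/azure/scsi1/lun0-part1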

@andyzhangx
Contributor

@adam-resdiary And the azure provider does not recognize this condition; it formats the disk directly, which is wrong. I am fixing this issue now: return an error instead of formatting the disk in this condition.

@adam-resdiary
Author

Ok - thanks a lot for explaining that. I'll try to figure out if I can convert the disk somehow - I don't actually need the data to be in a separate partition, it's just like that because of how the existing VM I'm trying to migrate is set up.

@andyzhangx
Contributor

@adam-resdiary An existing azure disk PVC won't have this issue, since it won't have a separate partition. I still think it's a bug though: the common k8s behavior is to return an error in this condition, while the azure provider will format the disk, which is totally wrong behavior.

@adam-resdiary
Author

Yeah - that's fine. What I'm getting at is that I've got data on that disk that I'd like to migrate, and ideally I would just be able to create the disk based on the VHD, then run some kind of command to remove the separate partition so that the filesystem sits directly on the disk like Kubernetes expects. I'll try to figure it out - I'm just not 100% sure what to search for yet.

Worst case scenario I can just use rsync to copy the data - but I was trying to avoid that for speed.
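(A rough sketch of that fallback, with hypothetical mount points: attach both disks to a VM, mount them, then copy.)

# Fallback: format the new disk on the whole device, mount both, and copy the data across.
sudo mkdir -p /mnt/olddata /mnt/newdata
sudo mount /dev/disk/azure/scsi1/lun1-part1 /mnt/olddata   # partition on the cloned disk
sudo mkfs.ext4 /dev/disk/azure/scsi1/lun0                  # new disk, formatted without a partition table
sudo mount /dev/disk/azure/scsi1/lun0 /mnt/newdata
sudo rsync -aHAX /mnt/olddata/ /mnt/newdata/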

@adam-resdiary
Author

Just in case this helps anyone else, I was able to come up with a solution for migrating the data. I'm not really sure if it's any better than just using rsync, but it worked anyhow:

  • Create a new Azure disk based on the disk you want to clone.
  • Create a new empty Azure disk that you will use for your new volume.
  • Create a Linux VM or use an existing VM, and attach both disks to it.
  • Use the following command to copy the data from the partition on your existing disk onto the new disk:
sudo dd if=/dev/disk/azure/scsi1/lun1-part1 of=/dev/disk/azure/scsi1/lun0 bs=64M status=progress

Where lun1-part1 is the first partition on your cloned disk, and lun0 is your new, empty disk. After this you end up with an ext4 partition on lun0 that can be mounted correctly by Kubernetes.
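One extra thing worth noting (a general ext4 point, not something specific to this setup): dd copies the filesystem at its original size, so if the new disk is larger than the source partition you'll want to grow the filesystem afterwards:

# The copied filesystem keeps the source partition's size; grow it to fill the new disk.
sudo e2fsck -f /dev/disk/azure/scsi1/lun0
sudo resize2fs /dev/disk/azure/scsi1/lun0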

Feel free to close this issue now - I've figured out everything I need to. And thanks again for the pointers.

@jnoller
Contributor

jnoller commented Apr 3, 2019

Closing as stale / how-to

jnoller closed this as completed Apr 3, 2019
ghost locked this as resolved and limited conversation to collaborators Aug 8, 2020