Persistent Volume Claim migration strategy #312
Comments
Haven't tried it, but I'd be surprised if you couldn't create a storage account (as an example) yourself in another resource group, create the persistent volume, then create the persistent volume claim pointing to that volume and use that in the container. We did something similar when creating our azure-disk (we're going to change to azure-file) for storing persistent Grafana data we didn't wish to lose. We put the disk in the same resource group, but I may try putting it outside the resource group to avoid what you're describing above in terms of deleting the entire cluster.
@adam-resdiary Azure disk dynamic provisioning will only create the storage account inside the same resource group as the k8s cluster. In your case, if you already have a dynamically provisioned azure disk, you can copy the disk out of that resource group and use static provisioning. The pod template could be like this:
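A minimal sketch of a static-provisioning pod template; the disk name, diskURI, image and mount path below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grafana
spec:
  containers:
  - name: grafana
    image: grafana/grafana
    volumeMounts:
    - name: data
      mountPath: /var/lib/grafana
  volumes:
  - name: data
    # Static provisioning: reference an existing managed disk directly,
    # instead of letting a StorageClass create one dynamically.
    azureDisk:
      kind: Managed
      diskName: grafana-data
      diskURI: /subscriptions/<subscription-id>/resourceGroups/<data-rg>/providers/Microsoft.Compute/disks/grafana-data
      fsType: ext4
      readOnly: false
```

The same disk could equally be wrapped in a pre-created PersistentVolume and claimed through a PVC, which is the shape that comes up later in this thread.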
Thanks for the replies - it sounds like static provisioning might make a bit more sense for our situation. It means we can make sure the data isn't tied to the lifetime of the cluster, and it also becomes a bit simpler to migrate the data from an existing VM to the AKS cluster. The only thing blocking me now is that the cluster doesn't seem to have permission to access disks in other resource groups in our subscription. I'll keep looking - I'm guessing I just need to grant it permission somehow.
@omgsarge This might not be relevant in your situation, but I tried using an Azure Files volume to store the data for a TeamCity server. At first it all looked good, but after the server had been running for a short period of time, it became completely unable to get the latest changes from GitHub. I don't know for sure what the problem was, but I'm assuming some kind of performance problem with Azure Files. Switching to a managed premium disk seems to have sorted the issue, although I can't be 100% sure just yet. Something to watch out for, anyway.
@adam-resdiary Current azure file performance is not good. I would propose a new azure storage option: the blobfuse flexvolume driver, which also supports ReadWriteMany and has better performance than azure file. This feature is in preview now.
@adam-resdiary For the permission issue, I think you could create a service principal in advance, grant it full subscription permission, and then use it to create the AKS cluster. You can follow this guide: https://docs.microsoft.com/en-us/azure/aks/kubernetes-service-principal#use-an-existing-sp-1
@adam-resdiary - cool, thanks for the heads up. I was wondering about Azure Files performance - I was potentially going to use it as the persistent volume for a cross-region MQ setup, but I'll keep an eye on that and change accordingly.
@andyzhangx great - thanks for that. That sounds like exactly what I need. |
@andyzhangx I've run into a couple of issues while trying to use a separate resource group:
The first issue is more of an annoyance than anything else, but the second is a bit more of a problem. Here's a quick description of what I'm doing, in case I'm just doing something stupid.

Creating two resource groups - one for the cluster, one for the data:
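For illustration, with hypothetical group names and location, that step looks something like:

```sh
# One resource group for the cluster itself, one for long-lived data disks
az group create --name my-aks-rg --location westeurope
az group create --name my-aks-data-rg --location westeurope
```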
Creating the service principal:
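Something along these lines (the service principal name is a placeholder; the command prints the appId and password used in the next step):

```sh
# Create a service principal for the cluster to authenticate as
az ad sp create-for-rbac --name my-aks-sp
```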
Creating the AKS cluster:
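A sketch of that step, reusing the appId/password output by the previous command (cluster name and node count are placeholders):

```sh
az aks create \
  --resource-group my-aks-rg \
  --name my-aks-cluster \
  --node-count 3 \
  --service-principal <appId> \
  --client-secret <password>
```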
Creating a new disk based on an existing disk that I want to migrate:
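For example, something like the following, where `--source` can point at the existing VHD blob (or another managed disk/snapshot) being migrated - the names and URI are placeholders:

```sh
# Create a managed disk in the data resource group from the existing source
az disk create \
  --resource-group my-aks-data-rg \
  --name migrated-data \
  --source https://<storage-account>.blob.core.windows.net/vhds/<existing-disk>.vhd
```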
Granting the service principal access to the disk:
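Roughly along these lines, scoping the role assignment to just that disk (the assignee is the appId of the service principal created earlier):

```sh
# Look up the resource id of the disk
DISK_ID=$(az disk show --resource-group my-aks-data-rg --name migrated-data --query id -o tsv)

# Let the cluster's service principal attach/detach this disk
az role assignment create \
  --assignee <appId> \
  --role Contributor \
  --scope "$DISK_ID"
```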
At this point I then tried to create a new pod using the following definition:
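A sketch of such a definition, using a pre-created PersistentVolume bound to the disk and a pod that claims it; all names, sizes, images and mount paths are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: migrated-data-pv
spec:
  capacity:
    storage: 128Gi
  accessModes:
  - ReadWriteOnce
  # Keep the underlying disk when the PV object is deleted
  persistentVolumeReclaimPolicy: Retain
  azureDisk:
    kind: Managed
    diskName: migrated-data
    diskURI: /subscriptions/<subscription-id>/resourceGroups/my-aks-data-rg/providers/Microsoft.Compute/disks/migrated-data
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: migrated-data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  # Empty storageClassName plus volumeName binds to the static PV above
  storageClassName: ""
  volumeName: migrated-data-pv
  resources:
    requests:
      storage: 128Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: teamcity
spec:
  containers:
  - name: teamcity
    image: jetbrains/teamcity-server
    volumeMounts:
    - name: data
      mountPath: /data/teamcity_server/datadir
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: migrated-data-pvc
```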
If I then connect to the pod, the disk appears to be completely empty. I'd manually attached and mounted the file system on one of the AKS nodes before doing any of this to check that it definitely had the data on it, so I know that the disk wasn't empty before attaching it to a pod. Any idea what's going on here? I'm happy to open a separate issue as well if that makes sense - I just figure that maybe I'm doing something stupid.
Just to add to that, I've found something in the kubelet logs that maybe explains what's going on:
Any idea what would make it think the disk isn't formatted? I've been able to manually mount it no problem. |
@adam-resdiary what's your k8s version? I may try that later. |
1.8.7. I'll try upgrading the cluster to a newer version today to see whether it makes any difference. |
@adam-resdiary I have tried mounting an existing azure disk by reusing the same PVC, and the disk won't be formatted on the second mount. I think the issue exists in this step:
In k8s, it will use lsblk to check whether the disk already has a filesystem on it, and if none is detected it will format the disk before mounting it.
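For illustration only, this is roughly what that check sees in the two cases (device names are made up):

```sh
# Disk formatted directly on the raw device: a filesystem is reported,
# so the disk is mounted as-is.
lsblk -n -o NAME,FSTYPE /dev/sdc
# sdc  ext4

# Disk whose data lives inside a partition: the parent device itself
# reports no filesystem (only sdc1 does), so the volume plugin assumes
# the disk is unformatted and runs mkfs on it.
lsblk -n -o NAME,FSTYPE /dev/sdc
# sdc
# └─sdc1  ext4
```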
@adam-resdiary I just followed your way to duplicate the disk, and it works well - k8s won't format the duplicated disk. Here is my way:
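A sketch of that kind of duplication from a managed disk, with placeholder names:

```sh
# Look up the id of the source managed disk (e.g. one that was
# dynamically provisioned inside the MC_ resource group)
SOURCE_ID=$(az disk show --resource-group <MC_resource_group> --name <source-disk> --query id -o tsv)

# Duplicate it into the long-lived data resource group
az disk create \
  --resource-group my-aks-data-rg \
  --name duplicated-data \
  --source "$SOURCE_ID"
```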
The only difference is I used a managed disk to duplicate a disk. |
@andyzhangx I've recreated the disk and attached it to a VM, then run the lsblk command. Here's what I'm getting:
I'll do some more digging when I get a chance - my knowledge of linux filesystems isn't good enough to know what I'm seeing here, but I'm kind of assuming based on your comment that for this to work the filesystem needs to be on the disk itself rather than inside a separate partition.
@adam-resdiary Right - your disk has its filesystem inside a separate partition rather than directly on the disk device, so k8s does not detect a filesystem on the disk itself.
@adam-resdiary And the azure provider could not recognize this situation - it formats the disk directly, which is wrong. I am fixing this issue now: return an error instead of formatting the disk in this condition.
Ok - thanks a lot for explaining that. I'll try to figure out if I can convert the disk somehow - I don't actually need the data to be in a separate partition, it's just like that because of how the existing VM I'm trying to migrate is set up.
@adam-resdiary An existing azure disk PVC won't have this issue, since it won't have a separate partition. I still think it's a bug, though: the common k8s behavior for this condition is to return an error, while the azure provider formats the disk, which is totally wrong behavior.
Yeah - that's fine. What I'm getting at is that I've got data on that disk that I'd like to migrate, and ideally I'd just be able to create the disk from the VHD, then run some kind of command to remove the separate partition so that the filesystem sits directly on the disk, the way Kubernetes expects. I'll try to figure it out - I'm just not 100% sure what to search for yet. Worst case, I can just use rsync to copy the data, but I was trying to avoid that for speed.
Just in case this helps anyone else, I was able to come up with a solution for migrating the data. I'm not really sure if it's any better than just using rsync, but it worked anyhow:
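One way a migration like that can be done - not necessarily the exact steps used here - is to attach both disks to a scratch VM and copy the data onto a filesystem created on the raw device of the new disk. All device names and paths below are hypothetical:

```sh
# /dev/sdc1 - partition on the old disk holding the data
# /dev/sdd  - new empty managed disk that Kubernetes will use

# Put the filesystem directly on the raw device, which is what the
# azure disk volume plugin expects to find
sudo mkfs.ext4 /dev/sdd

# Mount both disks and copy the data across, preserving permissions
sudo mkdir -p /mnt/old /mnt/new
sudo mount /dev/sdc1 /mnt/old
sudo mount /dev/sdd /mnt/new
sudo cp -a /mnt/old/. /mnt/new/

sudo umount /mnt/old /mnt/new
# Detach the new disk from the VM and reference it from a static
# PersistentVolume / azureDisk volume as shown earlier in the thread.
```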
Feel free to close this issue now - I've figured out everything I need to. And thanks again for the pointers.
Closing as stale / how-to |
Apologies in advance if this isn't the right place to put this question, but I'm just looking for some advice before I go too far with AKS.
When you automatically provision storage using a PVC, the storage gets created in a resource group created automatically by AKS (`MC_xyz...`). What I'm wondering is what the strategy would be if I needed to remove an existing AKS cluster and create a new one, but still wanted access to the same data. Deleting the container service will cause the `MC_` resource group to be removed, along with the data, so is there a way to migrate this to another cluster, or should I just avoid using PVCs for any data that might need to live longer than the cluster itself?

This is possibly more of an issue just now because of the lack of multiple node pools (i.e. I might get into a situation where the node size I've chosen isn't suitable anymore, and I currently need to create a new cluster in order to switch to a different size of nodes).