Lost node labels on upgrade #13
Actually, I seem to have lost all the storage. Stranger and stranger.
Was the storage that you lost local volumes, or volumes backed by cloudstor?
Local volumes. Couldn't get cloudstor working properly.
Local volumes do not survive upgrades, since completely fresh nodes on fresh VHDs with the latest Moby OS and Docker engine are brought up. What was the issue that you ran into with cloudstor?
Yikes, this deserves a big warning in the docs.
Honestly, I don't remember. I think at the time cloudstor was only available in the experimental channel, but when I installed it, I ran into other issues (not directly related to cloudstor). I may have created other related issues.
@rocketraman out of curiosity, can I ask what you're using the node labels for?
@friism I was using them in a poor man's attempt to get persistent storage working in the absence of support in Docker Swarm (via Cloudstor or another solution). For containers that required a persistent volume, I was attaching them to specific nodes via a node label. That way, if that container was stopped and recreated, the new container would be scheduled on the same node as the prior container, and thus have access to the same persistent volume. This approach worked well until I ran the upgrade.
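The pinning approach described above can be sketched with standard docker CLI commands; the node, label, and service names here are illustrative, not taken from the thread:

```shell
# On a manager node, label the worker that holds the persistent data.
docker node update --label-add storage=db-1 worker-node-1

# Constrain the service to that node, so a recreated container is
# always scheduled next to its local volume.
docker service create \
  --name db \
  --constraint 'node.labels.storage == db-1' \
  --mount type=volume,source=dbdata,target=/var/lib/data \
  myimage:latest
```

As the rest of the thread explains, this only works as long as both the label and the local volume survive, which is not the case across a Docker for Azure upgrade.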
@rocketraman thanks, it's good to know that labels and volumes were part of the same problem for you. That means that if cloudstor does the job, you're OK. I agree that it should be prominent in the docs that local Docker entities are not preserved during upgrades (nor are they preserved if a node goes down, for example). Only Docker Swarm entities (e.g. services) are preserved. I'll try to update the docs. We've also considered other options, such as blocking non-swarm API calls that are treacherous (for example, creating local volumes), but that would be somewhat limiting. I'd love to get more details on your use case and how you'd like for this to work.
I think I was simply mistaken about just how ephemeral nodes in Docker Swarm for Azure are supposed to be, and was thus approaching my problem the wrong way. Knowing what I know now, I wouldn't have bothered attempting any type of persistence without having Cloudstor working.

The other use for node labels is, of course, having worker nodes with different capabilities, e.g. high-memory vs. high-CPU, and then using constraints to schedule containers appropriately. I don't believe this is currently possible with Docker for Azure.

On a related note, I'd want to be able to manage different storage classes of Azure disk based on the type of data being stored, access patterns, and data replication requirements. So when a volume is created, I could decide which storage class applied and "schedule" it onto the appropriate Azure storage. I also don't believe Docker for Azure / Cloudstor is capable of doing this yet.
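The capability-based scheduling mentioned above is a standard Swarm pattern; a minimal sketch, with hypothetical label values and node names:

```shell
# Tag workers by hardware profile (label values are illustrative).
docker node update --label-add profile=highmem worker-1
docker node update --label-add profile=highcpu worker-2

# Schedule a memory-hungry service onto high-memory nodes only.
docker service create \
  --name cache \
  --constraint 'node.labels.profile == highmem' \
  redis:alpine
```

The limitation being discussed is not the Swarm mechanism itself but that on Docker for Azure, manually applied labels like these do not survive an upgrade.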
Cloudstor on Azure today indeed lacks an option to provision different classes of storage, mainly due to Azure platform limitations around exposing Premium Storage options over SMB. We have provided feedback to our Azure contacts to introduce premium storage options for VM Scale Sets, either over attachable/detachable regular disks or File Storage. So we will have this in Cloudstor in the future once we get the underlying platform support. Today, only Cloudstor on Docker for AWS supports a MaxIO option for higher throughput and IOPS.
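For reference, the AWS-only option mentioned above is exposed at volume-creation time; a sketch based on the Docker for AWS Cloudstor documentation (the volume name is illustrative):

```shell
# On Docker for AWS only: request the higher-throughput/IOPS mode
# when creating a Cloudstor-backed volume. No equivalent option
# exists for Cloudstor on Azure at the time of this thread.
docker volume create -d "cloudstor:aws" \
  --opt perfmode=maxio \
  mydata
```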
Closing this since the original issue is by design. |
When running the `upgrade.sh` script, all the node labels are lost and have to be manually re-applied.