
Understanding managed disks template in Kubernetes #523

Closed
ams0 opened this Issue Apr 23, 2017 · 3 comments

Comments

@ams0
Member

ams0 commented Apr 23, 2017

I deployed a cluster using this template, now that #144 has been merged. After a successful deployment, I have:

  • a master with its OS disk in a storage account (unmanaged) and a 128GB managed data disk; however, the latter, reported as /dev/sdc, is neither formatted nor mounted
  • the agents each have a single managed 30GB OS disk

I fail to see the purpose of this setup; etcd is still running on the master with --data-dir /var/lib/etcddisk and the agents have no extra disks to present to Kubernetes (specifically, my goal is to automate GlusterFS setup in ACS).

thanks

@JackQuincy
Member

JackQuincy commented Apr 27, 2017
is not formatted and not mounted

This is a bug. That disk is supposed to be mounted and have etcd on it.
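Until the fix lands, the disk can be formatted and mounted by hand. A minimal sketch, using a file-backed image as a stand-in so the commands are safe to try anywhere; the device name /dev/sdc and the mount point /var/lib/etcddisk are taken from the report above:

```shell
# Hypothetical illustration: format a file-backed image instead of the real
# /dev/sdc, so these commands can be run without root or a spare disk.
DISK_IMG=etcd-disk.img
truncate -s 64M "$DISK_IMG"        # stand-in for the 128GB data disk
mkfs.ext4 -q -F "$DISK_IMG"        # on the master VM: sudo mkfs.ext4 /dev/sdc
dumpe2fs -h "$DISK_IMG" 2>/dev/null | head -n 3   # confirm a filesystem now exists

# On the actual VM you would then mount it where etcd keeps its data,
# and persist the mount across reboots:
#   sudo mount /dev/sdc /var/lib/etcddisk
#   echo '/dev/sdc /var/lib/etcddisk ext4 defaults 0 2' | sudo tee -a /etc/fstab
```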

The managed disk support is not for adding extra disks to hand to containers. For that, in Kubernetes you should use a PVC backed by Azure disks or Azure files (Kubernetes will dynamically provision them for you; there are issues in this repo about this that you can look at to find a good setup). The point of managed disks in the api model is to get your disks spread out across fault domains instead of being concentrated in storage accounts.

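For container storage, the dynamic-provisioning route looks roughly like the manifest below. The StorageClass and claim names are illustrative, and on 1.6-era clusters the kubernetes.io/azure-disk provisioner creates unmanaged (blob-backed) disks:

```yaml
# Illustrative StorageClass + PVC for dynamic Azure disk provisioning.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-standard        # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-claim            # illustrative name
spec:
  storageClassName: azure-standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Apply it with `kubectl create -f`, then reference the claim as a volume in the pod spec.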

@khenidak
Contributor

khenidak commented May 11, 2017
There are two things to consider:

  1. A cluster running on managed disks (OS and/or data disks).
  2. PVCs backed by managed disks (which requires the VM's OS disk, and any pre-attached data disks, to be managed as well).

Currently acs-engine supports both managed and unmanaged disks, but K8S supports only unmanaged PVCs (support for managed PVCs will come in 1.7). If you are planning to run something such as GlusterFS, you are probably looking for pre-attached data disks (managed or unmanaged). The trick is that if you created managed VMs, you will only be able to attach managed disks (and accordingly use managed PVCs).

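For the pre-attached data disk route, a hedged sketch of an api model fragment (the pool name, count, VM size, and disk sizes are illustrative): `diskSizesGB` attaches a data disk of each listed size to every agent in the pool, and `storageProfile` selects managed versus unmanaged disks.

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_DS2_v2",
        "storageProfile": "ManagedDisks",
        "diskSizesGB": [128, 128]
      }
    ]
  }
}
```

With a managed `storageProfile`, the attached disks (and any dynamically provisioned PVs, once supported) must be managed as well, per the point above.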

@seanknox
Member

seanknox commented May 26, 2017
Closing as I believe this has been answered; feel free to re-open if needed.


seanknox closed this May 26, 2017
