implement glusterfs volume plugin #6174
@@ -0,0 +1,47 @@
## Glusterfs
[Glusterfs](http://www.gluster.org) is an open source scale-out filesystem. These examples provide information about how to allow containers to use Glusterfs volumes.

The example assumes that the Glusterfs client package is installed on all nodes.
### Prerequisites

Install the Glusterfs client package on the Kubernetes hosts.
### Create a POD

The following *volume* spec, embedded in the POD definition, illustrates a sample configuration.
```json
{
    "name": "glusterfsvol",
    "glusterfs": {
        "endpoints": "glusterfs-cluster",
        "path": "kube_vol",
        "readOnly": true
    }
}
```
The parameters are explained as follows.
- **endpoints** is the name of the Endpoints object that represents a Gluster cluster configuration. *kubelet* is optimized to avoid mount storms: it randomly picks one host from the endpoints to mount. If that host is unresponsive, the next Gluster host in the endpoints is automatically selected.
- **path** is the Glusterfs volume name.
- **readOnly** is a boolean that sets the mountpoint to read-only or read-write.
Detailed POD and Gluster cluster endpoints examples can be found at [v1beta3/](v1beta3/) and [endpoints/](endpoints/).
```shell
# create gluster cluster endpoints
$ kubectl create -f examples/glusterfs/endpoints/glusterfs-endpoints.json
# create a container using gluster volume
$ kubectl create -f examples/glusterfs/v1beta3/glusterfs.json
```
Once those are up, you can list the pods and endpoints in the cluster to verify that the pod is running:
```shell
$ kubectl get endpoints
$ kubectl get pods
```

If you ssh to the host where the pod is scheduled, you can run `docker ps` to see the pod's containers and `mount` to see whether the Glusterfs volume is mounted.
@@ -0,0 +1,13 @@
{
  "apiVersion": "v1beta1",
  "id": "glusterfs-cluster",
  "kind": "Endpoints",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "Endpoints": [
    "10.16.154.81:0",
    "10.16.154.82:0",
    "10.16.154.83:0"
  ]
}
@@ -0,0 +1,32 @@
{
  "apiVersion": "v1beta3",
  "id": "glusterfs",
  "kind": "Pod",
  "metadata": {
    "name": "glusterfs"
  },
  "spec": {
    "containers": [
      {
        "name": "glusterfs",
        "image": "kubernetes/pause",
        "volumeMounts": [
          {
            "mountPath": "/mnt/glusterfs",
            "name": "glusterfsvol"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "glusterfsvol",
        "glusterfs": {
          "endpoints": "glusterfs-cluster",
          "path": "kube_vol",
          "readOnly": true
        }
      }
    ]
  }
}
@@ -198,6 +198,8 @@ type VolumeSource struct {
	// ISCSIVolumeSource represents an ISCSI Disk resource that is attached to a
	// kubelet's host machine and then exposed to the pod.
	ISCSI *ISCSIVolumeSource `json:"iscsi"`
	// Glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime
	Glusterfs *GlusterfsVolumeSource `json:"glusterfs"`
You'll also want to add this to PersistentVolumeSource to make it a provisionable resource. If Gluster is not to be exposed to the end user (only the admin provisions it, users claim it), then after the PV framework is fully merged you can remove GFS from VolumeSource and leave it in PVS. This hides it completely from the pod author.
}

// Similar to VolumeSource but meant for the administrator who creates PVs.
@@ -210,6 +212,8 @@ type PersistentVolumeSource struct {
	// This is useful for development and testing only.
	// on-host storage is not supported in any way
	HostPath *HostPathVolumeSource `json:"hostPath"`
	// Glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod
	Glusterfs *GlusterfsVolumeSource `json:"glusterfs"`
}

type PersistentVolume struct {

@@ -421,6 +425,19 @@ type NFSVolumeSource struct {
	ReadOnly bool `json:"readOnly,omitempty"`
}
// GlusterfsVolumeSource represents a Glusterfs Mount that lasts the lifetime of a pod
type GlusterfsVolumeSource struct {
	// Required: EndpointsName is the endpoint name that details Glusterfs topology
	EndpointsName string `json:"endpoints"`
Is the assumption that endpoints for gluster lie outside the kubernetes cluster? I am a bit anxious about direct creation of endpoints without a service (now that we have headless services) since it lays a trap for a later collision that won't be detected.

the gluster cluster lies outside the kube cluster. what's the collision case?

It's not unreasonable to force someone to create an external headless service if they want this behavior.

The collision case is that you create endpoints called "foo" then I create

got it, thanks. would a special namespace for storage help?

Perhaps the real question is whether you expect an external-to-kubernetes gluster cluster to be namespace-scoped or not? A) The set of gluster endpoints is namespaced - use a headless service

Only thing I can think of for C is "a DNS address" or "a list of ips".

We do not currently have any concept of non-namespaced endpoints or services. Are we going to accumulate these things in a random namespace and then violate the cross-namespace principles?

So hypothetically, an admin might run gluster in namespace "foo" and have a real service "gluster" (headless or no). They then want to use volumes from that gluster service in other namespaces. So an admin would automate / manually create persistent volumes that point to that gluster cluster. The volume settings would be "use the gluster cluster in namespace foo with name gluster". When a volume source is created for that persistent volume, it would be referencing that service.

I can buy that argument for persistent volumes, where it is an admin that is crossing the namespace boundary. It's a bit less shiny when it's a user's pod that is referencing Endpoints or Services in another namespace. As it stands, the endpoints must be in the same namespace as the pod. I don't think this is sufficient to handle what you are describing. This should probably become an ObjectRef, or else we should make it target a multi-record DNS name and treat that as an endpoints set (or something). Then we have to decide if it is kosher to write an Endpoints object that does not have an associated Service object.
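For reference, the external headless service mentioned in this thread could look roughly like the sketch below. This is an illustration only: the exact v1beta3 field shape, the `"portalIP": "None"` convention for headless services, and the placeholder port are assumptions, and because the service has no selector the matching Endpoints object would still be created by hand:

```json
{
  "kind": "Service",
  "apiVersion": "v1beta3",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "portalIP": "None",
    "ports": [
      {
        "port": 1,
        "protocol": "TCP"
      }
    ]
  }
}
```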
	// Required: Path is the Glusterfs volume path
	Path string `json:"path"`
	// Optional: Defaults to false (read/write). ReadOnly here will force
	// the Glusterfs to be mounted with read-only permissions
	ReadOnly bool `json:"readOnly,omitempty"`
}
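Outside the Kubernetes tree, the new type's JSON tags can be exercised on their own. The sketch below is a standalone reproduction, not the actual kubelet plumbing; `parseGlusterfsVolume` is a hypothetical helper that decodes the `"glusterfs"` object from this PR's sample volume spec:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// GlusterfsVolumeSource reproduces the type added in this PR so its JSON
// tags can be tried outside the Kubernetes tree.
type GlusterfsVolumeSource struct {
	// Required: EndpointsName is the endpoint name that details Glusterfs topology
	EndpointsName string `json:"endpoints"`
	// Required: Path is the Glusterfs volume name
	Path string `json:"path"`
	// Optional: ReadOnly forces a read-only mount when true
	ReadOnly bool `json:"readOnly,omitempty"`
}

// parseGlusterfsVolume is a hypothetical helper that decodes the "glusterfs"
// object of a volume spec into the struct above.
func parseGlusterfsVolume(raw string) GlusterfsVolumeSource {
	var src GlusterfsVolumeSource
	if err := json.Unmarshal([]byte(raw), &src); err != nil {
		panic(err)
	}
	return src
}

func main() {
	// The "glusterfs" object from the sample volume spec in this PR's README.
	src := parseGlusterfsVolume(`{"endpoints": "glusterfs-cluster", "path": "kube_vol", "readOnly": true}`)
	fmt.Println(src.EndpointsName, src.Path, src.ReadOnly)
	// prints: glusterfs-cluster kube_vol true
}
```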
// ContainerPort represents a network port in a single container
type ContainerPort struct {
	// Optional: If specified, this must be a DNS_LABEL. Each named port
nit: Clarify that this is the volume spec for the POD.