
Fixed typos and issues in examples/volumes/glusterfs/README.md #43703

Merged · 1 commit · Apr 23, 2017
71 changes: 35 additions & 36 deletions examples/volumes/glusterfs/README.md
@@ -1,34 +1,35 @@
-## Glusterfs
+## GlusterFS

-[Glusterfs](http://www.gluster.org) is an open source scale-out filesystem. These examples provide information about how to allow containers use Glusterfs volumes.
+[GlusterFS](http://www.gluster.org) is an open source scale-out filesystem. These examples show how to allow containers to use GlusterFS volumes.

-The example assumes that you have already set up a Glusterfs server cluster and the Glusterfs client package is installed on all Kubernetes nodes.
+The example assumes that you have already set up a GlusterFS server cluster and have a working GlusterFS volume ready to use in the containers.

### Prerequisites

-Set up Glusterfs server cluster; install Glusterfs client package on the Kubernetes nodes. ([Guide](http://gluster.readthedocs.io/en/latest/Administrator%20Guide/))
+* Set up a GlusterFS server cluster
+* Create a GlusterFS volume
+* If you are not using hyperkube, you may need to install the GlusterFS client package on the Kubernetes nodes ([Guide](http://gluster.readthedocs.io/en/latest/Administrator%20Guide/))

### Create endpoints

-Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json),
+The first step is to create the GlusterFS endpoints definition in Kubernetes. Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json):

```
-"addresses": [
-    {
-        "IP": "10.240.106.152"
-    }
-],
-"ports": [
-    {
-        "port": 1
-    }
-]
+"subsets": [
+    {
+        "addresses": [{ "ip": "10.240.106.152" }],
+        "ports": [{ "port": 1 }]
+    },
+    {
+        "addresses": [{ "ip": "10.240.79.157" }],
+        "ports": [{ "port": 1 }]
+    }
+]
```

-The "IP" field should be filled with the address of a node in the Glusterfs server cluster. In this example, it is fine to give any valid value (from 1 to 65535) to the "port" field.
+The `subsets` field should be populated with the addresses of the nodes in the GlusterFS cluster. It is fine to provide any valid value (from 1 to 65535) in the `port` field.
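
Pieced together from the snippet above, a complete Endpoints manifest would look roughly as follows. This is a sketch, not necessarily the exact contents of [glusterfs-endpoints.json](glusterfs-endpoints.json): the `kind`, `apiVersion`, and `metadata` fields are assumed to follow the standard v1 API shape, with the object named `glusterfs-cluster` to match the rest of the example.

```json
{
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
        "name": "glusterfs-cluster"
    },
    "subsets": [
        {
            "addresses": [{ "ip": "10.240.106.152" }],
            "ports": [{ "port": 1 }]
        },
        {
            "addresses": [{ "ip": "10.240.79.157" }],
            "ports": [{ "port": 1 }]
        }
    ]
}
```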

-Create the endpoints,
+Create the endpoints:

```sh
$ kubectl create -f examples/volumes/glusterfs/glusterfs-endpoints.json
@@ -42,7 +43,7 @@ NAME                ENDPOINTS
glusterfs-cluster 10.240.106.152:1,10.240.79.157:1
```

-We need also create a service for this endpoints, so that the endpoints will be persistented. We will add this service without a selector to tell Kubernetes we want to add its endpoints manually. You can see [glusterfs-service.json](glusterfs-service.json) for details.
+We also need to create a service for these endpoints, so that they will persist. We will add this service without a selector to tell Kubernetes we want to add its endpoints manually. You can see [glusterfs-service.json](glusterfs-service.json) for details.
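
For reference, such a selector-less Service could be sketched like this (an assumed shape, not necessarily the exact contents of [glusterfs-service.json](glusterfs-service.json); the key points are that the Service name must equal the Endpoints name, `glusterfs-cluster`, and that no `selector` is set so Kubernetes does not manage the endpoints itself):

```json
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "glusterfs-cluster"
    },
    "spec": {
        "ports": [
            { "port": 1 }
        ]
    }
}
```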

Use this command to create the service:

@@ -51,24 +52,26 @@ $ kubectl create -f examples/volumes/glusterfs/glusterfs-service.json
```


-### Create a POD
+### Create a Pod

-The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration.
+The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration:

```json
-{
-    "name": "glusterfsvol",
-    "glusterfs": {
-        "endpoints": "glusterfs-cluster",
-        "path": "kube_vol",
-        "readOnly": true
-    }
-}
+"volumes": [
+    {
+        "name": "glusterfsvol",
+        "glusterfs": {
+            "endpoints": "glusterfs-cluster",
+            "path": "kube_vol",
+            "readOnly": true
+        }
+    }
+]
```

The parameters are explained as follows.

-- **endpoints** is endpoints name that represents a Gluster cluster configuration. *kubelet* is optimized to avoid mount storm, it will randomly pick one from the endpoints to mount. If this host is unresponsive, the next Gluster host in the endpoints is automatically selected.
+- **endpoints** is the name of the Endpoints object that represents a Gluster cluster configuration. *kubelet* is optimized to avoid a mount storm: it will randomly pick one host from the endpoints to mount. If this host is unresponsive, the next Gluster host in the endpoints is automatically selected.
- **path** is the Glusterfs volume name.
- **readOnly** is the boolean that sets the mountpoint readOnly or readWrite.
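
Putting that volume spec in context, a minimal Pod definition along the lines of [glusterfs-pod.json](glusterfs-pod.json) might look like the sketch below. The pod and container names, the `nginx` image, and the `/mnt/glusterfs` mount path follow the example files referenced in this PR; treat the exact layout as illustrative rather than a verbatim copy.

```json
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "glusterfs"
    },
    "spec": {
        "containers": [
            {
                "name": "glusterfs",
                "image": "nginx",
                "volumeMounts": [
                    {
                        "mountPath": "/mnt/glusterfs",
                        "name": "glusterfsvol"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "glusterfsvol",
                "glusterfs": {
                    "endpoints": "glusterfs-cluster",
                    "path": "kube_vol",
                    "readOnly": true
                }
            }
        ]
    }
}
```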

@@ -84,17 +87,13 @@ You can verify that the pod is running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
glusterfs 1/1 Running 0 3m

-$ kubectl get pods glusterfs -t '{{.status.hostIP}}{{"\n"}}'
-10.240.169.172
```

-You may ssh to the host (the hostIP) and run 'mount' to see if the Glusterfs volume is mounted,
+You may execute the command `mount` inside the container to see if the GlusterFS volume is mounted correctly:
**Contributor (humblec):** The below `mount` command is from the host. So, this correction is invalid.

**Contributor Author (metachris):** How come? I believe `kubectl exec <pod> -- <cmd>` runs `<cmd>` inside the container. See also Running Commands in a Container with kubectl exec.

For instance, when I execute this command, I also see the processes from within the container:

```
$ kubectl exec hello-2096222913-x8p6b -- ps
PID   USER     TIME   COMMAND
    1 root       0:00 /usr/bin/hello
   19 root       0:00 ps
```

**Contributor (@humblec, Apr 6, 2017):** @metachris I am aware of the method to run a command from the container :). What I was trying to point out here is:

```
-$ mount | grep kube_vol
-10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
```

This is from the host. The Gluster plugin mounts the share on the host and then bind-mounts it into the container. Does that make sense?

**Contributor Author (metachris):** Yes, what you say makes sense.

I think the correction is still an improvement, because with this command you can still verify that the container has the GlusterFS volume mounted correctly, it's easier to execute with `kubectl exec` instead of ssh'ing to the node, and it's easier to see if multiple containers mount a Gluster volume on this host.

**Contributor (humblec):** @metachris What about keeping both?

**Contributor Author (metachris):** @humblec Of course this would be an option. But the reason I removed the old example in the first place was that the given command `kubectl get pods glusterfs -t '{{.status.hostIP}}{{"\n"}}'` does not work (`Error: unknown shorthand flag: 't' in -t`).

The new example works, covers the complete use-case, and is easier (only a single `exec` command, no ssh'ing into the node).

**Contributor (humblec):** ACK.

```sh
-$ mount | grep kube_vol
-10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
+$ kubectl exec glusterfs -- mount | grep gluster
+10.240.106.152:kube_vol on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
```

You may also run `docker ps` on the host to see the actual container.

2 changes: 1 addition & 1 deletion examples/volumes/glusterfs/glusterfs-pod.json
@@ -8,7 +8,7 @@
"containers": [
{
"name": "glusterfs",
-"image": "kubernetes/pause",
+"image": "nginx",
"volumeMounts": [
{
"mountPath": "/mnt/glusterfs",