Commit fbcb91c: update README with details about broker push and broker registrar errands, as well as PXC and CredHub support [#155384701]

Julian Hjortshoj committed Jun 21, 2018 (parent commit 76a3f31)

Showing 1 changed file: README.md (39 additions, 33 deletions)
This is a bosh release that packages:
- an [nfsv3driver](https://github.com/cloudfoundry-incubator/nfsv3driver)
- [nfsbroker](https://github.com/cloudfoundry-incubator/nfsbroker)
- a sample NFS server with test shares
- a sample LDAP server with prepopulated accounts to match the NFS test server

The broker and driver allow you to provision existing NFS volumes and bind those volumes to your applications for shared file access.

The test NFS and LDAP servers provide easy test targets with which you can try out volume mounts.

# Deploying to Cloud Foundry

As of version 1.2.0 we no longer support old cf-release deployments with bosh v1.
```bash
$ bosh -e my-env -d cf deploy cf.yml -v deployment-vars.yml -o operations/enable-nfs-volume-service.yml
```

**Note:** the above command is an example, but your deployment command should match the one you used to deploy Cloud Foundry initially, with the addition of a `-o operations/enable-nfs-volume-service.yml` option.

3. **If you are using cf-deployment version >= 2.0**, then the ops file will deploy the `nfsbrokerpush` bosh errand rather than running nfsbroker as a bosh job. You must invoke the errand to push the broker to Cloud Foundry, where it will run as an application.
```bash
$ bosh -e my-env -d cf run-errand nfs-broker-push
```


Your CF deployment will now have a running service broker and volume drivers, ready to mount nfs volumes.

If you wish to also deploy the NFS test server, you can include this [operations file](https://github.com/cloudfoundry/nfs-volume-release/blob/master/operations/enable-nfs-test-server.yml) with a `-o` flag also. That will create a separate VM with nfs exports that you can use to experiment with volume mounts.
> Note: by default, the nfs test server expects that your CF deployment is deployed to a 10.x.x.x subnet. If you are deploying to a subnet that is not 10.x.x.x (e.g. 192.168.x.x) then you will need to override the `export_cidr` property.
> Edit the generated manifest, and replace this line:
> ` nfstestserver: {}`
> with something like this:
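> The replacement block itself is collapsed in this diff view; a plausible completion, assuming the `export_cidr` property named above (the CIDR value is illustrative and should match your own subnet):
>
> ```yaml
> nfstestserver:
>   export_cidr: 192.168.0.0/16
> ```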
* Register the broker and grant access to its service with the following commands:

```bash
$ bosh -e my-env -d cf run-errand nfs-broker-registrar
$ cf enable-service-access nfs
```
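Once access is enabled, you can confirm the registration with standard cf CLI commands (output will vary by deployment):

```bash
$ cf service-brokers
$ cf marketplace
```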

## Create an NFS volume service
* If you are testing against the `nfstestserver` job packaged in this release, type the following:
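The exact command is collapsed in this diff view; a plausible sketch, assuming the `nfs` service with its `Existing` plan, where the share address is illustrative and should point at your test server's export:

```bash
$ cf create-service nfs Existing myVolume -c '{"share":"nfstestserver.service.cf.internal/export/users"}'
```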
### NFS v4 (Experimental):

To provide our existing `nfs` service capabilities we use a libfuse implementation that only supports nfsv3 and has some performance constraints.

If you require nfsv4, better performance, or both, you can try the new experimental nfsv4 support offered through a new nfsbroker plan called `nfs-experimental`. The `nfs-experimental` plan accepts a `version` parameter to determine which nfs protocol version to use.

* type the following:
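The command itself is collapsed in this diff view; a plausible sketch, assuming the `nfs-experimental` plan and `version` parameter described above (the plan name, share address, and version value are illustrative):

```bash
$ cf create-service nfs-experimental Existing myVolume -c '{"share":"nfstestserver.service.cf.internal/export/users","version":"4.1"}'
```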
```bash
$ cf bind-service pora myVolume -c '{"uid":"1000","gid":"1000"}'
```
> #### Bind Parameters
> * **uid** and **gid:** When binding the nfs service to the application, the uid and gid specified are supplied to the nfs driver. The nfs driver translates the application user id and group id to the specified uid and gid when sending traffic to the nfs server, and translates this uid and gid back to the running user uid and default gid when returning attributes from the server. This allows you to interact with your nfs server as a specific user while allowing Cloud Foundry to run your application as an arbitrary user.
> * **mount:** By default, volumes are mounted into the application container in an arbitrarily named folder under /var/vcap/data. If you prefer to mount your directory to some specific path where your application expects it, you can control the container mount path by specifying the `mount` option. The resulting bind command would look something like
> ``` cf bind-service pora myVolume -c '{"uid":"0","gid":"0","mount":"/var/path"}'```
> * **readonly:** Set true if you want the mounted volume to be read only.
>
> As of nfs-volume-release version 1.3.1, bind parameters may also be specified in configuration during service instance creation. Specifying bind parameters in advance when creating the service instance is particularly helpful when binding services to an application in the application manifest, where bind configuration is not supported.
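> For example, the bind parameters above could be supplied up front when the service instance is created, leaving the bind step parameter-free (a sketch; the share address is illustrative):
>
> ```bash
> $ cf create-service nfs Existing myVolume -c '{"share":"nfstestserver.service.cf.internal/export/users","uid":"1000","gid":"1000","mount":"/var/path"}'
> $ cf bind-service pora myVolume
> ```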
* Start the application
```bash
$ cf start pora
```
* To check whether the app is running, `curl http://pora.YOUR.DOMAIN.com` should return the instance index for your app.
* To check whether the app can access the shared volume, `curl http://pora.YOUR.DOMAIN.com/write` writes a file to the share and then reads it back out again.

> # Security Note
> Because connecting to NFS shares will require you to open your NFS mountpoint to all Diego cells, and outbound traffic from application containers is NATed to the Diego cell IP address, there is a risk that an application could initiate an NFS IP connection to your share and gain unauthorized access to data.
>
> To mitigate this risk, consider one or more of the following steps:
# BBR Support
If you are using [Bosh Backup and Restore](https://docs.cloudfoundry.org/bbr/) (BBR) to keep backups of your Cloud Foundry deployment, consider including the [enable-nfs-broker-backup.yml](https://github.com/cloudfoundry/cf-deployment/blob/master/operations/experimental/enable-nfs-broker-backup.yml) operations file from cf-deployment when you redeploy Cloud Foundry. This file will install the requisite backup and restore scripts for nfs service broker metadata on the backup/restore VM.

# (Experimental) Support for PXC databases
If you plan to enable the [PXC database](https://github.com/cloudfoundry/cf-deployment/blob/master/operations/experimental/use-pxc.yml) in your Cloud Foundry deployment, you will need to apply the following ops file to allow the nfs broker to connect to PXC instead of MySQL:
- [use-pxc-for-nfs-broker.yml](https://github.com/cloudfoundry/nfs-volume-release/blob/master/operations/use-pxc-for-nfs-broker.yml)

Note that because PXC enables TLS using a server certificate, the nfs broker will no longer be able to connect to it using an IP address. As a result, you must also apply ops files to enable BOSH DNS, and to apply BOSH DNS to application containers, so that the nfs broker can connect to PXC using a host name:
- [use-bosh-dns.yml](https://github.com/cloudfoundry/cf-deployment/blob/master/operations/experimental/use-bosh-dns.yml)
- [use-bosh-dns-for-containers.yml](https://github.com/cloudfoundry/cf-deployment/blob/master/operations/experimental/use-bosh-dns-for-containers.yml)
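Putting these together, a redeploy command might look like the following (a sketch; the paths assume you are running from a cf-deployment checkout, with the nfs-volume-release ops file downloaded into the working directory):

```bash
$ bosh -e my-env -d cf deploy cf.yml -v deployment-vars.yml \
    -o operations/enable-nfs-volume-service.yml \
    -o operations/experimental/use-pxc.yml \
    -o operations/experimental/use-bosh-dns.yml \
    -o operations/experimental/use-bosh-dns-for-containers.yml \
    -o use-pxc-for-nfs-broker.yml
```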

# (Experimental) Support for CredHub as a backing store for nfs broker

Version 1.4.0 introduces support for using CredHub instead of a SQL database to store state for the nfs broker. CredHub has the advantage that it encrypts data at rest and is therefore a more secure store for service instance and service binding metadata. CredHub is required if you are using the LDAP integration and wish to specify user credentials at service instance creation time rather than at service binding time. To use CredHub as the backing store for the nfs broker, apply this ops file:
- [enable-nfs-volume-service-credhub.yml](https://github.com/cloudfoundry/nfs-volume-release/blob/master/operations/enable-nfs-volume-service-credhub.yml)
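Applied alongside the deploy command shown earlier in this README, that might look like the following (a sketch; the path assumes the ops file has been downloaded into the working directory):

```bash
$ bosh -e my-env -d cf deploy cf.yml -v deployment-vars.yml \
    -o enable-nfs-volume-service-credhub.yml
```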

Note that this ops file will install a separate errand for the CredHub-enabled broker. To push and register that broker, run the following:

```bash
$ bosh -e my-env -d cf run-errand nfs-broker-credhub-push
$ bosh -e my-env -d cf run-errand nfs-broker-credhub-registrar
```

# Troubleshooting
If you have trouble getting this release to operate properly, try consulting the [Volume Services Troubleshooting Page](https://github.com/cloudfoundry-incubator/volman/blob/master/TROUBLESHOOTING.md).
