Update README with secrets #24

Merged
merged 2 commits on Feb 13, 2019
1 change: 1 addition & 0 deletions .travis.yml
@@ -15,5 +15,6 @@ script:
- make
- ./hack/verify-all
- make test
- make test-sanity
- go test -covermode=count -coverprofile=profile.cov ./pkg/...
- $GOPATH/bin/goveralls -coverprofile=profile.cov -service=travis-ci
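For reference, the same checks can be reproduced locally before pushing. A minimal sketch, assuming GNU make and a configured Go toolchain; the goveralls upload is omitted since it needs a Coveralls token and only makes sense in CI:

```sh
>> git clone https://github.com/aws/aws-fsx-csi-driver.git
>> cd aws-fsx-csi-driver
>> make                # build the driver binary
>> ./hack/verify-all   # run the repo's verification scripts
>> make test           # unit tests
>> make test-sanity    # the sanity-test target added in this change
>> go test -covermode=count -coverprofile=profile.cov ./pkg/...
```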
8 changes: 8 additions & 0 deletions deploy/kubernetes/secret.yaml
@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: aws-secret
namespace: kube-system
stringData:
key_id: "[aws_access_key_id]"
access_key: "[aws_secret_access_key]"
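Instead of editing the manifest, the same secret can be created directly from the command line. A sketch using kubectl's generic secret support, with the name, namespace, and keys taken from the manifest above and the placeholder values standing in for your own AWS credentials:

```sh
>> kubectl create secret generic aws-secret \
     --namespace kube-system \
     --from-literal=key_id="[aws_access_key_id]" \
     --from-literal=access_key="[aws_secret_access_key]"
```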
18 changes: 15 additions & 3 deletions docs/README.md
@@ -33,11 +33,23 @@ Following sections are Kubernetes specific. If you are Kubernetes user, use foll
* Dynamic provisioning - uses a persistent volume claim (PVC) to let Kubernetes create the FSx for Lustre filesystem for you and consumes the volume from inside the container.

### Installation
Deploy the driver using the following steps:
Check out the project:
```sh
>> git clone https://github.com/aws/aws-fsx-csi-driver.git
>> cd aws-fsx-csi-driver
```

Edit the [secret manifest](../deploy/kubernetes/secret.yaml) using your favorite text editor. The credentials in the secret should have sufficient permissions to create an FSx for Lustre filesystem. Then deploy the secret:

```sh
>> kubectl apply -f deploy/kubernetes/secret.yaml
```
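To confirm the secret landed in the right namespace, you can check for it with kubectl (the name and namespace come from the manifest above):

```sh
>> kubectl get secret aws-secret -n kube-system
```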

Deploy the driver:

```sh
>> kubectl apply -f deploy/kubernetes/controller.yaml
>> kubectl apply -f deploy/kubernetes/node.yaml
```
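Once the manifests are applied, the controller and node pods should come up. A quick sanity check, assuming the driver deploys into the kube-system namespace like the secret above and that its pod names contain "fsx":

```sh
>> kubectl get pods -n kube-system | grep fsx   # pod names assumed to contain "fsx"
```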

### Examples
16 changes: 8 additions & 8 deletions examples/kubernetes/dynamic_provisioning/README.md
@@ -34,21 +34,21 @@ Update `spec.resource.requests.storage` with the storage capacity to request. Th

### Deploy the Application
Create PVC, storageclass and the pod that consumes the PV:
```sh
>> kubectl apply -f examples/kubernetes/dynamic_provisioning/storageclass.yaml
>> kubectl apply -f examples/kubernetes/dynamic_provisioning/claim.yaml
>> kubectl apply -f examples/kubernetes/dynamic_provisioning/pod.yaml
```
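Dynamic provisioning creates a new FSx for Lustre filesystem behind the scenes, which can take several minutes, so the claim will not bind immediately. You can watch it until it reaches the Bound state:

```sh
>> kubectl get pvc --watch   # wait until STATUS shows Bound
```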

### Check that the Application uses FSx for Lustre filesystem
After the objects are created, verify that the pod is running:

```sh
>> kubectl get pods
```
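If the pod is stuck in a non-Running state, describing it usually surfaces provisioning or mount errors in the event list:

```sh
>> kubectl describe pod fsx-app
```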

Also verify that data is written to the FSx for Lustre filesystem:

```sh
>> kubectl exec -ti fsx-app -- tail -f /data/out.txt
```
26 changes: 13 additions & 13 deletions examples/kubernetes/multiple_pods/README.md
@@ -26,32 +26,32 @@ Replace `volumeHandle` with `FileSystemId` and `dnsname` with `DNSName`. Note th

You can get both `FileSystemId` and `DNSName` using the AWS CLI:

```sh
>> aws fsx describe-file-systems
```
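If you have many filesystems, a JMESPath query can narrow the output down to just the two fields you need; the query expression here is illustrative:

```sh
>> aws fsx describe-file-systems --query 'FileSystems[*].{FileSystemId:FileSystemId,DNSName:DNSName}'
```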

### Deploy the Application
Create the PV, persistent volume claim (PVC), storageclass and the pods that consume the PV:
```sh
>> kubectl apply -f examples/kubernetes/multiple_pods/storageclass.yaml
>> kubectl apply -f examples/kubernetes/multiple_pods/pv.yaml
>> kubectl apply -f examples/kubernetes/multiple_pods/claim.yaml
>> kubectl apply -f examples/kubernetes/multiple_pods/pod1.yaml
>> kubectl apply -f examples/kubernetes/multiple_pods/pod2.yaml
```
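If your cluster has more than one node, the two pods may be scheduled onto different nodes while still sharing the same filesystem; the wide output shows their node placement:

```sh
>> kubectl get pods -o wide   # NODE column shows where each pod landed
```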

Both pod1 and pod2 are writing to the same FSx for Lustre filesystem at the same time.

### Check that the Application uses FSx for Lustre filesystem
After the objects are created, verify that the pods are running:

```sh
>> kubectl get pods
```

Also verify that data is written to the FSx for Lustre filesystem:

```sh
>> kubectl exec -ti app1 -- tail -f /data/out1.txt
>> kubectl exec -ti app2 -- tail -f /data/out2.txt
```
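Since both pods mount the same filesystem, each pod can also read the other's output file, which is a quick way to confirm the volume really is shared (file paths taken from the example above):

```sh
>> kubectl exec -ti app1 -- tail /data/out2.txt   # app1 reads the file written by app2
>> kubectl exec -ti app2 -- tail /data/out1.txt   # app2 reads the file written by app1
```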
22 changes: 11 additions & 11 deletions examples/kubernetes/static_provisioning/README.md
@@ -23,28 +23,28 @@ spec:
```
Replace `volumeHandle` with `FileSystemId` and `dnsname` with `DNSName`. You can get both `FileSystemId` and `DNSName` using the AWS CLI:

```sh
>> aws fsx describe-file-systems
```

### Deploy the Application
Create the PV, persistent volume claim (PVC), storageclass and the pod that consumes the PV:
```sh
>> kubectl apply -f examples/kubernetes/static_provisioning/storageclass.yaml
>> kubectl apply -f examples/kubernetes/static_provisioning/pv.yaml
>> kubectl apply -f examples/kubernetes/static_provisioning/claim.yaml
>> kubectl apply -f examples/kubernetes/static_provisioning/pod.yaml
```
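Because the PV was created ahead of time in static provisioning, the claim should bind almost immediately; confirm both sides show a Bound status:

```sh
>> kubectl get pv    # STATUS should show Bound
>> kubectl get pvc
```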

### Check that the Application uses FSx for Lustre filesystem
After the objects are created, verify that the pod is running:

```sh
>> kubectl get pods
```

Also verify that data is written to the FSx for Lustre filesystem:

```sh
>> kubectl exec -ti fsx-app -- tail -f /data/out.txt
```
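When you are done with the example, the same manifests can be used to tear everything down. Note that for statically provisioned volumes the pre-created FSx filesystem itself is typically not deleted by Kubernetes:

```sh
>> kubectl delete -f examples/kubernetes/static_provisioning/pod.yaml
>> kubectl delete -f examples/kubernetes/static_provisioning/claim.yaml
>> kubectl delete -f examples/kubernetes/static_provisioning/pv.yaml
>> kubectl delete -f examples/kubernetes/static_provisioning/storageclass.yaml
```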