Adding install doc and updating example
dlakhaws committed Oct 30, 2023
1 parent fcc3bce commit 8674e05
Showing 2 changed files with 129 additions and 7 deletions.
117 changes: 117 additions & 0 deletions docs/install.md
# Installation

## Prerequisites

* Kubernetes Version >= 1.20

* If you are using a self-managed cluster, ensure the `--allow-privileged=true` flag is set for `kube-apiserver`.
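
On a self-managed control plane, one quick way to verify the flag — assuming a kubeadm-style setup with the API server running as a static pod at the default manifest path — is:
```sh
# Run on a control plane node; the manifest path assumes a kubeadm setup.
grep allow-privileged /etc/kubernetes/manifests/kube-apiserver.yaml
```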

## Installation

### Cluster setup (optional)
If you don't have an existing cluster, you can follow these steps to set up an EKS cluster.

#### Set a cluster name and region
```
export CLUSTER_NAME=mountpoint-s3-csi-cluster
export REGION=us-west-2
```

#### Create cluster

```
eksctl create cluster \
--name $CLUSTER_NAME \
--region $REGION \
--with-oidc \
--ssh-access \
--ssh-public-key <my-key>
```
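
Cluster creation takes several minutes; once it finishes, you can confirm the cluster exists:
```
eksctl get cluster --name $CLUSTER_NAME --region $REGION
```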

#### Set up the kubectl context

> Ensure that you are using AWS CLI v2 before executing this command.
```
aws eks update-kubeconfig --region $REGION --name $CLUSTER_NAME
```
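
To confirm the CLI version and that the new context points at your cluster:
```
aws --version       # should report aws-cli/2.x
kubectl get nodes   # nodes should eventually report Ready
```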

### Set up driver permissions
The driver requires IAM permissions to talk to Amazon S3 to manage the volume on the user's behalf. AWS maintains a managed policy, available at the ARN `arn:aws:iam::aws:policy/AmazonS3FullAccess`.

For more information, review ["Creating the Amazon Mountpoint for S3 CSI driver IAM role for service accounts" from the EKS User Guide.](TODO: add AWS docs link)
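
As a hedged example, an IAM role for the driver's service account can be created with `eksctl`. The role name below is illustrative, and `s3-csi-driver-sa` matches the service account referenced later in this document:
```
# The role name is illustrative; AmazonS3FullAccess is the managed policy
# mentioned above. Scope permissions down for production workloads.
eksctl create iamserviceaccount \
    --name s3-csi-driver-sa \
    --namespace kube-system \
    --cluster $CLUSTER_NAME \
    --region $REGION \
    --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
    --role-name AmazonS3CSIDriverFullAccess \
    --approve
```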

### Deploy driver
You may deploy the Mountpoint for S3 CSI driver via Kustomize, Helm, or as an Amazon EKS managed add-on.

#### Kustomize
```sh
kubectl apply -k "github.com/awslabs/mountpoint-s3-csi-driver/deploy/kubernetes/overlays/stable"
```
*Note: Using the main branch to deploy the driver is not supported as the main branch may contain upcoming features incompatible with the currently released stable version of the driver.*
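
To pin the deployment to a specific released version rather than the floating stable overlay, you can append a release ref (the version placeholder is intentionally left for you to fill in, mirroring the uninstall command later in this document):
```sh
kubectl apply -k "github.com/awslabs/mountpoint-s3-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-<YOUR-CSI-DRIVER-VERSION-NUMBER>"
```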

#### Helm
- Add the `aws-mountpoint-s3-csi-driver` Helm repository.
```sh
helm repo add aws-mountpoint-s3-csi-driver ???
helm repo update
```

- Install the latest release of the driver.
```sh
helm upgrade --install aws-mountpoint-s3-csi-driver \
--namespace kube-system \
aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver
```

Review the [configuration values](https://github.com/awslabs/mountpoint-s3-csi-driver/blob/main/charts/aws-mountpoint-s3-csi-driver/values.yaml) for the Helm chart.
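
To override any of those defaults, the standard Helm pattern is to supply your own values file (`my-values.yaml` below is a hypothetical file name):
```sh
# my-values.yaml is hypothetical; put your overrides of the chart's
# default values in it.
helm upgrade --install aws-mountpoint-s3-csi-driver \
    --namespace kube-system \
    -f my-values.yaml \
    aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver
```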

#### Verify the deployment
Once the driver has been deployed, verify that its pods are running:
```sh
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-mountpoint-s3-csi-driver
```
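
You can also check that the CSI driver object is registered with the cluster; `s3.csi.aws.com` is the driver name assumed here:
```sh
kubectl get csidriver s3.csi.aws.com
```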

### Uninstalling the driver

Uninstall the self-managed Mountpoint for S3 CSI Driver with either Helm or Kustomize, depending on your installation method. If you are using the driver as an EKS add-on, see the [EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html).

#### Helm

```
helm uninstall aws-mountpoint-s3-csi-driver --namespace kube-system
```

#### Kustomize

```
kubectl delete -k "github.com/awslabs/mountpoint-s3-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-<YOUR-CSI-DRIVER-VERSION-NUMBER>"
```

### Cleanup
#### Kustomize
Delete the pod
```
kubectl delete -f examples/kubernetes/static_provisioning/static_provisioning.yaml
```

Note: If you use `kubectl delete -k deploy/kubernetes/overlays/dev` to delete the driver itself, it will also delete the service account. To avoid having to re-annotate the service account the next time you deploy the driver, you can change `node-serviceaccount.yaml` to the following:
```
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-csi-driver-sa
  labels:
    app.kubernetes.io/name: aws-mountpoint-s3-csi-driver
    app.kubernetes.io/managed-by: eksctl
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::151381207180:role/AmazonS3CSIDriverFullAccess # CHANGE THIS ARN
```
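
If you need to look up the ARN to substitute, one option is the AWS CLI, assuming the role is named `AmazonS3CSIDriverFullAccess` as above:
```
aws iam get-role --role-name AmazonS3CSIDriverFullAccess --query 'Role.Arn' --output text
```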

#### Helm
Uninstall the driver
```
helm uninstall aws-mountpoint-s3-csi-driver --namespace kube-system
```
Note: This will not delete the service account.
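
If you want to remove the service account as well, delete it explicitly:
```
kubectl delete serviceaccount s3-csi-driver-sa --namespace kube-system
```
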
19 changes: 12 additions & 7 deletions examples/kubernetes/static_provisioning/README.md
# Static Provisioning Example
This example shows how to mount a statically provisioned S3 persistent volume (PV) inside a container.

## Configure
### Edit [Persistent Volume](https://github.com/awslabs/mountpoint-s3-csi-driver/blob/main/examples/kubernetes/static_provisioning/static_provisioning.yaml)
> Note: This example assumes your S3 bucket has already been created. If you need to create a bucket, follow the [S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html).
- Bucket name (required): `PersistentVolume -> csi -> volumeHandle`
- Bucket region (if bucket and cluster are in different regions): `PersistentVolume -> csi -> mountOptions`
- [Mountpoint configurations](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md) can be added in the `mountOptions` of the Persistent Volume spec, as in the sketch below.
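
A minimal sketch of the relevant `PersistentVolume` fields follows; the driver name `s3.csi.aws.com`, the bucket name, and the region are assumptions for illustration, so refer to the linked example manifest for the authoritative spec:
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1200Gi              # required by Kubernetes, though S3 has no fixed size
  accessModes:
    - ReadWriteMany
  mountOptions:
    - region us-west-2           # Mountpoint option; set if bucket and cluster regions differ
  csi:
    driver: s3.csi.aws.com       # assumed driver name
    volumeHandle: my-bucket-name # illustrative; replace with your bucket name
```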

## Deploy
```
kubectl apply -f examples/kubernetes/static_provisioning/static_provisioning.yaml
```

## Check that the pod is running
```
kubectl get pod fc-app
```

## [Optional] Check that fc-app created a file in S3
```
$ aws s3 ls <bucket_name>
> 2023-09-18 17:36:17 26 Mon Sep 18 17:36:14 UTC 2023.txt
```

## Cleanup
```
kubectl delete -f examples/kubernetes/static_provisioning/static_provisioning.yaml
```
