Add example for Lustre max cached size tuning #109

Merged
merged 1 commit on Oct 22, 2019
35 changes: 35 additions & 0 deletions examples/kubernetes/max_cache_tuning/README.md
@@ -0,0 +1,35 @@
## Tuning Lustre Max Memory Cache
This example shows how to set the Lustre `llite.*.max_cached_mb` parameter using an init container. The Lustre client caches data in the kernel module at the host level, so the cache is not counted toward the application container's memory limit. It is sometimes desirable to reduce the Lustre cache size to limit memory consumption on the host. In this example the maximum cache size is set to 32MB, but other values may be chosen depending on what makes sense for the workload.
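
Before lowering the value, the current setting can be inspected on any node that has the FSx for Lustre filesystem mounted (a minimal sketch, assuming the Lustre client tools are installed on the node):
```
# Print the current client-side cache limit (in MB) for each mounted Lustre filesystem
lctl get_param llite.*.max_cached_mb
```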

### Edit [Pod](./specs/pod.yaml)
```
apiVersion: v1
kind: Pod
metadata:
  name: fsx-app
spec:
  initContainers:
    - name: set-lustre-cache
      image: amazon/aws-fsx-csi-driver:latest
      securityContext:
        privileged: true
      command: ["/sbin/lctl"]
      args: ["set_param", "llite.*.max_cached_mb=32"]
  containers:
    - name: app
      image: amazonlinux:2
      command: ["/bin/sh"]
      args: ["-c", "sleep 999999"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: fsx-claim
```
The `fsx-app` pod has an init container that sets `llite.*.max_cached_mb` using `lctl`.
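
To try the example end to end, apply the specs in the usual order (a sketch; it assumes the FSx CSI driver is installed and that `subnetId`/`securityGroupIds` in the storage class have been edited for your VPC):
```
kubectl apply -f examples/kubernetes/max_cache_tuning/specs/storageclass.yaml
kubectl apply -f examples/kubernetes/max_cache_tuning/specs/claim.yaml
kubectl apply -f examples/kubernetes/max_cache_tuning/specs/pod.yaml
```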

## Notes
* The aws-fsx-csi-driver image is reused in the init container for the `lctl` command. You could choose your own container image for this purpose, as long as the Lustre client user-space tool `lctl` is available inside the image.
* The init container must run as privileged, as required by `lctl`; its log can be used to confirm the setting was applied (see below).
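
Once the pod is scheduled, the init container's log should show the value it applied (a sketch; the exact output of `lctl set_param` may vary by Lustre version):
```
# lctl set_param echoes the parameter it set, e.g. llite.<fsname>-ffff.max_cached_mb=32
kubectl logs fsx-app -c set-lustre-cache
```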
11 changes: 11 additions & 0 deletions examples/kubernetes/max_cache_tuning/specs/claim.yaml
@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsx-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fsx-sc
  resources:
    requests:
      storage: 1200Gi
24 changes: 24 additions & 0 deletions examples/kubernetes/max_cache_tuning/specs/pod.yaml
@@ -0,0 +1,24 @@
apiVersion: v1
kind: Pod
metadata:
  name: fsx-app
spec:
  initContainers:
    - name: set-lustre-cache
      image: amazon/aws-fsx-csi-driver:latest
      securityContext:
        privileged: true
      command: ["/sbin/lctl"]
      args: ["set_param", "llite.*.max_cached_mb=32"]
  containers:
    - name: app
      image: amazonlinux:2
      command: ["/bin/sh"]
      args: ["-c", "sleep 999999"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: fsx-claim
10 changes: 10 additions & 0 deletions examples/kubernetes/max_cache_tuning/specs/storageclass.yaml
@@ -0,0 +1,10 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fsx-sc
provisioner: fsx.csi.aws.com
parameters:
  subnetId: subnet-0d7b5e117ad7b4961
  securityGroupIds: sg-05a37bfe01467059a
mountOptions:
  - flock