
Update information about changing Fluentd persistence #1294

Merged: 1 commit, Jan 12, 2021
5 changes: 4 additions & 1 deletion deploy/docs/Best_Practices.md
@@ -146,10 +146,13 @@ key as follows:

```diff
 fluentd:
   ## Persist data to a persistent volume; When enabled, fluentd uses the file buffer instead of memory buffer.
   persistence:
-    ## After setting the value to true, run the helm upgrade command with the --force flag.
+    ## After changing this value please follow steps described in:
+    ## https://github.com/SumoLogic/sumologic-kubernetes-collection/blob/main/deploy/docs/FluentdPersistence.md
     enabled: true
```

After changing the Fluentd persistence setting (enabling or disabling it), follow the steps described in [Fluentd Persistence](FluentdPersistence.md).

Additional buffering and flushing parameters can be added in the `extraConf`,
in the `fluentd` buffer section.

367 changes: 367 additions & 0 deletions deploy/docs/FluentdPersistence.md
@@ -0,0 +1,367 @@
# Fluentd persistence

Starting with `v2.0.0`, we use a file-based buffer for Fluentd by default, instead of the less reliable in-memory buffer.

The buffer configuration can be set in the `values.yaml` file under the `fluentd`
key as follows:

```yaml
fluentd:
  persistence:
    enabled: true
```

Changing the Fluentd persistence setting (enabling or disabling it) requires recreating or deleting the existing Fluentd StatefulSets, because a `volumeClaimTemplate` cannot be added to or removed from an existing StatefulSet.
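For reference, the `volumeClaimTemplates` entry that persistence adds to the StatefulSet `spec` looks roughly like this (a sketch matching the `VOLUME_CLAIM_TEMPLATE` value used by the commands in this document):

```yaml
spec:
  volumeClaimTemplates:
    - metadata:
        name: buffer
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```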

**Note:** The commands below use `yq` in version `3.4.0` <= `x` < `4.0.0`.
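A quick sanity check for the `yq` version range can be sketched as follows; the version-string parsing is an assumption, and a sample string is hardcoded here so the check reads in isolation (substitute the real `yq --version` output):

```shell
# Check that the installed yq falls in the supported range 3.4.0 <= x < 4.0.0.
version_line="yq version 3.4.1"   # substitute: version_line=$(yq --version 2>&1)
ver=$(echo "$version_line" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+')
major=$(echo "$ver" | cut -d. -f1)
minor=$(echo "$ver" | cut -d. -f2)
if [ "$major" -eq 3 ] && [ "$minor" -ge 4 ]; then
  echo "yq ${ver}: compatible"
else
  echo "yq ${ver}: NOT compatible with these commands" >&2
fi
```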

## Enabling Fluentd persistence

To enable Fluentd persistence, modify the `values.yaml` file under the `fluentd` key as follows:

```yaml
fluentd:
  persistence:
    enabled: true
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner (gp2 on AWS, standard on
    ##   GKE, Azure & OpenStack)
    ##
    # storageClass: "-"
    # annotations: {}
    accessMode: ReadWriteOnce
    size: 10Gi
```

Use one of the following two strategies to prepare an existing collection for enabling Fluentd persistence:

- ### Enabling Fluentd persistence by recreating Fluentd StatefulSet

In heavily used clusters with a high volume of logs and metrics, recreating the Fluentd StatefulSet with a new `volumeClaimTemplate` may make logs and metrics unavailable while the recreation takes place. It usually shouldn't take more than several seconds.

To recreate the Fluentd StatefulSets with a new `volumeClaimTemplate`, run the following commands for each Fluentd StatefulSet.

Remember to adjust the `volumeClaimTemplate` (the `VOLUME_CLAIM_TEMPLATE` variable in the commands below), which will be added to `volumeClaimTemplates` in the StatefulSet `spec`, according to your needs; for details, see `PersistentVolumeClaim` in the Kubernetes API specification.

Also remember to replace the `NAMESPACE` and `RELEASE_NAME` variables with proper values.
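The three command blocks below differ only in the StatefulSet name suffix (`logs`, `metrics`, `events`). If you prefer, the repetition can be folded into a loop; a sketch, assuming the chart's `<release>-sumologic-fluentd-<component>` naming convention, with the kubectl/yq pipeline left commented out (it is shown in full below):

```shell
NAMESPACE=sumologic
RELEASE_NAME=collection
for COMPONENT in logs metrics events; do
  STS="${RELEASE_NAME}-sumologic-fluentd-${COMPONENT}"
  echo "patching ${STS} in namespace ${NAMESPACE}"
  # kubectl --namespace "${NAMESPACE}" get statefulset "${STS}" --output yaml | \
  #   yq w - "spec.volumeClaimTemplates[+]" --from <(echo "${VOLUME_CLAIM_TEMPLATE}") | \
  #   yq w - "spec.template.spec.containers[0].volumeMounts[+]" --from <(echo "${BUFFER_VOLUME}") | \
  #   kubectl apply --namespace "${NAMESPACE}" --force --filename -
done
```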

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
VOLUME_CLAIM_TEMPLATE=$(cat <<-"EOF"
metadata:
  name: buffer
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
) && \
BUFFER_VOLUME=$(cat <<-"EOF"
mountPath: /fluentd/buffer
name: buffer
EOF
) && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-logs --output yaml | \
yq w - "spec.volumeClaimTemplates[+]" --from <(echo "${VOLUME_CLAIM_TEMPLATE}") | \
yq w - "spec.template.spec.containers[0].volumeMounts[+]" --from <(echo "${BUFFER_VOLUME}") | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
VOLUME_CLAIM_TEMPLATE=$(cat <<-"EOF"
metadata:
  name: buffer
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
) && \
BUFFER_VOLUME=$(cat <<-"EOF"
mountPath: /fluentd/buffer
name: buffer
EOF
) && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-metrics --output yaml | \
yq w - "spec.volumeClaimTemplates[+]" --from <(echo "${VOLUME_CLAIM_TEMPLATE}") | \
yq w - "spec.template.spec.containers[0].volumeMounts[+]" --from <(echo "${BUFFER_VOLUME}") | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
VOLUME_CLAIM_TEMPLATE=$(cat <<-"EOF"
metadata:
  name: buffer
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
) && \
BUFFER_VOLUME=$(cat <<-"EOF"
mountPath: /fluentd/buffer
name: buffer
EOF
) && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-events --output yaml | \
yq w - "spec.volumeClaimTemplates[+]" --from <(echo "${VOLUME_CLAIM_TEMPLATE}") | \
yq w - "spec.template.spec.containers[0].volumeMounts[+]" --from <(echo "${BUFFER_VOLUME}") | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```

**Notice:** When StatefulSets managed by Helm are modified by the commands above, you may see a warning similar to this one:
`Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply`

Upgrade the collection with Fluentd persistence enabled, e.g.:

```bash
helm upgrade <RELEASE-NAME> sumologic/sumologic --version=<VERSION> -f <VALUES>
```

- ### Enabling Fluentd persistence by preparing temporary instances of Fluentd and removing the previously created ones

To create temporary instances of the Fluentd StatefulSets and avoid losing logs or metrics, run the following commands.

Remember to replace the `NAMESPACE` and `RELEASE_NAME` variables with proper values.

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-logs --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-logs | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-metrics --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-metrics | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-events --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-events | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```

Delete old instances of Fluentd StatefulSets:

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl wait --for=condition=ready pod \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=tmp" && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-events && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-logs && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-metrics
```

Upgrade the collection with Fluentd persistence enabled, e.g.:

```bash
helm upgrade <RELEASE-NAME> sumologic/sumologic --version=<VERSION> -f <VALUES>
```

**Notice:** After the Helm chart upgrade is done, remove the temporary Fluentd StatefulSets by running the following command:

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl wait --for=condition=ready pod \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=Helm" && \
kubectl delete statefulset \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=tmp"
```

## Disabling Fluentd persistence

To disable Fluentd persistence in an existing collection, modify the `values.yaml` file under the `fluentd` key as follows:

```yaml
fluentd:
  persistence:
    enabled: false
```

Use one of the following two strategies to prepare an existing collection for disabling Fluentd persistence:

- ### Disabling Fluentd persistence by recreating Fluentd StatefulSet

In heavily used clusters with a high volume of logs and metrics, recreating the Fluentd StatefulSet without the `volumeClaimTemplate` may make logs and metrics unavailable while the recreation takes place. It usually shouldn't take more than several seconds.

To recreate the Fluentd StatefulSets without the `volumeClaimTemplate`, run the following commands for each Fluentd StatefulSet.

Remember to replace the `NAMESPACE` and `RELEASE_NAME` variables with proper values.

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-logs --output yaml | \
yq d - "spec.template.spec.containers[*].volumeMounts(name==buffer)" | \
yq d - "spec.volumeClaimTemplates(metadata.name==buffer)" | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-metrics --output yaml | \
yq d - "spec.template.spec.containers[*].volumeMounts(name==buffer)" | \
yq d - "spec.volumeClaimTemplates(metadata.name==buffer)" | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-events --output yaml | \
yq d - "spec.template.spec.containers[*].volumeMounts(name==buffer)" | \
yq d - "spec.volumeClaimTemplates(metadata.name==buffer)" | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```

**Notice:** When StatefulSets managed by Helm are modified by the commands above, you may see a warning similar to this one:
`Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply`

Upgrade the collection with Fluentd persistence disabled, e.g.:

```bash
helm upgrade <RELEASE-NAME> sumologic/sumologic --version=<VERSION> -f <VALUES>
```

**Notice:** After the Helm chart upgrade is done, remove the remaining `PersistentVolumeClaims`, which are no longer used by the Fluentd StatefulSets.

To remove remaining `PersistentVolumeClaims`:

```bash
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-logs
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-metrics
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-events
```

- ### Disabling Fluentd persistence by preparing temporary instances of Fluentd and removing the previously created ones

To create temporary instances of the Fluentd StatefulSets and avoid losing logs or metrics, run the following commands.

Remember to replace the `NAMESPACE` and `RELEASE_NAME` variables with proper values.

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-logs --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-logs | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-metrics --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-metrics | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-events --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-events | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```

Delete old instances of Fluentd StatefulSets:

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl wait --for=condition=ready pod \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=tmp" && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-events && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-logs && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-metrics
```

Upgrade the collection with Fluentd persistence disabled, e.g.:

```bash
helm upgrade <RELEASE-NAME> sumologic/sumologic --version=<VERSION> -f <VALUES>
```

**Notice:** After the Helm chart upgrade is done, remove the temporary Fluentd StatefulSets and the remaining `PersistentVolumeClaims`, which are no longer used by the Fluentd StatefulSets.

To remove temporary Fluentd StatefulSets:

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl wait --for=condition=ready pod \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=Helm" && \
kubectl delete statefulset \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=tmp"
```

To remove remaining `PersistentVolumeClaims`:

```bash
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-logs
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-metrics
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-events
```