docs: mitigate logs duplication in fluentd (#2278)

Signed-off-by: Dominik Rosiek <drosiek@sumologic.com>

* Apply suggestions from code review

Co-authored-by: Mikołaj Świątek <mswiatek@sumologic.com>
sumo-drosiek and Mikołaj Świątek committed May 13, 2022
1 parent e9f2c2f commit 7c3957d
1 changed file: deploy/docs/Troubleshoot_Collection.md (33 additions, 0 deletions)

### Duplicated logs

We have observed that under certain conditions, Fluentd can duplicate logs:

- a single chunk is sent in several requests
- one of those requests fails, causing the whole chunk to be retried
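The failure mode above can be sketched outside Fluentd. This is an illustration only, not Fluentd code; the function and variable names are invented for the example:

```python
def send_chunk(chunk, request_size, receiver, fail_on=None):
    """Deliver `chunk` to `receiver` in slices of `request_size` records.

    If `fail_on` is set, raise on that request index to simulate a
    mid-chunk failure after some requests have already succeeded.
    """
    for i, start in enumerate(range(0, len(chunk), request_size)):
        if fail_on is not None and i == fail_on:
            raise IOError("request failed mid-chunk")
        receiver.extend(chunk[start:start + request_size])

chunk = [f"log-{n}" for n in range(6)]
received = []  # what the backend has accepted

try:
    # The third request fails, but the first two already delivered 4 records.
    send_chunk(chunk, request_size=2, receiver=received, fail_on=2)
except IOError:
    # Retrying the whole chunk re-sends the 4 records that already arrived.
    send_chunk(chunk, request_size=2, receiver=received)

duplicates = len(received) - len(set(received))  # 4 duplicated records
```

The backend cannot tell the re-sent records from new ones, which is why the retry must happen inside the output plugin, per request, rather than at the chunk level.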

To mitigate this, use [fluentd-output-sumologic] with the `use_internal_retry` option enabled.
See the following example:

```yaml
fluentd:
  logs:
    output:
      extraConf: |-
        use_internal_retry true
        retry_min_interval 5s
        retry_max_interval 10m
        retry_timeout 72h
        retry_max_times 0
        max_request_size 16m
  metrics:
    extraOutputConf: |-
      use_internal_retry true
      retry_min_interval 5s
      retry_max_interval 10m
      retry_timeout 72h
      retry_max_times 0
      max_request_size 16m
```
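To see how the retry settings above interact, here is a small sketch of the resulting retry schedule. It assumes exponential backoff with a base of 2 (Fluentd's default); the function name and structure are illustrative, not Fluentd internals:

```python
RETRY_MIN = 5              # retry_min_interval 5s
RETRY_MAX = 600            # retry_max_interval 10m
RETRY_TIMEOUT = 72 * 3600  # retry_timeout 72h

def retry_intervals():
    """List the waits between retries: doubling from RETRY_MIN,
    capped at RETRY_MAX, until the total reaches RETRY_TIMEOUT."""
    wait, elapsed, out = RETRY_MIN, 0, []
    while elapsed + wait <= RETRY_TIMEOUT:
        out.append(wait)
        elapsed += wait
        wait = min(wait * 2, RETRY_MAX)
    return out

waits = retry_intervals()
# Waits grow 5s, 10s, 20s, ... until capped at 10 minutes, then stay
# at 10 minutes until the 72-hour timeout is exhausted.
```

With `retry_max_times 0`, only `retry_timeout` bounds the retries, so a transient backend outage of up to 72 hours does not drop the chunk.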

[fluentd-output-sumologic]: https://github.com/SumoLogic/fluentd-output-sumologic
