The plugin is not retrying on specific errors and dropping the data on error 400 #133

Closed

sdwerwed opened this issue Apr 4, 2024 · 2 comments

sdwerwed commented Apr 4, 2024

(check all that apply)

  • read the contribution guideline
  • (optional) already reported 3rd party upstream repository or mailing list if you use k8s addon or helm charts.

Steps to replicate

We had a case where the maximum number of open shards had been reached in OpenSearch, so Fluentd was getting an error.

Error:

[warn]: #0 send an error event to @ERROR: error_class=Fluent::Plugin::OpenSearchErrorHandler::OpenSearchError error="400 - Rejected by OpenSearch [error type]: illegal_argument_exception [reason]: 'Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [2999]/[3000] maximum shards open;'"

The error itself is expected, but we did not expect to lose the data.
Once we increased the maximum number of open shards in OpenSearch, the old logs were never pushed.
It looks like Fluentd is not retrying on error 400 and is dropping the data.
We do not want to lose data because of a temporary misconfiguration in OpenSearch or because some limit has been reached.
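
From the log line above, the plugin sends the failed records to Fluentd's @ERROR stream instead of retrying them. As a stopgap, that stream can be caught with core Fluentd so the records are at least persisted to disk rather than dropped; a minimal sketch (the output path is illustrative):

# Catch events that the output plugin routes to the error stream
# and write them to disk instead of silently dropping them.
<label @ERROR>
  <match **>
    @type file
    path /opt/bitnami/fluentd/logs/error_records  # illustrative path
  </match>
</label>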

Configuration

<match **>
  @log_level info
  @type opensearch
  host "#{ENV['OPENSEARCH_URL']}"
  port 443
  user "#{ENV['OPENSEARCH_USERNAME']}"
  password "#{ENV['OPENSEARCH_FLUENTD_PASSWORD']}"
  include_timestamp true
  scheme https
  ssl_verify false
  ssl_version TLSv1_2

  id_key _hash
  index_date_pattern "now/d"
  target_index_key target_index
  index_name xxxxxx
  templates {"fluentd-logs-template": "/opt/bitnami/fluentd/conf/template.conf"}
  reload_connections false
  reconnect_on_error true
  reload_on_failure true
  log_os_400_reason true
  bulk_message_request_threshold 20m
  tag_key fluentd
  request_timeout 15s

  <buffer>
    @type file
    path /opt/bitnami/fluentd/logs/buffers/
    flush_thread_count 2
    flush_interval 10s
    chunk_limit_size 160m
    total_limit_size 58g
  </buffer>
</match>

Expected Behavior or What you need to ask

We expected the data to be kept in the buffer and retried until the flush succeeded, without losing anything. How can we achieve that when getting similar errors?
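
As far as we can tell, the buffer retry parameters only govern errors the output raises as retriable; a 400 appears to be treated as unrecoverable and bypasses them, which would explain why raising the shard limit never replayed the old logs. For reference, a sketch of the core retry knobs that do apply to retriable failures (values are illustrative):

<buffer>
  @type file
  path /opt/bitnami/fluentd/logs/buffers/
  flush_interval 10s
  # Core buffer retry setting: applies only when the output raises a
  # retriable error, not when records are sent to @ERROR (as with the 400 here).
  retry_forever true
</buffer>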

Using Fluentd and OpenSearch plugin versions

  • Ubuntu

  • Kubernetes

  • Fluentd
    fluentd 1.16.2

  • OpenSearch plugin version
    fluent-plugin-opensearch (1.1.4)
    opensearch-ruby (3.0.1)

  • OpenSearch version
v2.10.0

github-actions bot commented Apr 4, 2024

@sdwerwed this issue was automatically closed because it did not follow the issue template.

github-actions bot closed this as completed Apr 4, 2024
sdwerwed (Author) commented Apr 4, 2024

Created duplicate #134 since the bot does not reopen it
