From fluent-bit to es: [ warn] [engine] failed to flush chunk #5145

Closed
yangtian9999 opened this issue Mar 22, 2022 · 21 comments
Labels
waiting-for-user Waiting for more information, tests or requested changes

Comments

yangtian9999 commented Mar 22, 2022

Bug Report

Describe the bug

Pod fluent-bit-84pj9 keeps logging warnings like these:

[2022/03/22 03:48:51] [ warn] [engine] failed to flush chunk '1-1647920930.175942635.flb', retry in 11 seconds: task_id=735, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:48:51] [ warn] [engine] failed to flush chunk '1-1647920894.173241698.flb', retry in 58 seconds: task_id=700, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:46] [ warn] [engine] failed to flush chunk '1-1647920587.172892529.flb', retry in 92 seconds: task_id=394, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:47] [ warn] [engine] failed to flush chunk '1-1647920384.178898202.flb', retry in 181 seconds: task_id=190, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:47] [ warn] [engine] failed to flush chunk '1-1647920812.174022994.flb', retry in 746 seconds: task_id=619, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920205.172447077.flb', retry in 912 seconds: task_id=12, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920426.171646994.flb', retry in 632 seconds: task_id=233, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920802.180669296.flb', retry in 1160 seconds: task_id=608, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920969.178403746.flb', retry in 130 seconds: task_id=774, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920657.177210280.flb', retry in 1048 seconds: task_id=464, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:49] [ warn] [engine] failed to flush chunk '1-1647920670.171006292.flb', retry in 1657 seconds: task_id=477, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:49] [ warn] [engine] failed to flush chunk '1-1647920934.181870214.flb', retry in 786 seconds: task_id=739, input=tail.0 > output=es.0 (out_id=0)

To Reproduce

  • Rubular link if applicable:
  • Example log message if applicable:
{"log":"YOUR LOG MESSAGE HERE","stream":"stdout","time":"2018-06-11T14:37:30.681701731Z"}
  • Steps to reproduce the problem:
  1. Use Helm to install helm-charts-fluent-bit-0.19.19 (fluent/fluent-bit 1.8.12).

  2. Edit values.yaml and point both outputs at the ES IP (10.3.4.84 is the ES address):
    [OUTPUT]
        Name            es
        Match           kube.*
        Host            10.3.4.84
        Logstash_Format On
        Retry_Limit     False

    [OUTPUT]
        Name            es
        Match           host.*
        Host            10.3.4.84
        Logstash_Format On
        Logstash_Prefix node
        Retry_Limit     False

Keep the other settings in values.yaml at their defaults.

  3. Install helm-charts-fluent-bit-0.19.19 with the updated values.yaml.
  4. Wait and watch the fluent-bit pod logs.

Expected behavior

All logs are sent to ES and displayed in Kibana.

Screenshots

Your Environment

  • Version used: helm-charts-fluent-bit-0.19.19.

  • Configuration:
    [OUTPUT]
        Name            es
        Match           kube.*
        Host            10.3.4.84
        Logstash_Format On
        Retry_Limit     False

    [OUTPUT]
        Name            es
        Match           host.*
        Host            10.3.4.84
        Logstash_Format On
        Logstash_Prefix node
        Retry_Limit     False

  • Environment name and version (e.g. Kubernetes? What version?): k3s 1.19.8 with the docker-ce backend (20.10.12); Kibana 7.6.2; ES 7.6.2; fluent/fluent-bit 1.8.12

  • Server type and version:

  • Operating System and version: CentOS 7.9, kernel 5.4 LTS

  • Filters and plugins:
    Edited values.yaml to point the outputs at the ES IP (10.3.4.84); no TLS required for ES.
    [OUTPUT]
        Name            es
        Match           kube.*
        Host            10.3.4.84
        Logstash_Format On
        Retry_Limit     False

    [OUTPUT]
        Name            es
        Match           host.*
        Host            10.3.4.84
        Logstash_Format On
        Logstash_Prefix node
        Retry_Limit     False

Keep the other settings in values.yaml at their defaults.

Additional context

No new logs are sent to ES.
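Since ES is reachable but nothing new is indexed, a quick check directly against the cluster can rule out server-side write blocks. A hedged sketch (10.3.4.84 is the ES host from this report; both endpoints are standard Elasticsearch APIs):

    # check overall cluster health
    curl -s 'http://10.3.4.84:9200/_cluster/health?pretty'
    # list any index-level write blocks (e.g. read_only_allow_delete)
    curl -s 'http://10.3.4.84:9200/_all/_settings/index.blocks.*?pretty'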

dezhishen commented:

Docker image version: fluent/fluent-bit:1.9.0-debug. I'm seeing the same issue. After setting Trace_Error On, the error log shows:

 {"took":0,"errors":true,"items":[{"create":{"_index":"ks-logstash-log-2022.03.22","_type":"_doc","_id":"657cf183-0c49-91d4-e95d-ab2426b6367a","status":403,"error":{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}}},{"create":{"_index":"ks-logstash-log-2022.03.22","_type":"_doc","_id":"bc2be16b-ef3e-7ef3-01f6-305782f27a13","status":403,"error":{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}}}]}

The index ks-logstash-log-2022.03.22 already exists.

Is there any way to skip creating the index if it already exists?
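For reference, the 403 cluster_block_exception with FORBIDDEN/12/index read-only / allow delete is the block Elasticsearch applies when a node crosses its flood-stage disk watermark; it is not about index creation. After freeing disk space, the block can be cleared per index, roughly like this (localhost:9200 is a placeholder for the ES host):

    # lift the read-only-allow-delete block from the affected index
    curl -s -X PUT 'http://localhost:9200/ks-logstash-log-2022.03.22/_settings' \
      -H 'Content-Type: application/json' \
      -d '{"index.blocks.read_only_allow_delete": null}'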

dezhishen commented Mar 23, 2022

Set this option on the es output:

Write_Operation upsert

This works on fluent/fluent-bit:1.9.0.
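For context, a minimal sketch of where that option sits in the es output (host and match reused from this issue; per the out_es docs, Write_Operation update/upsert also needs a document id, so Generate_ID is included here as an assumption):

    [OUTPUT]
        Name            es
        Match           kube.*
        Host            10.3.4.84
        Logstash_Format On
        # upsert needs a document id; Generate_ID derives one from the record hash
        Generate_ID     On
        Write_Operation upsert
        Retry_Limit     False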

lecaros (Contributor) commented Mar 23, 2022

Hi @yangtian9999. Can you please enable debug log level and share the log? If you see network-related messages, this may be an issue we already fixed in 1.8.15. Otherwise, share steps to reproduce, including your config.
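For anyone reproducing this: with the classic config format the log level lives in the [SERVICE] section, roughly as below (the helm chart exposes it as a values.yaml key, logLevel in recent chart versions; treat the exact key name as an assumption):

    [SERVICE]
        Log_Level debug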

lecaros added the waiting-for-user label (Waiting for more information, tests or requested changes) and removed the status: waiting-for-triage label on Mar 23, 2022
yangtian9999 (Author) commented:

[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
(the line above repeats 22 times in total)
[2022/03/24 04:19:20] [debug] [input:tail:tail.0] [static files] processed 23.1K
[2022/03/24 04:19:21] [debug] [task] created task=0x7f7671e38540 id=0 OK
[2022/03/24 04:19:21] [debug] [task] created task=0x7f7671e38680 id=1 OK
[2022/03/24 04:19:21] [debug] [task] created task=0x7f7671e387c0 id=2 OK
[2022/03/24 04:19:21] [debug] [output:es:es.0] task_id=0 assigned to thread #0
[2022/03/24 04:19:21] [debug] [output:es:es.0] task_id=1 assigned to thread #1
[2022/03/24 04:19:21] [debug] [output:es:es.0] task_id=2 assigned to thread #0
[2022/03/24 04:19:21] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:19:21] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:19:21] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:19:22] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:19:22] [debug] [upstream] KA connection #103 to 10.3.4.84:9200 is now available
[2022/03/24 04:19:22] [debug] [out coro] cb_destroy coro_id=1
[2022/03/24 04:19:22] [debug] [retry] new retry created for task_id=2 attempts=1
[2022/03/24 04:19:22] [ warn] [engine] failed to flush chunk '1-1648095560.297175793.flb', retry in 7 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:19:24] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:19:24] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:19:24] [error] [output:es:es.0] could not pack/validate JSON response
{"took":2579,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"G-Mmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"HOMmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"HeMmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"HuMmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"H-Mmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"IOMmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"IeMmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"IuMmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"I-Mmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"JOMmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"JeMmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. 
Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"JuMmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"J-Mmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
[2022/03/24 04:19:24] [debug] [out coro] cb_destroy coro_id=0
[2022/03/24 04:19:24] [debug] [retry] new retry created for task_id=1 attempts=1
[2022/03/24 04:19:24] [ warn] [engine] failed to flush chunk '1-1648095560.254537600.flb', retry in 10 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:19:24] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:19:24] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:19:24] [error] [output:es:es.0] could not pack/validate JSON response
{"took":2433,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"zuMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"z-Mmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"0OMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"0eMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"0uMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"0-Mmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"1OMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"1eMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"1uMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"1-Mmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"2OMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. 
Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"2eMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"2uMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
[2022/03/24 04:19:24] [debug] [out coro] cb_destroy coro_id=0
[2022/03/24 04:19:24] [debug] [retry] new retry created for task_id=0 attempts=1
[2022/03/24 04:19:24] [ warn] [engine] failed to flush chunk '1-1648095560.205735907.flb', retry in 10 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:19:29] [debug] [output:es:es.0] task_id=2 assigned to thread #1
[2022/03/24 04:19:29] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:19:30] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:19:30] [debug] [upstream] KA connection #104 to 10.3.4.84:9200 is now available
[2022/03/24 04:19:30] [debug] [out coro] cb_destroy coro_id=1
[2022/03/24 04:19:30] [debug] [retry] re-using retry for task_id=2 attempts=2
[2022/03/24 04:19:30] [ warn] [engine] failed to flush chunk '1-1648095560.297175793.flb', retry in 19 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:19:34] [debug] [output:es:es.0] task_id=1 assigned to thread #0
[2022/03/24 04:19:34] [debug] [output:es:es.0] task_id=0 assigned to thread #1
[2022/03/24 04:19:34] [debug] [upstream] KA connection #103 to 10.3.4.84:9200 has been assigned (recycled)
[2022/03/24 04:19:34] [debug] [upstream] KA connection #104 to 10.3.4.84:9200 has been assigned (recycled)
[2022/03/24 04:19:34] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:19:34] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:19:38] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:19:38] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:19:38] [error] [output:es:es.0] could not pack/validate JSON response
{"took":3354,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"MeMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"MuMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"M-Mmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"NOMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"NeMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"NuMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"N-Mmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"OOMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"OeMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"OuMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"O-Mmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. 
Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"POMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"PeMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
[2022/03/24 04:19:38] [debug] [out coro] cb_destroy coro_id=2
[2022/03/24 04:19:38] [debug] [retry] re-using retry for task_id=0 attempts=2
[2022/03/24 04:19:38] [ warn] [engine] failed to flush chunk '1-1648095560.205735907.flb', retry in 14 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:19:38] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:19:38] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:19:38] [error] [output:es:es.0] could not pack/validate JSON response
{"took":3473,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"2-Mmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"3OMmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"3eMmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"3uMmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"3-Mmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"4OMmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"4eMmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"4uMmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"4-Mmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"5OMmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"5eMmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. 
Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"5uMmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"5-Mmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
[2022/03/24 04:19:38] [debug] [out coro] cb_destroy coro_id=2
[2022/03/24 04:19:38] [debug] [retry] re-using retry for task_id=1 attempts=2
[2022/03/24 04:19:38] [ warn] [engine] failed to flush chunk '1-1648095560.254537600.flb', retry in 9 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:19:47] [debug] [output:es:es.0] task_id=1 assigned to thread #0
[2022/03/24 04:19:47] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:19:49] [debug] [output:es:es.0] task_id=2 assigned to thread #1
[2022/03/24 04:19:49] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:19:49] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:19:49] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:19:49] [error] [output:es:es.0] could not pack/validate JSON response
{"took":2414,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"juMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"j-Mmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"kOMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"keMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"kuMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"k-Mmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"lOMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"leMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"luMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"l-Mmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"mOMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. 
Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"meMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"muMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
[2022/03/24 04:19:49] [debug] [out coro] cb_destroy coro_id=3
[2022/03/24 04:19:49] [debug] [retry] re-using retry for task_id=1 attempts=3
[2022/03/24 04:19:49] [ warn] [engine] failed to flush chunk '1-1648095560.254537600.flb', retry in 15 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:19:50] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:19:50] [debug] [upstream] KA connection #102 to 10.3.4.84:9200 is now available
[2022/03/24 04:19:50] [debug] [out coro] cb_destroy coro_id=3
[2022/03/24 04:19:50] [debug] [retry] re-using retry for task_id=2 attempts=3
[2022/03/24 04:19:50] [ warn] [engine] failed to flush chunk '1-1648095560.297175793.flb', retry in 9 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:19:52] [debug] [output:es:es.0] task_id=0 assigned to thread #0
[2022/03/24 04:19:52] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:19:54] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:19:54] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:19:54] [error] [output:es:es.0] could not pack/validate JSON response
{"took":2250,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"-uMmun8BI6SaBP9l_8nZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"--Mmun8BI6SaBP9l_8nZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"_OMmun8BI6SaBP9l_8nZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"_eMmun8BI6SaBP9l_8nZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"_uMmun8BI6SaBP9l_8nZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","id":"-Mmun8BI6SaBP9l_8nZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"AOMmun8BI6SaBP9l_8rZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"AeMmun8BI6SaBP9l_8rZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"AuMmun8BI6SaBP9l_8rZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"A-Mmun8BI6SaBP9l_8rZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"BOMmun8BI6SaBP9l_8rZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. 
Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"BeMmun8BI6SaBP9l_8rZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"BuMmun8BI6SaBP9l_8rZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
[2022/03/24 04:19:54] [debug] [out coro] cb_destroy coro_id=4
[2022/03/24 04:19:54] [debug] [retry] re-using retry for task_id=0 attempts=3
[2022/03/24 04:19:54] [ warn] [engine] failed to flush chunk '1-1648095560.205735907.flb', retry in 40 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:19:59] [debug] [output:es:es.0] task_id=2 assigned to thread #1
[2022/03/24 04:19:59] [debug] [upstream] KA connection #102 to 10.3.4.84:9200 has been assigned (recycled)
[2022/03/24 04:19:59] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:20:00] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:20:00] [debug] [upstream] KA connection #102 to 10.3.4.84:9200 is now available
[2022/03/24 04:20:00] [debug] [out coro] cb_destroy coro_id=4
[2022/03/24 04:20:00] [debug] [retry] re-using retry for task_id=2 attempts=4
[2022/03/24 04:20:00] [ warn] [engine] failed to flush chunk '1-1648095560.297175793.flb', retry in 25 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:20:04] [debug] [output:es:es.0] task_id=1 assigned to thread #0
[2022/03/24 04:20:04] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:20:06] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:20:06] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:20:06] [error] [output:es:es.0] could not pack/validate JSON response
{"took":2033,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"XeMnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"XuMnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"X-Mnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"YOMnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"YeMnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"YuMnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"Y-Mnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"ZOMnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"ZeMnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"ZuMnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"Z-Mnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. 
Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"aOMnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"aeMnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
[2022/03/24 04:20:06] [debug] [out coro] cb_destroy coro_id=5
[2022/03/24 04:20:06] [debug] [retry] re-using retry for task_id=1 attempts=4
[2022/03/24 04:20:06] [ warn] [engine] failed to flush chunk '1-1648095560.254537600.flb', retry in 60 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scanning path /var/log/containers/*.log
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/argo-server-6d7cf9c977-dlwnk_argo_argo-server-7e1ccfbd60b7539a1b2984f2f46de601d567ce83e87d434e173df195e44b5224.log, inode 101715266
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/coredns-66c464876b-4g64d_kube-system_coredns-3081b7d8e172858ec380f707cf6195c93c8b90b797b6475fe3ab21820386fc0d.log, inode 67178299
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/ffffhello-world-dcqbx_argo_main-4522cea91646c207c4aa9ad008d19d9620bc8c6a81ae6135922fb2d99ee834c7.log, inode 34598706
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/ffffhello-world-dcqbx_argo_wait-6b82c7411c8433b5e5f14c56f4b810dc3e25a2e7cfb9e9b107b9b1d50658f5e2.log, inode 67891711
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/fluent-bit-9hwpg_logging_fluent-bit-a7e85dd8e51db82db787e3386358a885ccff94c3411c8ba80a9a71598c01f387.log, inode 35353988
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] inode=35353618 with offset=0 appended as /var/log/containers/hello-world-dsxks_argo_main-3bba9f6587b663e2ec8fde9f40424e43ccf8783cf5eafafc64486d405304f470.log
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-dsxks_argo_main-3bba9f6587b663e2ec8fde9f40424e43ccf8783cf5eafafc64486d405304f470.log, inode 35353618
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] inode=1885019 with offset=0 appended as /var/log/containers/hello-world-dsxks_argo_wait-114879608f2fe019cd6cfce8e3777f9c0a4f34db2f6dc72bb39b2b5ceb917d4b.log
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-dsxks_argo_wait-114879608f2fe019cd6cfce8e3777f9c0a4f34db2f6dc72bb39b2b5ceb917d4b.log, inode 1885019
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] inode=35353617 with offset=0 appended as /var/log/containers/hello-world-g74nr_argo_main-11e24136e914d43a8ab97af02c091f0261ea8cee717937886f25501974359726.log
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-g74nr_argo_main-11e24136e914d43a8ab97af02c091f0261ea8cee717937886f25501974359726.log, inode 35353617
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] inode=1885001 with offset=0 appended as /var/log/containers/hello-world-g74nr_argo_wait-227a0fdb4663e03fecebe61f7b6bfb6fdd2867292cacfe692dc15d50a73f29ff.log
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-g74nr_argo_wait-227a0fdb4663e03fecebe61f7b6bfb6fdd2867292cacfe692dc15d50a73f29ff.log, inode 1885001
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/helm-install-traefik-j2ncv_kube-system_helm-4554d6945ad4a135678c69aae3fb44bf003479edc450b256421a51ce68a37c59.log, inode 622082
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/local-path-provisioner-7ff9579c6-mcwsb_kube-system_local-path-provisioner-47a630b5c79ea227664d87ae336d6a7b80fdce7028230c6031175099461cd221.log, inode 444123
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/metrics-server-7b4f8b595-v67pp_kube-system_metrics-server-e1e425c84b9462fb800c3655c86c1fd8320b98067c0f43305806cb81b7120b4c.log, inode 67182317
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/svclb-traefik-twmt7_kube-system_lb-port-443-ab3854479885ed2d0db7202276fdb1d2142db002b93c0c88d3d9383fc2d8068b.log, inode 34105877
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/svclb-traefik-twmt7_kube-system_lb-port-80-10ce439b02864f9075c8e41c716e394a6a6cda391ae753798cde988271ff35ef.log, inode 67186751
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/traefik-5dd496474-84cj4_kube-system_traefik-686ff216b0c3b70ad7c33ceddf441433ae1fbf9e01b3c57c59bab53e69304722.log, inode 34105409
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/workflow-controller-bb7c78c7b-w2n5c_argo_workflow-controller-7f4797ff53352e50ff21cf9625ec02ffb226172a2a3ed9b0cee0cb1d071a2990.log, inode 34598688
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] 4 new files found on path '/var/log/containers/*.log'
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-6lqzf_argo_main-5f73e32f330b82717357220ce404309cd9c3f62e1d75f241f74cbc3086597fa4.log
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] inode=103386716 removing file name /var/log/containers/hello-world-6lqzf_argo_main-5f73e32f330b82717357220ce404309cd9c3f62e1d75f241f74cbc3086597fa4.log
[2022/03/24 04:20:20] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=103386716 watch_fd=5
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-6lqzf_argo_wait-6939f915dcb1d1e0050739f656afcd8636884b83c4d26692024699930b263fad.log
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] inode=69179340 removing file name /var/log/containers/hello-world-6lqzf_argo_wait-6939f915dcb1d1e0050739f656afcd8636884b83c4d26692024699930b263fad.log
[2022/03/24 04:20:20] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=69179340 watch_fd=6
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-7mwzw_argo_main-4a2ecde2fd5310129cdf3e3c7eacc17fc1ae0eb6b5e88bed0fdf8fd7fd1100f4.log
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] inode=103386717 removing file name /var/log/containers/hello-world-7mwzw_argo_main-4a2ecde2fd5310129cdf3e3c7eacc17fc1ae0eb6b5e88bed0fdf8fd7fd1100f4.log
[2022/03/24 04:20:20] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=103386717 watch_fd=7
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-7mwzw_argo_wait-970c00b906c36cb89ed77fe3fa3cd1abc2702078fee737da0062d3b25680bf9c.log
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] inode=1756313 removing file name /var/log/containers/hello-world-7mwzw_argo_wait-970c00b906c36cb89ed77fe3fa3cd1abc2702078fee737da0062d3b25680bf9c.log
[2022/03/24 04:20:20] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=1756313 watch_fd=8
[2022/03/24 04:20:25] [debug] [output:es:es.0] task_id=2 assigned to thread #1
[2022/03/24 04:20:25] [debug] [upstream] KA connection #102 to 10.3.4.84:9200 has been assigned (recycled)
[2022/03/24 04:20:25] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:20:26] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:20:26] [debug] [upstream] KA connection #102 to 10.3.4.84:9200 is now available
[2022/03/24 04:20:26] [debug] [out coro] cb_destroy coro_id=5
[2022/03/24 04:20:26] [debug] [retry] re-using retry for task_id=2 attempts=5
[2022/03/24 04:20:26] [ warn] [engine] failed to flush chunk '1-1648095560.297175793.flb', retry in 161 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:20:34] [debug] [output:es:es.0] task_id=0 assigned to thread #0
[2022/03/24 04:20:34] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:20:36] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:20:36] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:20:36] [error] [output:es:es.0] could not pack/validate JSON response
{"took":2217,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"yeMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"yuMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"y-Mnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"zOMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"zeMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"zuMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"z-Mnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"0OMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"0eMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"0uMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"0-Mnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. 
Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"1OMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"1eMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
[2022/03/24 04:20:36] [debug] [out coro] cb_destroy coro_id=6
[2022/03/24 04:20:36] [debug] [retry] re-using retry for task_id=0 attempts=4
[2022/03/24 04:20:36] [ warn] [engine] failed to flush chunk '1-1648095560.205735907.flb', retry in 13 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:20:49] [debug] [output:es:es.0] task_id=0 assigned to thread #1
[2022/03/24 04:20:49] [debug] [upstream] KA connection #102 to 10.3.4.84:9200 has been assigned (recycled)
[2022/03/24 04:20:49] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:20:51] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:20:51] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:20:51] [error] [output:es:es.0] could not pack/validate JSON response
{"took":1935,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"c-Mnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"dOMnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"deMnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"duMnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"d-Mnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"eOMnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"eeMnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"euMnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"e-Mnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"fOMnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"feMnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. 
Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"fuMnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"f-Mnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
[2022/03/24 04:20:51] [debug] [out coro] cb_destroy coro_id=6
[2022/03/24 04:20:51] [debug] [retry] re-using retry for task_id=0 attempts=5
[2022/03/24 04:20:51] [ warn] [engine] failed to flush chunk '1-1648095560.205735907.flb', retry in 111 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:21:06] [debug] [output:es:es.0] task_id=1 assigned to thread #0
[2022/03/24 04:21:06] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:21:08] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:21:08] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:21:08] [error] [output:es:es.0] could not pack/validate JSON response
{"took":1923,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"HeMoun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"HuMoun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"H-Moun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"IOMoun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"IeMoun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"IuMoun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"I-Moun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"JOMoun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"JeMoun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"JuMoun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"J-Moun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. 
Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"KOMoun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"KeMoun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
[2022/03/24 04:21:08] [debug] [out coro] cb_destroy coro_id=7
[2022/03/24 04:21:08] [debug] [retry] re-using retry for task_id=1 attempts=5
[2022/03/24 04:21:08] [ warn] [engine] failed to flush chunk '1-1648095560.254537600.flb', retry in 108 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scanning path /var/log/containers/*.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/argo-server-6d7cf9c977-dlwnk_argo_argo-server-7e1ccfbd60b7539a1b2984f2f46de601d567ce83e87d434e173df195e44b5224.log, inode 101715266
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/coredns-66c464876b-4g64d_kube-system_coredns-3081b7d8e172858ec380f707cf6195c93c8b90b797b6475fe3ab21820386fc0d.log, inode 67178299
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/ffffhello-world-dcqbx_argo_main-4522cea91646c207c4aa9ad008d19d9620bc8c6a81ae6135922fb2d99ee834c7.log, inode 34598706
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/ffffhello-world-dcqbx_argo_wait-6b82c7411c8433b5e5f14c56f4b810dc3e25a2e7cfb9e9b107b9b1d50658f5e2.log, inode 67891711
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/fluent-bit-9hwpg_logging_fluent-bit-a7e85dd8e51db82db787e3386358a885ccff94c3411c8ba80a9a71598c01f387.log, inode 35353988
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=35326801 with offset=0 appended as /var/log/containers/hello-world-89knq_argo_main-f011b1f724e7c495af7d5b545d658efd4bff6ae88489a16581f492d744142807.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-89knq_argo_main-f011b1f724e7c495af7d5b545d658efd4bff6ae88489a16581f492d744142807.log, inode 35326801
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=1772851 with offset=0 appended as /var/log/containers/hello-world-89knq_argo_wait-a7f77229883282b7aebce253b8c371dd28e0606575ded307669b43b272d9a2f4.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-89knq_argo_wait-a7f77229883282b7aebce253b8c371dd28e0606575ded307669b43b272d9a2f4.log, inode 1772851
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=35326802 with offset=0 appended as /var/log/containers/hello-world-wpr5j_argo_main-55a61ed18250cc1e46ac98d918072e16dab1c6a73f7f9cf0a5dd096959cf6964.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-wpr5j_argo_main-55a61ed18250cc1e46ac98d918072e16dab1c6a73f7f9cf0a5dd096959cf6964.log, inode 35326802
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=1772861 with offset=0 appended as /var/log/containers/hello-world-wpr5j_argo_wait-76bcd0771f3cc7b5f6b5f15f16ee01cc0c671fb047b93910271bc73e753e26ee.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-wpr5j_argo_wait-76bcd0771f3cc7b5f6b5f15f16ee01cc0c671fb047b93910271bc73e753e26ee.log, inode 1772861
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/helm-install-traefik-j2ncv_kube-system_helm-4554d6945ad4a135678c69aae3fb44bf003479edc450b256421a51ce68a37c59.log, inode 622082
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/local-path-provisioner-7ff9579c6-mcwsb_kube-system_local-path-provisioner-47a630b5c79ea227664d87ae336d6a7b80fdce7028230c6031175099461cd221.log, inode 444123
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/metrics-server-7b4f8b595-v67pp_kube-system_metrics-server-e1e425c84b9462fb800c3655c86c1fd8320b98067c0f43305806cb81b7120b4c.log, inode 67182317
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/svclb-traefik-twmt7_kube-system_lb-port-443-ab3854479885ed2d0db7202276fdb1d2142db002b93c0c88d3d9383fc2d8068b.log, inode 34105877
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/svclb-traefik-twmt7_kube-system_lb-port-80-10ce439b02864f9075c8e41c716e394a6a6cda391ae753798cde988271ff35ef.log, inode 67186751
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/traefik-5dd496474-84cj4_kube-system_traefik-686ff216b0c3b70ad7c33ceddf441433ae1fbf9e01b3c57c59bab53e69304722.log, inode 34105409
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/workflow-controller-bb7c78c7b-w2n5c_argo_workflow-controller-7f4797ff53352e50ff21cf9625ec02ffb226172a2a3ed9b0cee0cb1d071a2990.log, inode 34598688
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] 4 new files found on path '/var/log/containers/*.log'
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-dsxks_argo_main-3bba9f6587b663e2ec8fde9f40424e43ccf8783cf5eafafc64486d405304f470.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=35353618 removing file name /var/log/containers/hello-world-dsxks_argo_main-3bba9f6587b663e2ec8fde9f40424e43ccf8783cf5eafafc64486d405304f470.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-dsxks_argo_wait-114879608f2fe019cd6cfce8e3777f9c0a4f34db2f6dc72bb39b2b5ceb917d4b.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=1885019 removing file name /var/log/containers/hello-world-dsxks_argo_wait-114879608f2fe019cd6cfce8e3777f9c0a4f34db2f6dc72bb39b2b5ceb917d4b.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-g74nr_argo_main-11e24136e914d43a8ab97af02c091f0261ea8cee717937886f25501974359726.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=35353617 removing file name /var/log/containers/hello-world-g74nr_argo_main-11e24136e914d43a8ab97af02c091f0261ea8cee717937886f25501974359726.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-g74nr_argo_wait-227a0fdb4663e03fecebe61f7b6bfb6fdd2867292cacfe692dc15d50a73f29ff.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=1885001 removing file name /var/log/containers/hello-world-g74nr_argo_wait-227a0fdb4663e03fecebe61f7b6bfb6fdd2867292cacfe692dc15d50a73f29ff.log

@yangtian9999

Tried 1.9.0, 1.8.15, and 1.8.12; all got the same error.

@yangtian9999

@dezhishen I set Write_Operation upsert, but then the pod errored and Fluent Bit did not start normally. This error happened on 1.8.12, 1.8.15, and 1.9.0.
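For what it's worth, the es output documents that update and upsert operations need a document id, via Id_Key or Generate_ID; without one, Write_Operation upsert is rejected at startup, which would match the pod failing to come up. A hedged sketch (Host taken from this issue, the rest illustrative):

[OUTPUT]
Name es
Match kube.*
Host 10.3.4.84
Logstash_Format On
Write_Operation upsert
# update/upsert need an _id; Generate_ID derives one from the record,
# which also avoids duplicate documents when a chunk is retried.
Generate_ID On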


lecaros commented Mar 24, 2022

It seems that Elasticsearch is rejecting fields with dots in their names: the label app.kubernetes.io/instance gets expanded into an object path under kubernetes.labels, which conflicts with the existing [kubernetes.labels.app] mapping.

[2022/03/24 04:19:24] [error] [output:es:es.0] could not pack/validate JSON response
{"took":2579,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"G-Mmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"HOMmun8BI6SaBP9lh4vZ","status":400,"error":

Try setting Replace_Dots to On.
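For context, the 400s in the responses above are field-mapping conflicts rather than index-name problems: Elasticsearch expands the dotted label app.kubernetes.io/instance into nested objects under kubernetes.labels, which collides with the existing text mapping of kubernetes.labels.app. With Replace_Dots enabled the dots become underscores, so the two labels no longer share a path. A minimal sketch against the output from this issue:

[OUTPUT]
Name es
Match kube.*
Host 10.3.4.84
Logstash_Format On
# app.kubernetes.io/instance becomes app_kubernetes_io/instance,
# so it no longer conflicts with the text mapping of labels.app.
Replace_Dots On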

@yangtian9999

[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69336502 file has been deleted: /var/log/containers/hello-world-bjfnf_argo_wait-8f0faa126a1c36d4e0d76e1dc75485a39ecc2d43a4efc46ae7306f4b86ea9964.log

Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69336502 removing file name /var/log/containers/hello-world-bjfnf_argo_wait-8f0faa126a1c36d4e0d76e1dc75485a39ecc2d43a4efc46ae7306f4b86ea9964.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=69336502 watch_fd=12
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [out coro] cb_destroy coro_id=1
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [retry] new retry created for task_id=3 attempts=1
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ warn] [engine] failed to flush chunk '1-1648192100.653122953.flb', retry in 11 seconds: task_id=3, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69464185 events: IN_ATTRIB
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69464185 file has been deleted: /var/log/containers/hello-world-ctlp5_argo_main-276b9a264b409e931e48ca768d7a3f304b89c6673be86a8cc1e957538e9dd7ce.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69464185 removing file name /var/log/containers/hello-world-ctlp5_argo_main-276b9a264b409e931e48ca768d7a3f304b89c6673be86a8cc1e957538e9dd7ce.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=69464185 watch_fd=13
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104048677 events: IN_ATTRIB
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104048677 file has been deleted: /var/log/containers/hello-world-hxn5d_argo_main-ce2dea5b2661227ee3931c554317a97e7b958b46d79031f1c48b840cd10b3d78.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104048677 removing file name /var/log/containers/hello-world-hxn5d_argo_main-ce2dea5b2661227ee3931c554317a97e7b958b46d79031f1c48b840cd10b3d78.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=104048677 watch_fd=17
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=1931990 events: IN_ATTRIB
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=1931990 file has been deleted: /var/log/containers/hello-world-swxx6_argo_main-8738378bea8bd6d3dfd18bf8ef2c5a5687c900539317432114c7472eff9e63c2.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=1931990 removing file name /var/log/containers/hello-world-swxx6_argo_main-8738378bea8bd6d3dfd18bf8ef2c5a5687c900539317432114c7472eff9e63c2.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=1931990 watch_fd=19
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=34055641 events: IN_ATTRIB
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=34055641 file has been deleted: /var/log/containers/hello-world-bjfnf_argo_main-0b26876c79c5790bdaf62ba2d9512269459746b1c5711a6445256dc5a4930b65.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=34055641 removing file name /var/log/containers/hello-world-bjfnf_argo_main-0b26876c79c5790bdaf62ba2d9512269459746b1c5711a6445256dc5a4930b65.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=34055641 watch_fd=11
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=3070975 events: IN_ATTRIB
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=3070975 file has been deleted: /var/log/containers/hello-world-hxn5d_argo_wait-be32f13608de76af5bd4616dc826eebc306fb25eeb340049de8d3b8e5d40ba4b.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=3070975 removing file name /var/log/containers/hello-world-hxn5d_argo_wait-be32f13608de76af5bd4616dc826eebc306fb25eeb340049de8d3b8e5d40ba4b.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=3070975 watch_fd=18
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104226845 events: IN_ATTRIB
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104226845 file has been deleted: /var/log/containers/hello-world-dsfcz_argo_wait-3a9bd9a90cc08322e96d0b7bcc9b6aeffd7e5e6a71754073ca1092db862fcfb7.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104226845 removing file name /var/log/containers/hello-world-dsfcz_argo_wait-3a9bd9a90cc08322e96d0b7bcc9b6aeffd7e5e6a71754073ca1092db862fcfb7.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=104226845 watch_fd=16
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=3076476 events: IN_ATTRIB
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=3076476 file has been deleted: /var/log/containers/hello-world-89skv_argo_wait-5d919c301d4709b0304c6c65a8389aac10f30b8617bd935a9680a84e1873542b.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=3076476 removing file name /var/log/containers/hello-world-89skv_argo_wait-5d919c301d4709b0304c6c65a8389aac10f30b8617bd935a9680a84e1873542b.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=3076476 watch_fd=10
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104048905 events: IN_ATTRIB
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104048905 file has been deleted: /var/log/containers/hello-world-ctlp5_argo_wait-f817c7cb9f30a0ba99fb3976757b495771f6d8f23e1ae5474ef191a309db70fc.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104048905 removing file name /var/log/containers/hello-world-ctlp5_argo_wait-f817c7cb9f30a0ba99fb3976757b495771f6d8f23e1ae5474ef191a309db70fc.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=104048905 watch_fd=14
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=35359369 events: IN_ATTRIB
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=35359369 file has been deleted: /var/log/containers/hello-world-swxx6_argo_wait-dc29bc4a400f91f349d4efd144f2a57728ea02b3c2cd527fcd268e3147e9af7d.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=35359369 removing file name /var/log/containers/hello-world-swxx6_argo_wait-dc29bc4a400f91f349d4efd144f2a57728ea02b3c2cd527fcd268e3147e9af7d.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=35359369 watch_fd=20
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69479190 events: IN_ATTRIB
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69479190 file has been deleted: /var/log/containers/hello-world-dsfcz_argo_main-13bb1b2c7e9d3e70003814aa3900bb9aef645cf5e3270e3ee4db0988240b9eff.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69479190 removing file name /var/log/containers/hello-world-dsfcz_argo_main-13bb1b2c7e9d3e70003814aa3900bb9aef645cf5e3270e3ee4db0988240b9eff.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=69479190 watch_fd=15
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104051102 events: IN_ATTRIB
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104051102 file has been deleted: /var/log/containers/hello-world-89skv_argo_main-41261a71eea53f67b43c6e1b643d273e59fade2d8d16ee9f4d70e01766e5cc1d.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104051102 removing file name /var/log/containers/hello-world-89skv_argo_main-41261a71eea53f67b43c6e1b643d273e59fade2d8d16ee9f4d70e01766e5cc1d.log
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=104051102 watch_fd=9
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input chunk] update output instances with new chunk size diff=633
Fri, Mar 25 2022 3:08:22 pm | [2022/03/25 07:08:22] [debug] [task] created task=0x7ff2f1839760 id=4 OK
Fri, Mar 25 2022 3:08:22 pm | [2022/03/25 07:08:22] [debug] [output:es:es.0] task_id=4 assigned to thread #0
Fri, Mar 25 2022 3:08:22 pm | [2022/03/25 07:08:22] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:22 pm | [2022/03/25 07:08:22] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:22 pm | [2022/03/25 07:08:22] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:22 pm | [2022/03/25 07:08:22] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:22 pm | [2022/03/25 07:08:22] [debug] [out coro] cb_destroy coro_id=2
Fri, Mar 25 2022 3:08:22 pm | [2022/03/25 07:08:22] [debug] [retry] new retry created for task_id=4 attempts=1
Fri, Mar 25 2022 3:08:22 pm | [2022/03/25 07:08:22] [ warn] [engine] failed to flush chunk '1-1648192101.677940929.flb', retry in 9 seconds: task_id=4, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:22] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:22] [debug] [input chunk] update output instances with new chunk size diff=641
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:23] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:23] [debug] [input chunk] update output instances with new chunk size diff=634
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:23] [debug] [task] created task=0x7ff2f1839940 id=5 OK
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:23] [debug] [output:es:es.0] task_id=5 assigned to thread #1
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:23] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:23] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:23] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:23] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:23] [debug] [out coro] cb_destroy coro_id=2
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:23] [debug] [retry] new retry created for task_id=5 attempts=1
Fri, Mar 25 2022 3:08:23 pm | [2022/03/25 07:08:23] [ warn] [engine] failed to flush chunk '1-1648192103.858183.flb', retry in 7 seconds: task_id=5, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [output:es:es.0] task_id=2 assigned to thread #0
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [output:es:es.0] task_id=0 assigned to thread #1
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [out coro] cb_destroy coro_id=3
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [retry] re-using retry for task_id=2 attempts=2
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [ warn] [engine] failed to flush chunk '1-1648192099.641327100.flb', retry in 9 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [out coro] cb_destroy coro_id=3
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [retry] re-using retry for task_id=0 attempts=2
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [ warn] [engine] failed to flush chunk '1-1648192097.600252923.flb', retry in 14 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [input chunk] update output instances with new chunk size diff=649
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [input chunk] update output instances with new chunk size diff=656
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [input chunk] update output instances with new chunk size diff=862
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [input chunk] update output instances with new chunk size diff=681
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:27 pm | [2022/03/25 07:08:27] [debug] [input chunk] update output instances with new chunk size diff=695
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [task] created task=0x7ff2f1839b20 id=6 OK
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [output:es:es.0] task_id=6 assigned to thread #0
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [output:es:es.0] task_id=1 assigned to thread #1
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [out coro] cb_destroy coro_id=4
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [retry] new retry created for task_id=6 attempts=1
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [ warn] [engine] failed to flush chunk '1-1648192107.811048259.flb', retry in 11 seconds: task_id=6, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [out coro] cb_destroy coro_id=4
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [retry] re-using retry for task_id=1 attempts=2
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [ warn] [engine] failed to flush chunk '1-1648192098.623024610.flb', retry in 11 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [input chunk] update output instances with new chunk size diff=650
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [input chunk] update output instances with new chunk size diff=1083
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:28 pm | [2022/03/25 07:08:28] [debug] [input chunk] update output instances with new chunk size diff=695
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [task] created task=0x7ff2f1839d00 id=7 OK
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [output:es:es.0] task_id=7 assigned to thread #0
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [out coro] cb_destroy coro_id=5
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [retry] new retry created for task_id=7 attempts=1
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [ warn] [engine] failed to flush chunk '1-1648192108.829100670.flb', retry in 8 seconds: task_id=7, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [input chunk] update output instances with new chunk size diff=650
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [input chunk] update output instances with new chunk size diff=1085
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [input chunk] update output instances with new chunk size diff=1182
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [input chunk] update output instances with new chunk size diff=695
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [output:es:es.0] task_id=5 assigned to thread #1
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [task] created task=0x7ff2f1839ee0 id=8 OK
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [output:es:es.0] task_id=8 assigned to thread #0
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [out coro] cb_destroy coro_id=5
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [retry] re-using retry for task_id=5 attempts=2
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [ warn] [engine] failed to flush chunk '1-1648192103.858183.flb', retry in 18 seconds: task_id=5, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [out coro] cb_destroy coro_id=6
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [retry] new retry created for task_id=8 attempts=1
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [ warn] [engine] failed to flush chunk '1-1648192109.839317289.flb', retry in 8 seconds: task_id=8, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input chunk] update output instances with new chunk size diff=650
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input chunk] update output instances with new chunk size diff=1167
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input chunk] update output instances with new chunk size diff=665
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input chunk] update output instances with new chunk size diff=657
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input chunk] update output instances with new chunk size diff=661
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input chunk] update output instances with new chunk size diff=694
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input chunk] update output instances with new chunk size diff=656
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input chunk] update output instances with new chunk size diff=697
Fri, Mar 25 2022 3:08:30 pm | [2022/03/25 07:08:30] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [output:es:es.0] task_id=4 assigned to thread #1
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [task] created task=0x7ff2f183a0c0 id=9 OK
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [output:es:es.0] task_id=9 assigned to thread #0
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [out coro] cb_destroy coro_id=6
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [retry] re-using retry for task_id=4 attempts=2
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [ warn] [engine] failed to flush chunk '1-1648192101.677940929.flb', retry in 21 seconds: task_id=4, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [out coro] cb_destroy coro_id=7
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [retry] new retry created for task_id=9 attempts=1
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [ warn] [engine] failed to flush chunk '1-1648192110.850147571.flb', retry in 9 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [input chunk] update output instances with new chunk size diff=633
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [task] created task=0x7ff2f183a2a0 id=10 OK
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [output:es:es.0] task_id=10 assigned to thread #1
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [output:es:es.0] task_id=3 assigned to thread #0
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [out coro] cb_destroy coro_id=7
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [retry] new retry created for task_id=10 attempts=1
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [ warn] [engine] failed to flush chunk '1-1648192111.878474491.flb', retry in 9 seconds: task_id=10, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [out coro] cb_destroy coro_id=8
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [debug] [retry] re-using retry for task_id=3 attempts=2
Fri, Mar 25 2022 3:08:32 pm | [2022/03/25 07:08:32] [ warn] [engine] failed to flush chunk '1-1648192100.653122953.flb', retry in 17 seconds: task_id=3, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [input chunk] update output instances with new chunk size diff=640
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [input chunk] update output instances with new chunk size diff=634
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [task] created task=0x7ff2f183a480 id=11 OK
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [output:es:es.0] task_id=11 assigned to thread #1
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [out coro] cb_destroy coro_id=8
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [debug] [retry] new retry created for task_id=11 attempts=1
Fri, Mar 25 2022 3:08:33 pm | [2022/03/25 07:08:33] [ warn] [engine] failed to flush chunk '1-1648192113.5409018.flb', retry in 8 seconds: task_id=11, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:36 pm | [2022/03/25 07:08:36] [debug] [output:es:es.0] task_id=2 assigned to thread #0
Fri, Mar 25 2022 3:08:36 pm | [2022/03/25 07:08:36] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:36 pm | [2022/03/25 07:08:36] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:36 pm | [2022/03/25 07:08:36] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:36 pm | [2022/03/25 07:08:36] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:36 pm | [2022/03/25 07:08:36] [debug] [out coro] cb_destroy coro_id=9
Fri, Mar 25 2022 3:08:36 pm | [2022/03/25 07:08:36] [debug] [retry] re-using retry for task_id=2 attempts=3
Fri, Mar 25 2022 3:08:36 pm | [2022/03/25 07:08:36] [ warn] [engine] failed to flush chunk '1-1648192099.641327100.flb', retry in 11 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:37 pm | [2022/03/25 07:08:37] [debug] [output:es:es.0] task_id=7 assigned to thread #1
Fri, Mar 25 2022 3:08:37 pm | [2022/03/25 07:08:37] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:37 pm | [2022/03/25 07:08:37] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:37 pm | [2022/03/25 07:08:37] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:37 pm | [2022/03/25 07:08:37] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:37 pm | [2022/03/25 07:08:37] [debug] [out coro] cb_destroy coro_id=9
Fri, Mar 25 2022 3:08:37 pm | [2022/03/25 07:08:37] [debug] [retry] re-using retry for task_id=7 attempts=2
Fri, Mar 25 2022 3:08:37 pm | [2022/03/25 07:08:37] [ warn] [engine] failed to flush chunk '1-1648192108.829100670.flb', retry in 16 seconds: task_id=7, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [input chunk] update output instances with new chunk size diff=650
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [input chunk] update output instances with new chunk size diff=656
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [input chunk] update output instances with new chunk size diff=862
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [input chunk] update output instances with new chunk size diff=681
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [input chunk] update output instances with new chunk size diff=695
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [output:es:es.0] task_id=8 assigned to thread #0
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [task] created task=0x7ff2f183a660 id=12 OK
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [output:es:es.0] task_id=12 assigned to thread #1
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [out coro] cb_destroy coro_id=10
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [retry] re-using retry for task_id=8 attempts=2
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [ warn] [engine] failed to flush chunk '1-1648192109.839317289.flb', retry in 16 seconds: task_id=8, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [out coro] cb_destroy coro_id=10
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [retry] new retry created for task_id=12 attempts=1
Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [ warn] [engine] failed to flush chunk '1-1648192118.5008496.flb', retry in 8 seconds: task_id=12, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [input chunk] update output instances with new chunk size diff=650
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [input chunk] update output instances with new chunk size diff=1083
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [input chunk] update output instances with new chunk size diff=695
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [task] created task=0x7ff2f183a840 id=13 OK
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [output:es:es.0] task_id=13 assigned to thread #0
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [output:es:es.0] task_id=6 assigned to thread #1
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [output:es:es.0] task_id=1 assigned to thread #0
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [out coro] cb_destroy coro_id=11
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [retry] new retry created for task_id=13 attempts=1
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [ warn] [engine] failed to flush chunk '1-1648192119.62045721.flb', retry in 11 seconds: task_id=13, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [out coro] cb_destroy coro_id=12
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [retry] re-using retry for task_id=1 attempts=3
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [ warn] [engine] failed to flush chunk '1-1648192098.623024610.flb', retry in 16 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [out coro] cb_destroy coro_id=11
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [debug] [retry] re-using retry for task_id=6 attempts=2
Fri, Mar 25 2022 3:08:39 pm | [2022/03/25 07:08:39] [ warn] [engine] failed to flush chunk '1-1648192107.811048259.flb', retry in 20 seconds: task_id=6, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [input chunk] update output instances with new chunk size diff=650
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [input chunk] update output instances with new chunk size diff=1085
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [input chunk] update output instances with new chunk size diff=1182
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [input chunk] update output instances with new chunk size diff=695
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [output:es:es.0] task_id=9 assigned to thread #1
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [task] created task=0x7ff2f183aa20 id=14 OK
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [output:es:es.0] task_id=14 assigned to thread #0
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [out coro] cb_destroy coro_id=13
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [retry] new retry created for task_id=14 attempts=1
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [ warn] [engine] failed to flush chunk '1-1648192120.74298017.flb', retry in 10 seconds: task_id=14, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [out coro] cb_destroy coro_id=12
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [retry] re-using retry for task_id=9 attempts=2
Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [ warn] [engine] failed to flush chunk '1-1648192110.850147571.flb', retry in 11 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input chunk] update output instances with new chunk size diff=650
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input chunk] update output instances with new chunk size diff=1167
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input chunk] update output instances with new chunk size diff=665
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input chunk] update output instances with new chunk size diff=657
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input chunk] update output instances with new chunk size diff=661
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input chunk] update output instances with new chunk size diff=694
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input chunk] update output instances with new chunk size diff=656
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input chunk] update output instances with new chunk size diff=697
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [output:es:es.0] task_id=0 assigned to thread #1
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [output:es:es.0] task_id=10 assigned to thread #0
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [output:es:es.0] task_id=11 assigned to thread #1
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [task] created task=0x7ff2f183ac00 id=15 OK
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [output:es:es.0] task_id=15 assigned to thread #0
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [out coro] cb_destroy coro_id=14
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [retry] re-using retry for task_id=10 attempts=2
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [ warn] [engine] failed to flush chunk '1-1648192111.878474491.flb', retry in 14 seconds: task_id=10, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [upstream] KA connection #35 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [out coro] cb_destroy coro_id=14
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [retry] re-using retry for task_id=11 attempts=2
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [ warn] [engine] failed to flush chunk '1-1648192113.5409018.flb', retry in 7 seconds: task_id=11, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [out coro] cb_destroy coro_id=13
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [retry] re-using retry for task_id=0 attempts=3
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [ warn] [engine] failed to flush chunk '1-1648192097.600252923.flb', retry in 26 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [out coro] cb_destroy coro_id=15
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [retry] new retry created for task_id=15 attempts=1
Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [ warn] [engine] failed to flush chunk '1-1648192121.87279162.flb', retry in 10 seconds: task_id=15, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:42 pm | [2022/03/25 07:08:42] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:42 pm | [2022/03/25 07:08:42] [debug] [input chunk] update output instances with new chunk size diff=632
Fri, Mar 25 2022 3:08:42 pm | [2022/03/25 07:08:42] [debug] [task] created task=0x7ff2f183ade0 id=16 OK
Fri, Mar 25 2022 3:08:42 pm | [2022/03/25 07:08:42] [debug] [output:es:es.0] task_id=16 assigned to thread #1
Fri, Mar 25 2022 3:08:42 pm | [2022/03/25 07:08:42] [debug] [upstream] KA connection #35 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:42 pm | [2022/03/25 07:08:42] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:42 pm | [2022/03/25 07:08:42] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:42 pm | [2022/03/25 07:08:42] [debug] [upstream] KA connection #35 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:42 pm | [2022/03/25 07:08:42] [debug] [out coro] cb_destroy coro_id=15
Fri, Mar 25 2022 3:08:42 pm | [2022/03/25 07:08:42] [debug] [retry] new retry created for task_id=16 attempts=1
Fri, Mar 25 2022 3:08:42 pm | [2022/03/25 07:08:42] [ warn] [engine] failed to flush chunk '1-1648192122.113977737.flb', retry in 7 seconds: task_id=16, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:43] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:43] [debug] [input chunk] update output instances with new chunk size diff=641
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:44] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:44] [debug] [input chunk] update output instances with new chunk size diff=634
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:44] [debug] [task] created task=0x7ff2f183afc0 id=17 OK
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:44] [debug] [output:es:es.0] task_id=17 assigned to thread #0
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:44] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:44] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:44] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:44] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:44] [debug] [out coro] cb_destroy coro_id=16
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:44] [debug] [retry] new retry created for task_id=17 attempts=1
Fri, Mar 25 2022 3:08:44 pm | [2022/03/25 07:08:44] [ warn] [engine] failed to flush chunk '1-1648192124.833819.flb', retry in 10 seconds: task_id=17, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:46 pm | [2022/03/25 07:08:46] [debug] [output:es:es.0] task_id=12 assigned to thread #1
Fri, Mar 25 2022 3:08:46 pm | [2022/03/25 07:08:46] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:46 pm | [2022/03/25 07:08:46] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:46 pm | [2022/03/25 07:08:46] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:46 pm | [2022/03/25 07:08:46] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:46 pm | [2022/03/25 07:08:46] [debug] [out coro] cb_destroy coro_id=16
Fri, Mar 25 2022 3:08:46 pm | [2022/03/25 07:08:46] [debug] [retry] re-using retry for task_id=12 attempts=2
Fri, Mar 25 2022 3:08:46 pm | [2022/03/25 07:08:46] [ warn] [engine] failed to flush chunk '1-1648192118.5008496.flb', retry in 21 seconds: task_id=12, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:47 pm | [2022/03/25 07:08:47] [debug] [output:es:es.0] task_id=2 assigned to thread #0
Fri, Mar 25 2022 3:08:47 pm | [2022/03/25 07:08:47] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:47 pm | [2022/03/25 07:08:47] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:47 pm | [2022/03/25 07:08:47] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:47 pm | [2022/03/25 07:08:47] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:47 pm | [2022/03/25 07:08:47] [debug] [out coro] cb_destroy coro_id=17
Fri, Mar 25 2022 3:08:47 pm | [2022/03/25 07:08:47] [debug] [retry] re-using retry for task_id=2 attempts=4
Fri, Mar 25 2022 3:08:47 pm | [2022/03/25 07:08:47] [ warn] [engine] failed to flush chunk '1-1648192099.641327100.flb', retry in 60 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [input chunk] update output instances with new chunk size diff=650
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [input chunk] update output instances with new chunk size diff=656
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [input chunk] update output instances with new chunk size diff=862
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [input chunk] update output instances with new chunk size diff=681
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [input chunk] update output instances with new chunk size diff=695
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [task] created task=0x7ff2f183b1a0 id=18 OK
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [output:es:es.0] task_id=18 assigned to thread #1
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [output:es:es.0] task_id=5 assigned to thread #0
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [upstream] KA connection #35 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [output:es:es.0] task_id=11 assigned to thread #1
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [out coro] cb_destroy coro_id=18
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [out coro] cb_destroy coro_id=18
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [retry] re-using retry for task_id=11 attempts=3
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [ warn] [engine] failed to flush chunk '1-1648192113.5409018.flb', retry in 15 seconds: task_id=11, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [retry] re-using retry for task_id=5 attempts=3
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [ warn] [engine] failed to flush chunk '1-1648192103.858183.flb', retry in 30 seconds: task_id=5, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [upstream] KA connection #35 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [out coro] cb_destroy coro_id=17
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [debug] [retry] new retry created for task_id=18 attempts=1
Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [ warn] [engine] failed to flush chunk '1-1648192128.185362391.flb', retry in 10 seconds: task_id=18, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [input chunk] update output instances with new chunk size diff=650
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [input chunk] update output instances with new chunk size diff=1083
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [input chunk] update output instances with new chunk size diff=695
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [output:es:es.0] task_id=16 assigned to thread #0
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [output:es:es.0] task_id=3 assigned to thread #1
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [task] created task=0x7ff2f183b380 id=19 OK
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [output:es:es.0] task_id=19 assigned to thread #0
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [out coro] cb_destroy coro_id=19
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [retry] re-using retry for task_id=16 attempts=2
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [ warn] [engine] failed to flush chunk '1-1648192122.113977737.flb', retry in 21 seconds: task_id=16, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [out coro] cb_destroy coro_id=20
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [retry] new retry created for task_id=19 attempts=1
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [ warn] [engine] failed to flush chunk '1-1648192129.207138564.flb', retry in 8 seconds: task_id=19, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [out coro] cb_destroy coro_id=19
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [debug] [retry] re-using retry for task_id=3 attempts=3
Fri, Mar 25 2022 3:08:49 pm | [2022/03/25 07:08:49] [ warn] [engine] failed to flush chunk '1-1648192100.653122953.flb', retry in 19 seconds: task_id=3, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [input chunk] update output instances with new chunk size diff=650
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [input chunk] update output instances with new chunk size diff=1085
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [input chunk] update output instances with new chunk size diff=1182
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [input chunk] update output instances with new chunk size diff=695
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [output:es:es.0] task_id=14 assigned to thread #1
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [output:es:es.0] task_id=13 assigned to thread #0
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [upstream] KA connection #35 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [task] created task=0x7ff2f183b560 id=20 OK
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [output:es:es.0] task_id=20 assigned to thread #1
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [out coro] cb_destroy coro_id=21
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [retry] re-using retry for task_id=13 attempts=2
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [ warn] [engine] failed to flush chunk '1-1648192119.62045721.flb', retry in 18 seconds: task_id=13, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [upstream] KA connection #35 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [retry] re-using retry for task_id=14 attempts=2
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [out coro] cb_destroy coro_id=20
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [out coro] cb_destroy coro_id=21
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [ warn] [engine] failed to flush chunk '1-1648192120.74298017.flb', retry in 9 seconds: task_id=14, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [debug] [retry] new retry created for task_id=20 attempts=1
Fri, Mar 25 2022 3:08:50 pm | [2022/03/25 07:08:50] [ warn] [engine] failed to flush chunk '1-1648192130.216865179.flb', retry in 6 seconds: task_id=20, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input chunk] update output instances with new chunk size diff=650
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input chunk] update output instances with new chunk size diff=1167
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input chunk] update output instances with new chunk size diff=665
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input chunk] update output instances with new chunk size diff=657
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input chunk] update output instances with new chunk size diff=661
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input chunk] update output instances with new chunk size diff=697
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input chunk] update output instances with new chunk size diff=693
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [input chunk] update output instances with new chunk size diff=655
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [output:es:es.0] task_id=15 assigned to thread #0
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [output:es:es.0] task_id=9 assigned to thread #1
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [upstream] KA connection #35 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [task] created task=0x7ff2f183b740 id=21 OK
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [output:es:es.0] task_id=21 assigned to thread #0
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [http_client] not using http_proxy for header
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [upstream] KA connection #35 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [out coro] cb_destroy coro_id=22
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [retry] re-using retry for task_id=9 attempts=3
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [ warn] [engine] failed to flush chunk '1-1648192110.850147571.flb', retry in 37 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 is now available
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [out coro] cb_destroy coro_id=22
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [debug] [retry] re-using retry for task_id=15 attempts=2
Fri, Mar 25 2022 3:08:51 pm | [2022/03/25 07:08:51] [ warn] [engine] failed to flush chunk '1-1648192121.87279162.flb', retry in 6 seconds: task_id=15, input=tail.0 > output=es.0 (out_id=0)

@yangtian9999
Author

yangtian9999 commented Mar 25, 2022

Hi @lecaros, still getting the error, even though I already added these settings:
outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host 10.3.4.84
        Logstash_Format On
        Retry_Limit False
        #Write_Operation upsert
        Replace_Dots On

    [OUTPUT]
        Name es
        Match host.*
        Host 10.3.4.84
        Logstash_Format On
        Logstash_Prefix node
        Retry_Limit False
        #Write_Operation upsert
        Replace_Dots On

I am wondering whether I should update ES to the latest version 7 release.

@lecaros
Contributor

lecaros commented Mar 29, 2022

Hi @yangtian9999
I don't see the previous index error; that's good :).
What versions are you using (fluent-bit, ES)? Make sure you're on either 1.9.1 or 1.8.15.
Are you still receiving some of the records on the ES side, or has it stopped receiving records altogether?

@tumd
Contributor

tumd commented Mar 29, 2022

I had similar issues with "failed to flush chunk" in the fluent-bit logs, and eventually figured out that the index I was trying to send logs to already had its _type set to doc, while fluent-bit was trying to send with _type set to _doc (which is the default).
Setting Type doc in the es OUTPUT helped in my case.
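In config terms, the fix is roughly this (the host and match pattern are just the values used earlier in this thread; adjust to your setup):

[OUTPUT]
    Name            es
    Match           kube.*
    Host            10.3.4.84
    Logstash_Format On
    # match the _type already present in the existing index
    Type            doc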

@yangtian9999
Author

@lecaros Kibana 7.6.2 management, ES 7.6.2, fluent/fluent-bit 1.8.15.

@bluebrown

I am getting the same error. I have deployed the official helm chart, version 0.19.23.

I only changed the output config, since it's a subchart. I have also set Replace_Dots On.

fluent-bit:
  enabled: true
  config:
    outputs: |
      [OUTPUT]
          Name es
          Match kube.*
          Host {{ .Release.Name }}-elasticsearch-master
          Logstash_Format On
          Retry_Limit False
          Replace_Dots On
      [OUTPUT]
          Name es
          Match host.*
          Host {{ .Release.Name }}-elasticsearch-master
          Logstash_Format On
          Logstash_Prefix node
          Retry_Limit False
          Replace_Dots On

@lecaros
Contributor

lecaros commented Apr 13, 2022

Hi @yangtian9999,
Are you still receiving some of the records on the ES side, or has it stopped receiving records altogether?

To those having this same issue, can you share your config and log files with debug level enabled?
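If it helps, debug output can be turned on from the SERVICE section; a minimal sketch:

[SERVICE]
    # emit the [debug] engine/output lines shown earlier in this thread
    Log_Level debug

With the helm chart used here, the same setting is typically exposed as a top-level logLevel value in values.yaml, but verify that against your chart version.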

@evheniyt

In my case the root cause of the error was

"Invalid type: expecting [_doc] but got [flb_type]"

In the ES output configuration, I had Type flb_type. Changing it to Type _doc resolved the problem.
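For clarity, the relevant part of the OUTPUT block before and after the change (a sketch; everything else unchanged, host as used earlier in this thread):

[OUTPUT]
    Name  es
    Match *
    Host  10.3.4.84
    # was: Type flb_type, which ES rejected with the error quoted above
    Type  _doc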

@yangtian9999
Author

@evheniyt thanks, I will set this then.
Btw, although there are some warn messages, I can still search specific app logs in Elasticsearch.

@lecaros
Contributor

lecaros commented Apr 22, 2022

Hi @yangtian9999, can you confirm you are still experiencing this issue?

@yangtian9999
Author

Hi @lecaros, I think this was only a [warn] message; I checked ES and I can search the right app logs.
We can close this issue.

@Queetinliu

I use 2.0.6; no matter whether I set Type _doc or Replace_Dots On, I still see masses of the warn logs above.

@Queetinliu

As #3301 (comment) said, I added Trace_Error On to show more logs, and then I found the reason is https://github.com/fluent/fluent-bit/issues/4386. You must delete the existing index; otherwise, even if you add Replace_Dots, you will still see the warn log.
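In other words, Trace_Error is just another key on the es OUTPUT (a sketch, using the host from this thread):

[OUTPUT]
    Name        es
    Match       kube.*
    Host        10.3.4.84
    # print the full ES error response instead of only "failed to flush chunk"
    Trace_Error On

and the pre-existing index can then be dropped with a plain DELETE request (the index name below is hypothetical; with Logstash_Format On the default names look like logstash-YYYY.MM.DD):

curl -X DELETE 'http://10.3.4.84:9200/logstash-2022.03.25'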

@zeyangli

I hit this issue too; the fix is to delete the existing index and then add Replace_Dots On to the OUTPUT.

