Restoring chunks with same metadata #2698

Closed
fleprovost opened this issue Nov 14, 2019 · 4 comments · Fixed by #2824

fleprovost commented Nov 14, 2019

Describe the bug
When Fluentd restarts, it restores all chunks stored in buffer files and intends to flush them, but only the last restored chunk is actually flushed.

To Reproduce

  • Use any input that can ship thousands of events/s
  • Use an output that supports file buffering
  • Set chunk_limit_records to 10
  • Send 1000 × {"timestamp": 1} JSON events
    (100 buffer files are created)
  • Before the events are flushed, reload fluentd service
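The event generation in the steps above can be sketched in Ruby (the event count and shape come from the steps; the path is illustrative — the issue's config tails /fluentd/log/foo.log, a temp file is used here so the sketch is self-contained):

```ruby
require "tmpdir"
require "json"

# Append 1000 identical JSON events to a tailed log file.
# With chunk_limit_records 10, these 1000 events fill 100 buffer chunks.
log_path = File.join(Dir.mktmpdir, "foo.log")

File.open(log_path, "a") do |f|
  1000.times { f.puts({ "timestamp" => 1 }.to_json) }
end

puts File.foreach(log_path).count  # 1000
```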

Expected behavior
All chunks are restored and flushed when Fluentd restarts

Your Environment

  • Fluentd or td-agent version: fluentd 1.7.4
  • Operating system:
    NAME="Alpine Linux"
    VERSION_ID=3.9.4
  • Kernel version: 4.4.0-154-generic

Your Configuration

<system>
  log_level trace
</system>

<source>
  @type tail
  path /fluentd/log/foo.log
  pos_file /fluentd/log/td-agent/foo.log.pos
  tag foo.bar
  <parse>
    @type json
  </parse>
</source>

<filter foo.bar>
    @type record_transformer
    enable_ruby
    <record>
      newtime ${record["timestamp"]}
    </record>
    renew_time_key newtime
</filter>

<match foo.bar>
  @type amqp
  exchange foo
  key ""
  addresses 172.18.0.2:5672
  user guest
  pass guest
  vhost /
  <buffer time>
    @type file
    timekey 30
    timekey_wait 0
    chunk_limit_records 10
    path /fluentd/buffer/
  </buffer>
</match>

Your Error Log

Before reloading Fluentd, all chunks are flushed correctly, but at restart:

2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f0c58935521fbf76d5e691216.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78ecc0a41d98d0fccef5a906ab4.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f0afa28e73d5e198227f834a6.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78eceeb22cfddb0516e29fad0bf.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f0eee72d351461588ead60233.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78ee5955c88f80ff89826031ced.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f11d9e4162a6f52933635699a.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78eced24d53777f80807064ca49.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78ef312163e22da0dcea03eaad8.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f0b5e73337978add08578ff79.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78ed9429bf7afc4010636424da6.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f0b8fba5bd26a5015402bf334.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78ef3b2c338e22bc016e57f0bf2.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78ee90e69cfef38e857e16e1d29.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f1016e3d1cc0ffa8d05282742.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f0e9feddc6605e140d41c0263.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78ef3d02b6ece5008cd80cca186.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f0c259bf7bbe07eebe33b6929.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f0d6a81bce065426511dbfe3c.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78ef53c9c7e9e0bcf36a9f10e15.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78ed65d3c9ba7ac526d1d5f9e22.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f0aa814d119979ae181b7a2a9.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f11a54d42c6d731084b06592d.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78ee8ab8ab0d2d502b617621901.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f11f1917ac030e7e2c3b84a4c.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78ef4494d94ee2ce3f1b426b5de.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78eec5cc5ba112991296cf533df.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /fluentd/buffer/buffer.b5974f78f0f2054c18053b186689eca90.log
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: buffer started instance=47070979384460 stage_size=310 queue_size=0
2019-11-14 14:42:44 +0000 [info]: #0 fluent/log.rb:322:info: Connecting to RabbitMQ...
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: tailing paths: target = /fluentd/log/foo.log | existing = 
2019-11-14 14:42:44 +0000 [info]: #0 fluent/log.rb:322:info: following tail of /fluentd/log/foo.log
2019-11-14 14:42:44 +0000 [info]: #0 fluent/log.rb:322:info: fluentd worker is now running worker=0
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: flush_thread actually running
2019-11-14 14:42:44 +0000 [debug]: #0 fluent/log.rb:302:debug: enqueue_thread actually running
2019-11-14 14:42:45 +0000 [debug]: #0 fluent/log.rb:302:debug: Sending message {"timestamp":1,"newtime":1}, :key => , :headers => {} into foo
2019-11-14 14:42:45 +0000 [debug]: #0 fluent/log.rb:302:debug: Sending message {"timestamp":1,"newtime":1}, :key => , :headers => {} into foo
2019-11-14 14:42:45 +0000 [debug]: #0 fluent/log.rb:302:debug: Sending message {"timestamp":1,"newtime":1}, :key => , :headers => {} into foo
2019-11-14 14:42:45 +0000 [debug]: #0 fluent/log.rb:302:debug: Sending message {"timestamp":1,"newtime":1}, :key => , :headers => {} into foo
2019-11-14 14:42:45 +0000 [debug]: #0 fluent/log.rb:302:debug: Sending message {"timestamp":1,"newtime":1}, :key => , :headers => {} into foo
2019-11-14 14:42:45 +0000 [debug]: #0 fluent/log.rb:302:debug: Sending message {"timestamp":1,"newtime":1}, :key => , :headers => {} into foo
2019-11-14 14:42:45 +0000 [debug]: #0 fluent/log.rb:302:debug: Sending message {"timestamp":1,"newtime":1}, :key => , :headers => {} into foo
2019-11-14 14:42:45 +0000 [debug]: #0 fluent/log.rb:302:debug: Sending message {"timestamp":1,"newtime":1}, :key => , :headers => {} into foo
2019-11-14 14:42:45 +0000 [debug]: #0 fluent/log.rb:302:debug: Sending message {"timestamp":1,"newtime":1}, :key => , :headers => {} into foo
2019-11-14 14:42:45 +0000 [debug]: #0 fluent/log.rb:302:debug: Sending message {"timestamp":1,"newtime":1}, :key => , :headers => {} into foo

Only the last restored chunk is flushed. If we reload as many times as there are remaining chunks, they are eventually all flushed.

Additional context
When chunk_limit_records is reached, a new chunk is created, but it has the same metadata as the previous one. If those chunks have not been flushed before a restart, then on resume the @stage map

@stage, @queue = resume

contains only one chunk, because all the remaining chunks share the same metadata

stage[chunk.metadata] = chunk

and only the last one restored is kept in the map.

Maybe we could add information to the metadata indicating that a chunk was created because chunk_limit_records was reached, so that each of these chunks would have unique metadata?
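The staging behavior described above can be sketched with plain Ruby (a minimal illustration, not Fluentd's actual Buffer classes — the Struct names here are hypothetical stand-ins): chunks restored into a Hash keyed by metadata overwrite one another when their metadata is identical.

```ruby
# Stand-ins for chunk metadata and chunks (illustrative only).
Metadata = Struct.new(:timekey, :tag, :variables)
Chunk = Struct.new(:id, :metadata)

# Ten chunks created because chunk_limit_records was reached: same metadata.
meta = Metadata.new(1574805870, "foo.bar", nil)
chunks = (1..10).map { |i| Chunk.new("chunk-#{i}", meta) }

# Restoring into a metadata-keyed stage, as in stage[chunk.metadata] = chunk:
stage = {}
chunks.each do |chunk|
  stage[chunk.metadata] = chunk  # same key every time -> previous entry replaced
end

puts stage.size       # 1
puts stage[meta].id   # "chunk-10" — only the last restored chunk survives
```

This is why stage_size drops to a single chunk's worth of records per metadata after resume: Struct-like metadata with equal members hashes to the same key, so nine of the ten restored chunks are silently displaced from the stage.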

@ganmacs
Member

ganmacs commented Nov 26, 2019

Hi. I couldn't reproduce it. Are there any other steps?

2019-11-26 14:11:44 -0800 [info]: fluent/log.rb:322:info: parsing config file is succeeded path="example/debug/restore.conf"
2019-11-26 14:11:44 -0800 [info]: fluent/log.rb:322:info: gem 'fluentd' version '1.8.0.rc2'
2019-11-26 14:11:44 -0800 [info]: fluent/log.rb:322:info: gem 'fluent-plugin-gelf-hs' version '1.0.8'
2019-11-26 14:11:44 -0800 [trace]: fluent/log.rb:281:trace: registered output plugin 'stdout'
2019-11-26 14:11:44 -0800 [trace]: fluent/log.rb:281:trace: registered buffer plugin 'file'
2019-11-26 14:11:44 -0800 [trace]: fluent/log.rb:281:trace: registered formatter plugin 'stdout'
2019-11-26 14:11:44 -0800 [trace]: fluent/log.rb:281:trace: registered formatter plugin 'json'
2019-11-26 14:11:44 -0800 [trace]: fluent/log.rb:281:trace: registered parser plugin 'in_http'
2019-11-26 14:11:44 -0800 [trace]: fluent/log.rb:281:trace: registered input plugin 'http'
2019-11-26 14:11:44 -0800 [trace]: fluent/log.rb:281:trace: registered parser plugin 'msgpack'
2019-11-26 14:11:44 -0800 [trace]: fluent/log.rb:281:trace: registered parser plugin 'json'
2019-11-26 14:11:44 -0800 [debug]: fluent/log.rb:302:debug: No fluent logger for internal event
2019-11-26 14:11:44 -0800 [info]: fluent/log.rb:322:info: using configuration file: <ROOT>
  <source>
    @type http
    port 8080
  </source>
  <match *>
    @type stdout
    <buffer time,tag>
      @type "file"
      timekey 30
      timekey_wait 0
      path "/Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf"
      chunk_limit_records 10
      flush_at_shutdown false
    </buffer>
  </match>
</ROOT>
2019-11-26 14:11:44 -0800 [info]: fluent/log.rb:322:info: starting fluentd-1.8.0.rc2 pid=59503 ruby="2.6.5"
2019-11-26 14:11:44 -0800 [info]: fluent/log.rb:322:info: spawn command to main:  cmdline=["/Users/yuta.iwama/.rbenv/versions/2.6.5/bin/ruby", "-Eascii-8bit:ascii-8bit", "-rbundler/setup", "bin/fluentd", "-c", "example/debug/restore.conf", "-vv", "--under-supervisor"]
2019-11-26 14:11:44 -0800 [info]: fluent/log.rb:322:info: adding match pattern="*" type="stdout"
2019-11-26 14:11:44 -0800 [trace]: #0 fluent/log.rb:281:trace: registered output plugin 'stdout'
2019-11-26 14:11:44 -0800 [trace]: #0 fluent/log.rb:281:trace: registered buffer plugin 'file'
2019-11-26 14:11:44 -0800 [trace]: #0 fluent/log.rb:281:trace: registered formatter plugin 'stdout'
2019-11-26 14:11:44 -0800 [trace]: #0 fluent/log.rb:281:trace: registered formatter plugin 'json'
2019-11-26 14:11:44 -0800 [info]: fluent/log.rb:322:info: adding source type="http"
2019-11-26 14:11:44 -0800 [trace]: #0 fluent/log.rb:281:trace: registered parser plugin 'in_http'
2019-11-26 14:11:44 -0800 [trace]: #0 fluent/log.rb:281:trace: registered input plugin 'http'
2019-11-26 14:11:44 -0800 [trace]: #0 fluent/log.rb:281:trace: registered parser plugin 'msgpack'
2019-11-26 14:11:44 -0800 [trace]: #0 fluent/log.rb:281:trace: registered parser plugin 'json'
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: No fluent logger for internal event
2019-11-26 14:11:44 -0800 [info]: #0 fluent/log.rb:322:info: starting fluentd worker pid=59517 ppid=59503 worker=0
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112b36895567a5ab176955db091.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112afd24eb6c013cc708b0f7f99.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112be38cda1cea68788e1235d8c.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112d1237aabaac770ceefd0b8c6.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112c8667899d7335b7f37cfff7f.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112ce66f4e074dfd0948b7f2df8.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112c237ddaca91c80734c7df6e8.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112a6d181067d5607fed8078ec5.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112acc727d69176ce4dc98a3ee7.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112b76d9f1a2023878bbee2ef34.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112cafe3dbe1a44268aa7fa09f4.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112bae5917fda55dafbee01ac9b.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: restoring buffer file: path = /Users/yuta.iwama/src/github.com/fluent/fluentd/example/debug/buf/buffer.q59847112a9a8536f2ea9faf1a1a1e200.log
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: buffer started instance=70154479434980 stage_size=0 queue_size=8710
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: listening http bind="0.0.0.0" port=8080
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: flush_thread actually running
2019-11-26 14:11:44 -0800 [debug]: #0 fluent/log.rb:302:debug: enqueue_thread actually running
2019-11-26 14:11:44 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:44 -0800 [info]: #0 fluent/log.rb:322:info: fluentd worker is now running worker=0
2019-11-26 14:11:45 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:45 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:45 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112a9a8536f2ea9faf1a1a1e200"
2019-11-26 14:11:45 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:45 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112a9a8536f2ea9faf1a1a1e200"
2019-11-26 14:04:41.657718000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.659346000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.659949000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.660754000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.661354000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.661845000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.663383000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.664145000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.664622000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.665514000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:45 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112a9a8536f2ea9faf1a1a1e200"
2019-11-26 14:11:45 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112a9a8536f2ea9faf1a1a1e200" delayed=false
2019-11-26 14:11:45 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112a9a8536f2ea9faf1a1a1e200" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:45 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:45 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112a9a8536f2ea9faf1a1a1e200" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:45 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112a9a8536f2ea9faf1a1a1e200"
2019-11-26 14:11:46 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:46 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:46 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:46 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112afd24eb6c013cc708b0f7f99"
2019-11-26 14:11:46 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:46 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112afd24eb6c013cc708b0f7f99"
2019-11-26 14:04:41.683078000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.684644000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.685310000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.686080000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.687496000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.687994000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.688720000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.689648000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.689942000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.690835000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:46 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112afd24eb6c013cc708b0f7f99"
2019-11-26 14:11:46 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112afd24eb6c013cc708b0f7f99" delayed=false
2019-11-26 14:11:46 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112afd24eb6c013cc708b0f7f99" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:46 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112afd24eb6c013cc708b0f7f99" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:46 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112afd24eb6c013cc708b0f7f99"
2019-11-26 14:11:47 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:47 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:47 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:47 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112be38cda1cea68788e1235d8c"
2019-11-26 14:11:47 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:47 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112be38cda1cea68788e1235d8c"
2019-11-26 14:04:41.742064000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.747282000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.748479000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.748770000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.749553000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.750075000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.750504000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.751430000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.752000000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.752883000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:47 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112be38cda1cea68788e1235d8c"
2019-11-26 14:11:47 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112be38cda1cea68788e1235d8c" delayed=false
2019-11-26 14:11:47 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112be38cda1cea68788e1235d8c" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:47 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112be38cda1cea68788e1235d8c" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:47 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112be38cda1cea68788e1235d8c"
2019-11-26 14:11:48 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:48 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:48 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:48 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112d1237aabaac770ceefd0b8c6"
2019-11-26 14:11:48 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:48 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112d1237aabaac770ceefd0b8c6"
2019-11-26 14:04:41.817925000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.821812000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.822395000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.823019000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.823521000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.824375000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.824770000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.825202000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.825436000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.825760000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:48 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112d1237aabaac770ceefd0b8c6"
2019-11-26 14:11:48 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112d1237aabaac770ceefd0b8c6" delayed=false
2019-11-26 14:11:48 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112d1237aabaac770ceefd0b8c6" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:48 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112d1237aabaac770ceefd0b8c6" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:48 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112d1237aabaac770ceefd0b8c6"
2019-11-26 14:11:49 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:49 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:49 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:49 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112c8667899d7335b7f37cfff7f"
2019-11-26 14:11:49 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:49 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112c8667899d7335b7f37cfff7f"
2019-11-26 14:04:41.783732000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.784518000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.785005000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.785455000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.786145000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.786664000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.787296000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.788001000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.788717000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.789142000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:49 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112c8667899d7335b7f37cfff7f"
2019-11-26 14:11:49 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112c8667899d7335b7f37cfff7f" delayed=false
2019-11-26 14:11:49 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112c8667899d7335b7f37cfff7f" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:49 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112c8667899d7335b7f37cfff7f" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:49 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112c8667899d7335b7f37cfff7f"
2019-11-26 14:11:50 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:50 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:50 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:50 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112ce66f4e074dfd0948b7f2df8"
2019-11-26 14:11:50 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:50 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112ce66f4e074dfd0948b7f2df8"
2019-11-26 14:04:41.808273000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.809431000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.810696000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.811562000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.811935000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.812453000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.812834000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.813253000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.813687000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.814110000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:50 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112ce66f4e074dfd0948b7f2df8"
2019-11-26 14:11:50 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112ce66f4e074dfd0948b7f2df8" delayed=false
2019-11-26 14:11:50 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112ce66f4e074dfd0948b7f2df8" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:50 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112ce66f4e074dfd0948b7f2df8" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:50 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112ce66f4e074dfd0948b7f2df8"
2019-11-26 14:11:51 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:51 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:51 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:51 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112c237ddaca91c80734c7df6e8"
2019-11-26 14:11:51 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:51 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112c237ddaca91c80734c7df6e8"
2019-11-26 14:04:41.758350000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.773456000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.773850000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.774191000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.774527000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.774936000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.775473000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.775772000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.776257000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.780023000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:51 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112c237ddaca91c80734c7df6e8"
2019-11-26 14:11:51 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112c237ddaca91c80734c7df6e8" delayed=false
2019-11-26 14:11:51 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112c237ddaca91c80734c7df6e8" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:51 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112c237ddaca91c80734c7df6e8" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:51 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112c237ddaca91c80734c7df6e8"
2019-11-26 14:11:52 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:52 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:52 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:52 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112a6d181067d5607fed8078ec5"
2019-11-26 14:11:52 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:52 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112a6d181067d5607fed8078ec5"
2019-11-26 14:04:41.643818000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.647616000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.648104000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.648440000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.648905000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.649440000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.649722000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.650251000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.650759000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.651216000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:52 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112a6d181067d5607fed8078ec5"
2019-11-26 14:11:52 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112a6d181067d5607fed8078ec5" delayed=false
2019-11-26 14:11:52 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112a6d181067d5607fed8078ec5" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:52 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112a6d181067d5607fed8078ec5" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:52 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112a6d181067d5607fed8078ec5"
2019-11-26 14:11:53 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:54 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:54 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:54 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112acc727d69176ce4dc98a3ee7"
2019-11-26 14:11:54 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:54 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112acc727d69176ce4dc98a3ee7"
2019-11-26 14:04:41.669711000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.671924000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.672883000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.674015000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.674555000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.675547000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.675922000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.677851000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.678660000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.679402000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:54 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112acc727d69176ce4dc98a3ee7"
2019-11-26 14:11:54 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112acc727d69176ce4dc98a3ee7" delayed=false
2019-11-26 14:11:54 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112acc727d69176ce4dc98a3ee7" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:54 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112acc727d69176ce4dc98a3ee7" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:54 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112acc727d69176ce4dc98a3ee7"
2019-11-26 14:11:55 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:55 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:55 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:55 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112b76d9f1a2023878bbee2ef34"
2019-11-26 14:11:55 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:55 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112b76d9f1a2023878bbee2ef34"
2019-11-26 14:04:41.711491000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.715305000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.715601000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.716546000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.717226000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.718098000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.718683000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.719530000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.720995000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.721727000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:55 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112b76d9f1a2023878bbee2ef34"
2019-11-26 14:11:55 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112b76d9f1a2023878bbee2ef34" delayed=false
2019-11-26 14:11:55 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112b76d9f1a2023878bbee2ef34" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:55 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112b76d9f1a2023878bbee2ef34" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:55 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112b76d9f1a2023878bbee2ef34"
2019-11-26 14:11:56 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:56 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:56 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:56 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112cafe3dbe1a44268aa7fa09f4"
2019-11-26 14:11:56 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:56 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112cafe3dbe1a44268aa7fa09f4"
2019-11-26 14:04:41.794382000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.795857000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.797138000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.797891000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.798502000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.798805000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.799282000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.799630000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.800005000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.800539000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:56 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112cafe3dbe1a44268aa7fa09f4"
2019-11-26 14:11:56 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112cafe3dbe1a44268aa7fa09f4" delayed=false
2019-11-26 14:11:56 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112cafe3dbe1a44268aa7fa09f4" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:56 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112cafe3dbe1a44268aa7fa09f4" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:56 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112cafe3dbe1a44268aa7fa09f4"
2019-11-26 14:11:57 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:57 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:57 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:57 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112bae5917fda55dafbee01ac9b"
2019-11-26 14:11:57 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:57 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112bae5917fda55dafbee01ac9b"
2019-11-26 14:04:41.728419000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.730391000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.731421000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.731847000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.732514000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.732938000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.733831000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.734282000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.735481000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.736284000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:57 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112bae5917fda55dafbee01ac9b"
2019-11-26 14:11:57 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112bae5917fda55dafbee01ac9b" delayed=false
2019-11-26 14:11:57 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112bae5917fda55dafbee01ac9b" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:57 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112bae5917fda55dafbee01ac9b" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:57 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112bae5917fda55dafbee01ac9b"
2019-11-26 14:11:58 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:11:58 -0800 [trace]: #0 fluent/log.rb:281:trace: dequeueing a chunk instance=70154479434980
2019-11-26 14:11:58 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk dequeued instance=70154479434980 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:58 -0800 [trace]: #0 fluent/log.rb:281:trace: trying flush for a chunk chunk="59847112b36895567a5ab176955db091"
2019-11-26 14:11:58 -0800 [trace]: #0 fluent/log.rb:281:trace: adding write count instance=70154479331440
2019-11-26 14:11:58 -0800 [trace]: #0 fluent/log.rb:281:trace: executing sync write chunk="59847112b36895567a5ab176955db091"
2019-11-26 14:04:41.697106000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.698876000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.699780000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.700258000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.700801000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.701770000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.702213000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.704078000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.704920000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:04:41.705374000 -0800 dummy: {"q":"ruby","max":"50"}
2019-11-26 14:11:58 -0800 [trace]: #0 fluent/log.rb:281:trace: write operation done, committing chunk="59847112b36895567a5ab176955db091"
2019-11-26 14:11:58 -0800 [trace]: #0 fluent/log.rb:281:trace: committing write operation to a chunk chunk="59847112b36895567a5ab176955db091" delayed=false
2019-11-26 14:11:58 -0800 [trace]: #0 fluent/log.rb:281:trace: purging a chunk instance=70154479434980 chunk_id="59847112b36895567a5ab176955db091" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:58 -0800 [trace]: #0 fluent/log.rb:281:trace: chunk purged instance=70154479434980 chunk_id="59847112b36895567a5ab176955db091" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=1574805870, tag="dummy", variables=nil>
2019-11-26 14:11:58 -0800 [trace]: #0 fluent/log.rb:281:trace: done to commit a chunk chunk="59847112b36895567a5ab176955db091"
2019-11-26 14:11:59 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:12:00 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:12:01 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
^C2019-11-26 14:12:01 -0800 [debug]: #0 fluent/log.rb:302:debug: fluentd main process get SIGINT
2019-11-26 14:12:01 -0800 [info]: fluent/log.rb:322:info: Received graceful stop
2019-11-26 14:12:02 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:12:02 -0800 [debug]: #0 fluent/log.rb:302:debug: fluentd main process get SIGTERM
2019-11-26 14:12:02 -0800 [debug]: #0 fluent/log.rb:302:debug: getting start to shutdown main process
2019-11-26 14:12:03 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:12:03 -0800 [info]: #0 fluent/log.rb:322:info: fluentd worker is now stopping worker=0
2019-11-26 14:12:03 -0800 [info]: #0 fluent/log.rb:322:info: shutting down fluentd worker worker=0
2019-11-26 14:12:03 -0800 [debug]: #0 fluent/log.rb:302:debug: calling stop on input plugin type=:http plugin_id="object:3fce1c1b7dec"
2019-11-26 14:12:03 -0800 [debug]: #0 fluent/log.rb:302:debug: calling stop on output plugin type=:stdout plugin_id="object:3fce1cd1e870"
2019-11-26 14:12:03 -0800 [trace]: #0 fluent/log.rb:281:trace: enqueueing all chunks in buffer instance=70154479434980
2019-11-26 14:12:03 -0800 [debug]: #0 fluent/log.rb:302:debug: preparing shutdown input plugin type=:http plugin_id="object:3fce1c1b7dec"
2019-11-26 14:12:03 -0800 [info]: #0 fluent/log.rb:322:info: shutting down input plugin type=:http plugin_id="object:3fce1c1b7dec"
2019-11-26 14:12:03 -0800 [debug]: #0 fluent/log.rb:302:debug: preparing shutdown output plugin type=:stdout plugin_id="object:3fce1cd1e870"
2019-11-26 14:12:03 -0800 [info]: #0 fluent/log.rb:322:info: shutting down output plugin type=:stdout plugin_id="object:3fce1cd1e870"
2019-11-26 14:12:03 -0800 [debug]: #0 fluent/log.rb:302:debug: calling after_shutdown on input plugin type=:http plugin_id="object:3fce1c1b7dec"
2019-11-26 14:12:03 -0800 [debug]: #0 fluent/log.rb:302:debug: calling after_shutdown on output plugin type=:stdout plugin_id="object:3fce1cd1e870"
2019-11-26 14:12:03 -0800 [debug]: #0 fluent/log.rb:302:debug: closing input plugin type=:http plugin_id="object:3fce1c1b7dec"
2019-11-26 14:12:03 -0800 [debug]: #0 fluent/log.rb:302:debug: closing output plugin type=:stdout plugin_id="object:3fce1cd1e870"
2019-11-26 14:12:03 -0800 [debug]: #0 fluent/log.rb:302:debug: closing buffer instance=70154479434980
2019-11-26 14:12:03 -0800 [debug]: #0 fluent/log.rb:302:debug: calling terminate on input plugin type=:http plugin_id="object:3fce1c1b7dec"
2019-11-26 14:12:03 -0800 [debug]: #0 fluent/log.rb:302:debug: calling terminate on output plugin type=:stdout plugin_id="object:3fce1cd1e870"
2019-11-26 14:12:03 -0800 [info]: fluent/log.rb:322:info: Worker 0 finished with status 0

@fleprovost (Author)
@ganmacs

In my example, chunks are still staged (buffer.b* filenames) when Fluentd restarts. I think that makes a big difference, since the behavior of the resume function in buf_file differs according to chunk.state:

case chunk.state

@fleprovost (Author)
@ganmacs Any news? The problem is still there. For context, our Fluentd service (dockerized) is reloaded weekly due to log rotation, and some orphan chunks may remain because of this problem. Using flush_at_shutdown is not a solution for us, because we want the buffer to be flushed at startup.

@ganmacs ganmacs self-assigned this Feb 12, 2020
@ganmacs (Member) commented Feb 13, 2020

Sorry for the delay.
I've confirmed this can happen with the combination of write_step_by_step and buffer flushing. I'll fix it.
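The reported symptom (only the last restored chunk gets flushed) is consistent with staged chunks being keyed by their metadata: the trace logs above show every chunk carrying an identical `Metadata` struct (timekey=1574805870, tag="dummy", variables=nil). A hedged illustration of that collision, assuming a metadata-keyed stage (a simplification, not the Fluentd source):

```ruby
# Illustration of the collision under the assumption that the stage is
# a Hash keyed by chunk metadata: restoring several buffer.b* chunks
# that share the same (timekey, tag, variables) overwrites earlier
# entries, so only the last restored chunk remains staged.
Metadata = Struct.new(:timekey, :tag, :variables)

stage = {}
meta = Metadata.new(1574805870, "dummy", nil)   # identical for all chunks
restored_chunk_ids = %w[59847112ce66 59847112c237 59847112a6d1]

restored_chunk_ids.each do |chunk_id|
  stage[meta] = chunk_id  # each restore replaces the previous chunk
end

stage.size   # => 1
stage[meta]  # => "59847112a6d1" (only the last chunk is left to flush)
```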
