The field "time" is missing #1833

Closed
ahrtr opened this issue Jan 27, 2018 · 7 comments

Comments

@ahrtr

ahrtr commented Jan 27, 2018

The log entries collected by fluentd can be sent to Elasticsearch successfully, and eventually I could see the log entries on Kibana. But the issue is that the field "time" is missing.

I found there was a related issue (see below), but I could not find the parameters "time_key" and "keep_time_key" in the official online documentation. What did I miss?
#1360

@repeatedly
Member

> I did not find the parameters "time_key" and "keep_time_key" in the official online documentation. What did I miss?

They exist: https://docs.fluentd.org/v1.0/articles/parse-section#parse-parameters
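
For reference, a minimal sketch of a parse section using those two parameters (the tail source, path, and tag here are hypothetical placeholders for illustration, not taken from this thread):

<source>
  @type tail
  path /var/log/app.log             # hypothetical log file, for illustration
  tag app.log
  <parse>
    @type json
    time_key time                   # field that holds the event timestamp
    time_type string
    time_format %Y-%m-%dT%H:%M:%SZ  # must match how "time" is formatted
    keep_time_key true              # keep the "time" field in the record
  </parse>
</source>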

@ahrtr
Author

ahrtr commented Jan 27, 2018

Thanks, it works.

But there is a minor issue. Actually I generated the following example log entry:
{"level":"info", "msg":"hello", "time":"2018-01-27T02:38:15Z"}

But docker converted it to:
{"log": "{"level":"info", "msg":"hello", "time":"2018-01-27T02:38:15Z"}, "time":"2018-01-27T02:38:16.382229755Z", "stream":"stderr"}

It's acceptable, and I configured the following filter,

    <filter **>
        @type parser
        format json
        key_name log
        reserve_data true
        hash_value_field log
    </filter>

The issue is that the field log.time is lost, while the field "time" added by Docker is kept. How can I keep log.time? Of course, it's a minor issue.

@dcosson

dcosson commented Feb 6, 2018

@repeatedly Something seems broken with time for me as well. Here's a sample configuration I'm testing:

<source>
  @type http
  port 8889
  bind 0.0.0.0
  <parse>
    time_key time
    time_type string
    time_format %Y-%m-%dT%H:%M:%S.%NZ
    keep_time_key true
  </parse>
</source>

<match **>
  @type stdout
</match>

Then I run fluentd via: docker run -it -v `pwd`/fluentd_etc:/fluentd/etc -p 8889:8889 fluent/fluentd:v1.1.0

And I query it with:
curl -X POST -d 'json={"action":"foo", "time":"2018-01-01T08:01:02.345678999Z"}' http://localhost:8889/some.tag

I get the following output:

2018-02-06 02:06:51 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2018-02-06 02:06:51 +0000 [info]: using configuration file: <ROOT>
  <source>
    @type http
    port 8889
    bind "0.0.0.0"
    <parse>
      time_key "time"
      time_type string
      time_format "%Y-%m-%dT%H:%M:%S.%NZ"
      keep_time_key true
    </parse>
  </source>
  <match **>
    @type stdout
  </match>
</ROOT>
2018-02-06 02:06:51 +0000 [info]: starting fluentd-1.1.0 pid=5 ruby="2.3.6"
2018-02-06 02:06:51 +0000 [info]: spawn command to main:  cmdline=["/usr/bin/ruby", "-Eascii-8bit:ascii-8bit", "/usr/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--under-supervisor"]
2018-02-06 02:06:51 +0000 [info]: gem 'fluentd' version '1.1.0'
2018-02-06 02:06:51 +0000 [info]: adding match pattern="**" type="stdout"
2018-02-06 02:06:51 +0000 [info]: adding source type="http"
2018-02-06 02:06:51 +0000 [info]: #0 starting fluentd worker pid=15 ppid=5 worker=0
2018-02-06 02:06:51 +0000 [info]: #0 fluentd worker is now running worker=0
2018-02-06 02:06:51.941641805 +0000 fluent.info: {"worker":0,"message":"fluentd worker is now running worker=0"}
2018-02-06 02:06:53.376000766 +0000 some.tag: {"action":"foo"}

So it looks as if the time field is not being parsed at all, and it's definitely ignoring the keep_time_key setting. There are no errors parsing my config file or anything.

And lastly, I've double-checked that my time_format is valid in Ruby:

docker run -it -v `pwd`/fluentd_etc:/fluentd/etc -p 8889:8889 fluent/fluentd:v1.1.0 irb
irb(main):001:0> require('time')
=> true
irb(main):002:0> Time.strptime("2018-01-01T08:01:02.345678999Z", "%Y-%m-%dT%H:%M:%S.%NZ")
=> 2018-01-01 08:01:02 +0000

@dcosson

dcosson commented Feb 6, 2018

Also worth noting: omitting the "time" field from my curl command doesn't trigger any error or warning in the output, and no bogus value that I pass in the conf file for time_format (or any of the other time options) triggers an error or warning either.

@ahrtr
Author

ahrtr commented Feb 7, 2018

@dcosson It seems that you missed "@type json" in your parser; the parse section should be as below:

    <parse>
      @type json
      time_key "time"
      time_type string
      time_format "%Y-%m-%dT%H:%M:%S.%NZ"
      keep_time_key true
    </parse>
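
With "@type json" in place, and assuming nothing else about the setup changes, the same curl request from above should produce output along these lines, with the event timestamp taken from the record's "time" field and the field itself kept in the record:

2018-01-01 08:01:02.345678999 +0000 some.tag: {"action":"foo","time":"2018-01-01T08:01:02.345678999Z"}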

@fujimotos
Member

fujimotos commented Feb 22, 2018

> The issue is that the field log.time is lost, while the field "time" added by Docker is kept. How can I keep log.time? Of course, it's a minor issue.

@ahrtr FWIW, your particular problem (the "time" field being discarded while parsing) can be resolved just by setting the keep_time_key flag to true.

Check the following example configuration for how to set the option:

<filter debug.**>
    @type parser
    format json
    key_name log
    reserve_data true
    hash_value_field log
    time_format "%Y-%m-%dT%H:%M:%S"
    keep_time_key true
</filter>

How I verified that it works

I confirmed this actually works on my environment (Fluentd v1.1.0) using the following data record:

{"log":"{\"time\":\"2018-01-27T02:38:16\"}"}

Before turning on keep_time_key, Fluentd discards the "time" field, as you reported:

2018-01-27 02:38:16.000000000 +0900 debug.log: {"log":{}}

Then, after turning on keep_time_key:

2018-01-27 02:38:16.000000000 +0900 debug.log: {"log":{"time":"2018-01-27T02:38:16"}}

@ahrtr
Author

ahrtr commented Feb 22, 2018

@fujimotos, thanks for the info. I will try it out sometime later.
