Logs periodically not sending to kinesis using kinesis_streams plugin #223

Closed
mrahman1-godaddy opened this issue Mar 31, 2023 · 1 comment

Comments


mrahman1-godaddy commented Mar 31, 2023

Hi,

I've been facing issues with the kinesis streams plugin: it periodically stops sending logs to our streams. I'm not sure whether it's related to this plugin exactly, but I've opened a discussion about it over at the fluentd repo.

If anyone would like to chime in, I'd greatly appreciate it.

Ty!

Original discussion:

We've had this service working for almost 2 years, but recently it has periodically been failing to send logs to our kinesis stream.

Versions:
fluentd (1.11.1)
fluent-plugin-kinesis (3.4.2, 3.4.1, 3.4.0, 3.3.0)
td (0.16.9)
td-client (1.0.7)
td-logger (0.3.27)

Here is the config file for td-agent:

<source>
  @type http
  port 12385
  keepalive_timeout 0
</source>

<match *application_**>
  @type kinesis_streams
  region us-west-2
  stream_name <REDACTED>
  aws_key_id <REDACTED>
  aws_sec_key <REDACTED>
  <buffer time>
    timekey      30s
    timekey_wait 0s
  </buffer>
</match>

By using an @type stdout match, I was able to see all the latest logs being delivered to fluentd, but it seems they don't reach kinesis (this happens periodically; sometimes it will pause for 6 hours, then resume for an hour, then pause for another 3 hours, etc.).
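For reference, a minimal sketch of such a temporary debug match is below; the tag pattern is assumed to mirror the kinesis_streams match above (it is not confirmed in the original report). Since a <match> consumes the events it matches, this block would stand in for the kinesis_streams output while troubleshooting, or the two outputs could be combined via the copy output plugin.

# Hypothetical debug output: temporarily replaces the kinesis_streams <match>
# so incoming events are printed to the td-agent log instead of being sent on.
<match *application_**>
  @type stdout
</match>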

I've also turned on trace logging; here's what I'm seeing, and it looks normal to me.

2023-03-31 08:57:04 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:04 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:04 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:04 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:05 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:05 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:06 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:06 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:07 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:08 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:09 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:09 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:09 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:10 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:10 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:10 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:12 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:12 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:12 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:12 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:12 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:14 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:14 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:14 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:15 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:15 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:16 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:16 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:16 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:16 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:16 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:17 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:17 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:18 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
mrahman1-godaddy (Author) commented:

Sorry to bother you all, but it turns out the issue was on our side of things.

Copied from discussion:

I'm going to close this; it turns out the issue was on our side. Our kinesis stream is consumed by a lambda function before the data goes into kibana, and that lambda function was running out of memory and timing out. Bumping its memory size fixed the issue.
