Hi,
I've been facing issues with the Kinesis Streams plugin: it appears to stop sending logs to our streams periodically. I'm unsure whether it's related to this plugin specifically, but I've opened a discussion about it over at the fluentd repo.
If anyone would like to chime in, I'd greatly appreciate it.
Thanks!
Original discussion:
We've had this service working for almost 2 years, but recently it has been periodically failing to send logs to our Kinesis stream.
Versions:
fluentd (1.11.1)
fluent-plugin-kinesis (3.4.2, 3.4.1, 3.4.0, 3.3.0)
td (0.16.9)
td-client (1.0.7)
td-logger (0.3.27)
By adding a @type stdout match, I was able to confirm that all the latest logs are being delivered to fluentd, but they don't seem to reach Kinesis. This happens periodically: sometimes it will pause for 6 hours, then resume for an hour, then pause for another 3 hours, and so on. (A sketch of the kind of match section I mean is below.)
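For reference, here is a minimal version of the kind of match section I'm describing, with @type copy fanning events out to both stdout and Kinesis. The tag pattern, stream name, and region are hypothetical placeholders, not our actual config:

<match app.**>
  @type copy
  <store>
    # Mirror events to stdout so delivery into fluentd can be verified
    @type stdout
  </store>
  <store>
    @type kinesis_streams
    stream_name example-stream   # hypothetical
    region us-west-2             # hypothetical
  </store>
</match>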
I've also turned on trace logging, and here's what I'm seeing; it looks normal to me.
2023-03-31 08:57:04 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:04 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:04 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:04 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:05 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:05 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:06 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:06 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:07 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:08 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:09 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:09 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:09 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:10 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:10 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:10 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:11 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:12 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:12 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:12 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:12 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:12 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:13 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:14 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:14 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:14 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:15 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:15 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:16 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:16 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:16 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:16 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:16 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:17 -0700 [trace]: #0 writing events into buffer instance=70239868108060 metadata_size=1
2023-03-31 08:57:17 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
2023-03-31 08:57:18 -0700 [trace]: #0 enqueueing all chunks in buffer instance=70239868108060
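For context on these trace lines: in Fluentd's buffered output, "writing events into buffer" means incoming events are being staged into buffer chunks, and "enqueueing all chunks" is the periodic timer moving staged chunks onto the flush queue. Neither line confirms that a flush to Kinesis actually succeeded, so the trace can look normal even while delivery is failing downstream. Here is a sketch of buffer settings that surface flush problems sooner; all values are illustrative, not our production config:

<buffer>
  @type memory
  flush_interval 1s      # flush staged chunks every second
  retry_wait 1s          # initial wait before retrying a failed flush
  retry_max_times 10     # log an error and give up after 10 retries
</buffer>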
Sorry to bother you all; it turns out the issue was on our side.
Copied from the discussion:
I'm going to close this; it turns out the issue was on our side. Our Kinesis stream is consumed by a Lambda function before the data goes into Kibana, and that Lambda function was running out of memory and timing out. Bumping the memory size fixed the issue.
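For anyone who lands here with the same symptom: raising a Lambda function's memory allocation is a one-line change with the AWS CLI. The function name and memory size below are hypothetical placeholders:

# hypothetical function name and size; adjust to your consumer
aws lambda update-function-configuration \
  --function-name kinesis-consumer \
  --memory-size 1024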