Feature request: new configuration: 'timeout_is_error' #8
Comments
I will consider this issue next week. |
This does not solve my use case. I do not want to wait forever for the next line. In some cases, it could be hours or days before the next line occurs. If 60s go by with no new lines, that's probably it - I want to output what is in the buffer without generating an error. |
Use |
That's what I do currently, but it adds a lot of pointless complexity to my config files and generates spurious timeout warning messages. All I want is to output my timeouts the same way other lines are output. |
+1 Some time ago I requested the same in #7 |
Hmm...
You want:
Right? |
Yes. When the timeout happens, just emit whatever's in the buffer, as if a startline had come in. |
It is impossible to implement such a thing. |
Er? You already emit the buffer content, directed to timeout_label. Is it really not possible to simply do the same, but direct it to the default label instead? And it's definitely possible to not emit an error message - just make line 188 conditional. |
It is impossible. You can set
|
I have a way to work around this issue, but it is still not perfect: the multiline plugin is activated only when an error stack trace is printed. Of course, this config assumes you print a stack trace only when an error happens. But it is not perfect either, because when an error is printed and no further logs follow it, that error may never be flushed.
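The original config from this comment is not preserved in the thread; a sketch of that workaround, with an illustrative tag, key, and regexp (none of these names come from the comment), might look like:

```
<filter app.**>
  @type concat
  key message
  # Only a line that looks like the start of a Java exception begins a
  # multiline capture; subsequent "at ..." frames are appended to it.
  # Normal single-line logs never enter the buffer, so they are never
  # held back waiting for a next line.
  multiline_start_regexp /(Exception|Error)(:|$)/
  flush_interval 5
</filter>
```

As the comment notes, the last stack trace before a long silence can still sit unflushed in the buffer until the flush interval fires.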
Is there really no way to simply close a multiline capture when the timeout is reached and process it like normal? Many situations (Java logs and stack traces) have no clear multiline end. Other fluentd plugins, like the memory buffer ones, seem to be able to simply apply a flush interval to their processing, at which point all the collected data goes on to be processed normally. |
Something like this from fluentd 0.12.20+:
|
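The config referenced here is not preserved in the thread. The fluentd 0.12.20 feature being alluded to is most likely `multiline_flush_interval` on the `in_tail` multiline parser; a sketch (path, tag, and regexps are illustrative assumptions):

```
<source>
  @type tail
  path /var/log/app.log
  tag app.log
  format multiline
  format_firstline /^\d{4}-\d{2}-\d{2}/
  format1 /^(?<message>.*)/
  # Flush a partially assembled multiline event after 5s of silence,
  # as a normal record rather than an error (added in fluentd 0.12.20)
  multiline_flush_interval 5s
</source>
```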
There is no way.
Those plugins are output plugins, but this is a filter plugin. |
Got it. An unfortunate fluentd limitation. |
I've got a solution for this issue, but it is not perfect and not implemented.
Is this enough or not? |
@okkez this would not really help my case: when output happens, it always starts with a start-line followed by some arbitrary number of lines. Then there can be no output for a prolonged period of time, followed by a new event that starts with a start-line. |
A workaround for this would be to use the relabel plugin to apply the same label in both cases, i.e. when concat times out and when it does not, and then put all your output plugins under that label. Here is an example:
This will solve the duplication issue. |
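The example config from this comment is not preserved in the thread; a sketch of the relabel pattern it describes, with illustrative tags and label names:

```
<filter app.**>
  @type concat
  key log
  multiline_start_regexp /^\d{4}-\d{2}-\d{2}/
  flush_interval 60
  # On timeout, route the buffered content to @PROCESSED instead of
  # discarding it (a timeout warning is still logged, but no record is lost)
  timeout_label @PROCESSED
</filter>

# Events that were concatenated normally are sent to the same label,
# so both paths reach the same output plugins exactly once.
<match app.**>
  @type relabel
  @label @PROCESSED
</match>

<label @PROCESSED>
  <match app.**>
    @type stdout
  </match>
</label>
```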
Thank you for the comments. |
@mdamir 's workaround works nicely! Thanks :) |
I request that a new configuration option be added, 'timeout_is_error' (type boolean, default true). This option would be such that, if it is set to true, if the flush_interval expires on a buffer, an error event is generated and the content of the buffer is output under the timeout_label label (as is the current behavior), but if it is false, the content of the buffer is emitted as if it had been completed and no error is generated.
I have a number of logs which I process using concat. In each case, I can tell when a new record is starting (because it starts with a timestamp), but I have no reliable way of telling when one ends, except by waiting for the next log entry to start. As a result, I get lots of bogus timeout errors from concat. Dealing with this adds a lot of pointless, hard-to-maintain complexity and repetition to my Fluentd configuration files. It would be much easier if I could just tell concat that no, those timeouts are not errors and it should not treat them as if they were.
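If the requested option were added, usage might look like the following sketch. `timeout_is_error` is the hypothetical option being proposed here, not an existing parameter; the tag, key, and regexp are illustrative:

```
<filter app.**>
  @type concat
  key message
  multiline_start_regexp /^\d{4}-\d{2}-\d{2}/
  flush_interval 60
  # Proposed: when the flush interval expires, emit the buffer as a
  # completed record on the normal path instead of raising an error
  # event routed to timeout_label
  timeout_is_error false
</filter>
```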