
Added parameter to set await termination timeout #179

Merged — 2 commits merged into splunk:main on Sep 22, 2021

Conversation

@snorwin (Contributor) commented May 20, 2021

This partially solves #152, and I think it is worth adding this option.

@snorwin (Contributor, Author) commented May 24, 2021

I tried to address the issue with the queued messages in the Dispatcher by waiting until they are forwarded to the ExecutorService.
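To illustrate the idea (this is a minimal sketch, not the PR's actual code, and all class, method, and parameter names here are assumptions): drain the internal queue into the ExecutorService first, then wait up to a configurable timeout for termination.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical dispatcher: forward queued events to the executor before
// shutting it down, so in-flight sends can complete before the JVM exits.
class DispatcherSketch {
    private final Queue<Runnable> queue = new ConcurrentLinkedQueue<>();
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    void dispatch(Runnable sendTask) {
        queue.add(sendTask);
    }

    void close(long awaitTerminationTimeoutMillis) throws InterruptedException {
        // Forward everything still waiting in the queue to the ExecutorService.
        Runnable pending;
        while ((pending = queue.poll()) != null) {
            executor.execute(pending);
        }
        // Stop accepting new work and wait (bounded) for submitted tasks to finish.
        executor.shutdown();
        if (!executor.awaitTermination(awaitTerminationTimeoutMillis, TimeUnit.MILLISECONDS)) {
            executor.shutdownNow(); // give up once the configured timeout elapses
        }
    }
}
```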

@snorwin force-pushed the await-termination-timeout branch from 46a0db8 to 427bca6 on May 24, 2021 08:41
@guitarosexual commented May 27, 2021

Hello, @snorwin. I recently needed to solve the same problem you're addressing with this PR. I'm using this library within a Lambda function, and the problem is that the Lambda finishes its work without waiting for the Splunk appender to send the last batch to the Splunk server. I tried your last commit, but unfortunately it doesn't log all messages every time, and I can't understand why. I tried it with different amounts of load: with around 300 Lambda requests it usually looks fine, but with 3000 requests (there is a queue that feeds the Lambda) some log messages are missing. What helped, at a significant cost to performance, is using a synchronous HTTP request for the last flush (httpClient.newCall(requestBldr.build()).execute()). Your solution is more elegant, but I can't figure out what's missing.
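For reference, a minimal sketch of the synchronous last-flush workaround mentioned above, using OkHttp's blocking execute() instead of enqueue(); the endpoint, token, and payload below are placeholders for illustration, not values from the library.

```java
import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

import java.io.IOException;

class SyncFlushSketch {
    private static final MediaType JSON = MediaType.parse("application/json; charset=utf-8");

    // Send the final batch with a blocking call so the process cannot exit
    // before the request has actually been delivered.
    static void flushSynchronously(OkHttpClient httpClient, String hecUrl, String token, String batchJson)
            throws IOException {
        Request.Builder requestBldr = new Request.Builder()
                .url(hecUrl)
                .addHeader("Authorization", "Splunk " + token)
                .post(RequestBody.create(JSON, batchJson));

        // execute() blocks until the response arrives, unlike enqueue().
        try (Response response = httpClient.newCall(requestBldr.build()).execute()) {
            if (!response.isSuccessful()) {
                System.err.println("Last flush failed with HTTP " + response.code());
            }
        }
    }
}
```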

@snorwin (Contributor, Author) commented May 27, 2021

@guitarosexual is it possible to share your example as runnable code? I would be very happy to look at it and debug it.

@guitarosexual commented:

@snorwin sure. The only thing is that I removed everything related to the Logback appender, since I don't need it.
The main change is in HttpEventCollectorSender: https://github.com/guitarosexual/splunk-library-javalogging/blob/main/src/main/java/com/splunk/logging/HttpEventCollectorSender.java

@snorwin (Contributor, Author) commented May 27, 2021

Thanks @guitarosexual, could you also share the example code with the Lambdas so that I can reproduce it?

@guitarosexual commented May 28, 2021

Sorry @snorwin, I don't think I'm allowed to share the actual Lambda code. I can share just the log4j2.xml settings.

@guitarosexual commented:

@snorwin I created a sanitized version of the Lambda function, which can be used for debugging.

@snorwin (Contributor, Author) commented Jun 2, 2021

@guitarosexual I tried to reproduce it but was not able to. TBH, I'm not an expert on AWS Lambda functions either. But I still think this PR facilitates termination without losing log messages in general (i.e. beyond AWS Lambdas).

Maybe @zenmoto or @shakeelmohamed, can also take a look at it.

@guitarosexual commented:

Thank you, @snorwin

@MidnightRider13 commented:

I encountered the same issue described in #152 while running a batch application that terminates on completion.

Before finding this PR, I had made very similar changes with a similar result (i.e. they resolved the issue).

Like @guitarosexual mentioned, I was worried about the final http request to Splunk, so my changes also altered the behavior to ensure the last http request was synchronous rather than asynchronous. That said, I did not rigorously test whether omitting this additional change made an observable impact on log visibility.

@chringwer commented:

@snorwin We had the same issue for a short-lived batch job running in a kubernetes pod. Verified the patch on our end and it solves the issue. Would love to see this merged!

@snorwin (Contributor, Author) commented Sep 17, 2021

@fantavlik @ashah-splunk is there a chance that you could have a look at this PR?

@fantavlik (Contributor) left a review comment

Thanks so much for the contribution @snorwin - this looks good to me.

@fantavlik merged commit f681fb0 into splunk:main on Sep 22, 2021.
@vmalaviya-splunk mentioned this pull request on Sep 27, 2021.