Excessive log messages #518

Closed
lukehoban opened this issue Apr 3, 2019 · 5 comments · Fixed by #558
Comments

@lukehoban
Member

We've seen a few cases where a seemingly excessive number of log messages are generated by the kubernetes package.

In one recent case, the package appeared to be generating ~200 log messages per second.

I suspect some of the await logic can enter a tight feedback loop under certain conditions.

This appears to have been introduced around the time of the significant refactoring of the await logic (the first mention of this problem came soon after), though it may be unrelated.

I do not have a repro myself, but I believe users have hit this when running the https://github.com/pulumi/examples/tree/master/kubernetes-ts-exposed-deployment example (though possibly only when some error condition is triggered).

@lukehoban lukehoban added this to the 0.22 milestone Apr 3, 2019
@hausdorff
Contributor

We stream a "status" message every time we observe a Kubernetes watch event, which could happen arbitrarily many times (basically any time the API server changes something and reports it to us). This would be a consequence of switching away from the polling strategy we previously had.

Even if this isn't the problem, we should probably throttle event reporting and report in batches every second or two.
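
A minimal sketch of that throttling idea in Go (illustrative only, not the provider's actual implementation; the function and message names are made up): collect pending watch-event status messages and report at most once per interval, keeping only the most recent message as the summary line.

```go
package main

import (
	"fmt"
	"time"
)

// batchStatus drains watch-event status messages from a channel and
// reports them at most once per tick, instead of once per event.
func batchStatus(events <-chan string, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	var pending []string
	for {
		select {
		case msg, ok := <-events:
			if !ok {
				// Flush whatever is left when the watch ends.
				if len(pending) > 0 {
					fmt.Printf("[%d events since last report] %s\n", len(pending), pending[len(pending)-1])
				}
				return
			}
			pending = append(pending, msg)
		case <-ticker.C:
			if len(pending) == 0 {
				continue
			}
			// One line per interval, summarizing everything seen since
			// the previous report.
			fmt.Printf("[%d events since last report] %s\n", len(pending), pending[len(pending)-1])
			pending = nil
		}
	}
}

func main() {
	events := make(chan string)
	go func() {
		// Simulate a burst of near-identical watch events.
		for i := 0; i < 10; i++ {
			events <- "waiting for deployment rollout to complete"
			time.Sleep(200 * time.Millisecond)
		}
		close(events)
	}()
	batchStatus(events, time.Second)
}
```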

@lukehoban
Member Author

Even if this isn't the problem, we should probably throttle event reporting and report in batches every second or two.

Indeed, though we also shouldn't spit out 12,000 lines of messages per minute in total :-).

@lukehoban lukehoban modified the milestones: 0.22, 0.23 Apr 20, 2019
@lblackstone
Member

@lukehoban @hausdorff Is batching important, or just preventing duplicate messages from being logged to the service? I've got a WIP that sends batches once per second, but the bigger issue here was that most of the messages were duplicates triggered by k8s events.

@lukehoban
Member Author

Is batching important, or just preventing duplicate messages from being logged to the service?

I'm not sure batching is critical (if it ends up with the same number of total lines of output). Re-sending the same status message as the last status message sent is (I believe) the core problem.

@hausdorff
Contributor

I actually think we should not batch messages. I think we should just de-dup them.
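
A minimal de-dup sketch along those lines in Go (illustrative only; this is not the change that landed in #558, and the type name is made up): remember the last status message reported and drop any incoming message that is identical to it.

```go
package main

import "fmt"

// dedupLogger forwards a status message only when it differs from the
// previous one, so repeated identical watch-event updates produce a
// single line of output.
type dedupLogger struct {
	last string
}

func (d *dedupLogger) Report(msg string) {
	if msg == d.last {
		return
	}
	d.last = msg
	fmt.Println(msg)
}

func main() {
	log := &dedupLogger{}
	for _, msg := range []string{
		"waiting for app ReplicaSet to be marked available",
		"waiting for app ReplicaSet to be marked available", // duplicate, dropped
		"deployment successfully rolled out",
	} {
		log.Report(msg)
	}
}
```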

@infin8x infin8x added the p1 A bug severe enough to be the next item assigned to an engineer label Jul 10, 2021