[WIP] monitor container termination status info when getting pod info #22266
Conversation
```ruby
ch[:container_restarts] = pod.status.containerStatuses.sum { |cs| cs.restartCount.to_i }

ch[:last_state_terminated] = false
ch[:terminations] = []
pod.status.containerStatuses.each do |cs|
```
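As a rough sketch of what the loop above might collect, here is the extraction logic run against an `OpenStruct` stand-in for a kubeclient pod object. The stand-in data and the `:name`/`:exit_code`/`:reason` hash keys are illustrative assumptions, not code from this PR; only the `containerStatuses` field names follow the Kubernetes API schema.

```ruby
require 'ostruct'

# Stand-in for a kubeclient pod with one container that was OOM-killed.
# In the real worker manager this object comes from the Kubernetes API.
terminated = OpenStruct.new(exitCode: 137, reason: 'OOMKilled', finishedAt: '2023-01-01T00:00:00Z')
cs = OpenStruct.new(
  name:        'manageiq-worker',
  restartCount: 3,
  lastState:   OpenStruct.new(terminated: terminated)
)
pod = OpenStruct.new(status: OpenStruct.new(containerStatuses: [cs]))

ch = {}
ch[:container_restarts] = pod.status.containerStatuses.sum { |c| c.restartCount.to_i }
ch[:last_state_terminated] = false
ch[:terminations] = []
pod.status.containerStatuses.each do |c|
  term = c.lastState&.terminated
  next unless term

  # Record each container's last termination; the hash shape is hypothetical.
  ch[:last_state_terminated] = true
  ch[:terminations] << {:name => c.name, :exit_code => term.exitCode, :reason => term.reason}
end

pp ch
```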
The alternative way of doing this is figuring out what termination events look like and monitoring just those events. See lines 178-195 above.
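If the event-watching route were taken, the filtering could start as a small predicate like this. This is a sketch only: the set of event reasons, the `record_termination` helper named in the comment, and the kubeclient watch call are assumptions that would need checking against real cluster events.

```ruby
require 'ostruct'

# Kubelet event reasons commonly seen around container termination.
# Which reasons are actually worth monitoring is an assumption to verify.
TERMINATION_REASONS = %w[Killing BackOff OOMKilling].freeze

def termination_event?(event)
  event.involvedObject&.kind == 'Pod' && TERMINATION_REASONS.include?(event.reason)
end

# With the kubeclient gem this predicate could drive a watch loop, roughly:
#   client.watch_events(namespace: 'manageiq').each do |notice|
#     record_termination(notice.object) if termination_event?(notice.object)
#   end

sample = OpenStruct.new(reason:         'Killing',
                        involvedObject: OpenStruct.new(kind: 'Pod', name: '1-generic-abcde'))
puts termination_event?(sample) # => true
```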
```ruby
ch[:container_restarts] = pod.status.containerStatuses.sum { |cs| cs.restartCount.to_i }

ch[:last_state_terminated] = false
ch[:terminations] = []
```
I chose to clear the array each time through. The server code that checks the `current_pods` concurrent hash can possibly miss terminations or deal with things it already processed... so we might want to just monitor for termination events as mentioned 👇. I'm not sure how much work that is.
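One way to keep the server side from re-processing terminations it has already seen would be to track the newest timestamp handled per container. The `new_termination?` helper and its bookkeeping below are a hypothetical sketch, not part of this PR:

```ruby
require 'time'

# Track the newest termination finishedAt timestamp processed per container,
# so repeated scans of the same containerStatuses are idempotent.
@seen_terminations = {}

def new_termination?(container_name, finished_at)
  last = @seen_terminations[container_name]
  return false if last && last >= finished_at

  @seen_terminations[container_name] = finished_at
  true
end

t1 = Time.parse('2023-01-01T00:00:00Z')
puts new_termination?('worker', t1) # => true  (first sighting)
puts new_termination?('worker', t1) # => false (already processed)
```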
really looking forward to this... thanks @jrafanie

I also found this, but it assumes you can install things in the cluster, so at best we could suggest it: https://medium.com/@andrew.kaczynski/kubernetes-events-how-to-keep-historical-data-of-your-cluster-835d685cc45
This pull request has been automatically marked as stale because it has not been updated for at least 3 months. If these changes are still valid, please remove the "stale" label. Thank you for all your contributions! More information about the ManageIQ triage process can be found in the triage process documentation.
This pull request has been automatically closed because it has not been updated for at least 3 months. Feel free to reopen this pull request if these changes are still valid. Thank you for all your contributions! More information about the ManageIQ triage process can be found in the triage process documentation.
Checked commit jrafanie@0aaf05b with ruby 2.6.10, rubocop 1.28.2, haml-lint 0.35.0, and yamllint
app/models/miq_server/worker_management/kubernetes.rb
WIP...

This is what it looks like:

```ruby
pp MiqServer.my_server.worker_manager.current_pods
```