add some events to pod #797
Conversation
Shouldn't we delegate this to the providers (mainly the container-related ones)? I think most useful pod events will need to be extracted or written by the provider, perhaps through a notifier similar to the one used for pod updates. Statuses already exist, so I feel we'd just be adding events with the same redundant information. But we could still keep the high-level ones like initializing, failed to create, failed to delete, etc. @cpuguy83, ideas?
I think abstracting an interface for providers to send events may be more flexible, wdyt?
I don't think we need anything special for providers to send events. You give the provider what it needs to operate; the core runtime does not care about this. If there is some way we can make sending events simpler for all providers, it could definitely be added as a helper that library consumers can choose to use or not.
@cpuguy83 but wouldn't it still be better to abstract this from the VK perspective? It's like calling GetPods on the provided resource manager: we should have something similar where I can pass a list of aggregated k8s events + namespace + podName, and it writes new ones or overrides existing ones through its client to the API server. Some async handler, like what we do for updating pod or node status. In ACI, we'd like to eventually write all container-runtime-related events back to the k8s cluster, and in my opinion we should avoid having providers create their own clients, right? It's exactly the kind of helper you described.
So yes, I'm fine with some helper, but it should not be implicit, as it is provider-specific. In general I think there are many helpers that could be built which providers could take advantage of.
So it's @cpuguy83's call here. From my point of view, I'd be against any container-specific events, as these relate to what the actual provider/service uses to initialize/create the pod sandbox and differ from one runtime to another. But if you want to add some higher-level pod events, like creating in provider, failed to create, pod successfully initialized, failed to sync provider state, deleting from provider, failed to delete, etc., I think that would still be helpful and would make sense regardless of which provider uses the library.
Totally agree, makes sense :) |
Thanks, I will update this PR.
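The higher-level pod events discussed above could be exposed through a small, provider-agnostic helper. The sketch below uses hypothetical names (the event reasons, the `EventRecorder` interface, and `logRecorder` are all illustrative, not virtual-kubelet's actual API): providers get handed a narrow recorder interface, so they never have to construct their own API-server clients.

```go
package main

import "fmt"

// Hypothetical high-level pod event reasons, independent of any
// particular provider's container runtime.
const (
	ReasonProviderCreating     = "ProviderCreating"
	ReasonProviderCreateFailed = "ProviderCreateFailed"
	ReasonPodInitialized       = "PodInitialized"
	ReasonProviderSyncFailed   = "ProviderSyncFailed"
	ReasonProviderDeleting     = "ProviderDeleting"
	ReasonProviderDeleteFailed = "ProviderDeleteFailed"
)

// EventRecorder is the narrow interface a provider would receive;
// the core runtime owns the client that actually writes to the API server.
type EventRecorder interface {
	Event(namespace, podName, reason, message string)
}

// logRecorder is a stand-in implementation that just collects events,
// in place of one backed by a real Kubernetes client.
type logRecorder struct{ events []string }

func (r *logRecorder) Event(namespace, podName, reason, message string) {
	r.events = append(r.events, fmt.Sprintf("%s/%s %s: %s", namespace, podName, reason, message))
}

func main() {
	rec := &logRecorder{}
	// A provider reporting a failure without needing its own API client:
	rec.Event("default", "mypod", ReasonProviderCreateFailed, "image not found")
	fmt.Println(rec.events[0]) // default/mypod ProviderCreateFailed: image not found
}
```

The point of the design is the one made in the comments above: the provider only decides *what* to report, while the core runtime (or a helper it supplies) decides *how* events reach the API server.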
ping @ibabou
Looks good to me. Thanks @cwdsuzhou !
ping @cpuguy83
I think this should be behind a config option. The risk I see here is that if people don't have their cluster ready for lots of events, it could easily blow up their clusters. @cpuguy83, what do you think about putting it behind a config?
I do not think it would blow up clusters, since the events are very few and we only send them when a create/update/delete fails in the provider.
LGTM
I think it's fine to send these events. The event sink should decide if it can handle the event rather than VK making the decision to not send it at all ever.
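One way to read "the event sink should decide" is a sink that sheds load itself, e.g. dropping events when its buffer is full instead of blocking the runtime. This is only a sketch with assumed names (`boundedSink`, `TrySend` are illustrative, not an API from this PR):

```go
package main

import "fmt"

// boundedSink accepts events until its buffer is full, then drops them.
// This keeps the back-pressure decision in the sink, not in the caller.
type boundedSink struct {
	ch chan string
}

func newBoundedSink(capacity int) *boundedSink {
	return &boundedSink{ch: make(chan string, capacity)}
}

// TrySend reports whether the event was accepted or dropped.
func (s *boundedSink) TrySend(event string) bool {
	select {
	case s.ch <- event:
		return true
	default:
		return false // sink is saturated; drop rather than block the runtime
	}
}

func main() {
	sink := newBoundedSink(1)
	fmt.Println(sink.TrySend("ProviderCreateFailed")) // true: buffer has room
	fmt.Println(sink.TrySend("ProviderDeleteFailed")) // false: buffer full, dropped
}
```

For reference, client-go's real event machinery takes a similar stance: the broadcaster and sink handle rate limiting and aggregation, so callers just record events unconditionally.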
@cpuguy83 thanks. Is this PR ready to be merged?
I'd like to hear back from @sargun since he had some concerns.
ping @sargun
Looks good to me. I think these few events aren't problematic. If we start adding a lot more events, I think we should put it behind a flag. |
The current version of VK cannot show events when we use `kubectl describe pod xxx`, even though the pod has failed. For example, a pod failed because its image was not found: `kubectl get pod xxx` shows the failed status, but `kubectl describe pod xxx` shows no events explaining the failure.

Fix #796