
Notifications API #76597

Closed
grzesiek opened this Issue Apr 15, 2019 · 7 comments

grzesiek commented Apr 15, 2019

What would you like to be added:

High-level Notifications API, where a notification is a Kubernetes resource:

apiVersion: v1alpha1
kind: Notification
metadata:
  name: service-notification
  labels:
    app: my-app-example
spec:
  watch: Deployment
  selector:
    matchLabels:
      app: my-deployment
  rules:
    - [TBD ... similar to admission webhooks]
  notify:
    webhook: https://my-service.example/k8s
    secret: my-opaque-secret
    payload: v1alpha1

This is a little different from admission webhooks, because there is no need to mutate or validate an operation on a resource. Also, the notification is triggered after the resource has been persisted in etcd, rather than the webhook being executed before that happens.

This is similar to Docker Distribution notifications ➡️ https://docs.docker.com/registry/notifications/

Why is this needed:

We would like to integrate a few of our services with Kubernetes.

Currently we are using API polling to populate the service with a cluster state. We are considering creating an in-cluster controller along with a few other options, but having first-class, high-level notifications in Kubernetes might be an interesting, generic solution for this problem.

I might be missing something and perhaps we can already achieve something like this with Kubernetes; please let me know if something like this already exists.

If this feature seems useful to the wider community, I'm willing to spend some time working on it.

Thanks for the input in advance :)

grzesiek (Author) commented Apr 15, 2019

@kubernetes/sig-api-machinery-feature-requests

k8s-ci-robot (Contributor) commented Apr 15, 2019

@grzesiek: Reiterating the mentions to trigger a notification:
@kubernetes/sig-api-machinery-feature-requests

In response to this:

@kubernetes/sig-api-machinery-feature-requests

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

lavalamp (Member) commented Apr 15, 2019

I don't think we need to build this into apiserver, I think it can be built as a CRD + external controller. See e.g. the garbage collector and resource quota controllers as examples of controllers that watch everything.

If you want to see literally every state transition ("edge"), then an external controller won't guarantee that (although in practice, it will be very close). However, we recommend that all controller code should work based on the object's state, not the transitions. See: https://speakerdeck.com/thockin/edge-vs-level-triggered-logic
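
For illustration, a minimal sketch of what such an external controller could look like, using client-go shared informers. This is a sketch under assumptions, not a definitive design: the webhook URL just reuses the placeholder from the opening example, and a real controller would read its targets from the Notification CRD rather than hard-coding them.

package main

import (
	"bytes"
	"encoding/json"
	"net/http"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig; in-cluster config works the same way.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// A shared informer keeps a local cache of Deployments in sync via list+watch,
	// so the controller never has to poll the API server.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	factory.Apps().V1().Deployments().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		// Level-triggered: every handler forwards the object's current identity,
		// not an old/new diff; the receiver looks up the latest state itself.
		AddFunc:    notify,
		UpdateFunc: func(_, newObj interface{}) { notify(newObj) },
		DeleteFunc: notify,
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // run until killed
}

// notify posts a small JSON payload to a placeholder webhook URL.
func notify(obj interface{}) {
	d, ok := obj.(*appsv1.Deployment)
	if !ok {
		return // e.g. a DeletedFinalStateUnknown tombstone; ignored in this sketch
	}
	body, _ := json.Marshal(map[string]string{
		"kind":      "Deployment",
		"namespace": d.Namespace,
		"name":      d.Name,
	})
	if resp, err := http.Post("https://my-service.example/k8s", "application/json", bytes.NewReader(body)); err == nil {
		resp.Body.Close()
	}
}

A production version would read the target URL, watched kind and selector from the Notification objects themselves, sign requests with the referenced secret, and push work through a rate-limited workqueue instead of calling the webhook inline.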

Did you look at the audit webhooks? They might do what you want. They're gaining additional features in the nearish future.

grzesiek (Author) commented Apr 16, 2019

Thanks for the reply @lavalamp!

I don't think we need to build this into apiserver, I think it can be built as a CRD + external controller. See e.g. the garbage collector and resource quota controllers as examples of controllers that watch everything.

What do you mean by "an external controller"? Do you suggest implementing this feature outside of Kubernetes, as a separate controller that one can deploy onto their cluster? This is something we are considering too, but I believe this could be a very generic mechanism that would solve most use cases and be useful to other users as well. We can implement an external controller, but that makes it a little more difficult for others to use this feature (it would still be reusable, though).

If you want to see literally every state transition ("edge"), then an external controller won't guarantee that (although in practice, it will be very close). However, we recommend that all controller code should work based on the object's state, not the transitions. See: https://speakerdeck.com/thockin/edge-vs-level-triggered-logic

Thanks for sharing these slides! I do agree with the principles described there; they are great in theory, but in practice polling Kubernetes APIs is not always the best solution. What about designing this feature so that the event (emitted when the "edge" happens) contains only enough detail to let you query the cluster state the usual way? It would then be merely a trigger that tells you when to check the cluster state, without providing any information about the transition itself. Is that in line with the principles described in the slides?
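
Concretely (and purely hypothetically; the field names below are illustrative and not part of any existing API), such a trigger-only payload might carry no more than this:

// Hypothetical trigger-only payload: enough to identify the object and ask the
// API server for its current state, but no before/after transition data.
type Notification struct {
	Kind            string `json:"kind"` // e.g. "Deployment"
	Namespace       string `json:"namespace"`
	Name            string `json:"name"`
	ResourceVersion string `json:"resourceVersion"` // lets the receiver skip triggers it has already processed
}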

Did you look at the audit webhooks? They might do what you want. They're gaining additional features in the nearish future.

Thank you for pointing me to audit webhooks! Just to double-check that we are on the same page, do you mean using AuditSink objects? It seems like an awesome idea, but while similar it feels a little orthogonal to Notification objects: you need to define audit rules / policies separately from the AuditSink itself, and some policies might already be preconfigured in some environments.

@lavalamp do you think that implementing the Notification feature doesn't make sense now that we have dynamic audit logging implemented?

liggitt (Member) commented Apr 16, 2019

Currently we are using API polling to populate the service with a cluster state

Is there a reason you are not using the watch API?

lavalamp (Member) commented Apr 16, 2019

Do you suggest implementing this feature outside of Kubernetes, as a separate controller that one can deploy onto their cluster?

Yes. If you want webhook notification of change events, build a controller that offers that. The controller would be reusable if you built it right (i.e., as you proposed in the opening comment).

What about designing this feature so that the event (emitted when the "edge" happens) contains only enough detail to let you query the cluster state the usual way? It would then be merely a trigger that tells you when to check the cluster state, without providing any information about the transition itself. Is that in line with the principles described in the slides?

That would be very resource-inefficient compared to just using watch. Are you sure you don't want to just use the watch API? (My first response assumed you'd considered the watch API and preferred a different interaction model, but maybe I shouldn't have assumed that!)
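
For readers following the thread, the watch API mentioned here looks roughly like this with client-go. This is a sketch; it assumes a client-go release contemporary with this discussion, where Watch does not yet take a context.Context as its first argument.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Open a watch on Deployments in the "default" namespace; the server streams
	// ADDED/MODIFIED/DELETED events instead of the client polling for state.
	w, err := clientset.AppsV1().Deployments("default").Watch(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for event := range w.ResultChan() {
		if d, ok := event.Object.(*appsv1.Deployment); ok {
			fmt.Printf("%s %s/%s\n", event.Type, d.Namespace, d.Name)
		}
	}
}

In practice most controllers wrap this in an informer (as sketched earlier in the thread) so that dropped watch connections are re-established and a local cache is maintained automatically.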

Thank you for pointing me to audit webhooks! Just to double-check that we are on the same page, do you mean using AuditSink objects?

Yes. You'd have to also offer a corresponding audit policy.

But the audit system is meant for auditing; it's not intended to drive automation (e.g., notifications can be batched). Using it for that would be a bit of a hack.

@lavalamp do you think that implementing the Notification feature doesn't make sense now that we have dynamic audit logging implemented?

I think it doesn't make sense because it can be implemented in an external controller with no apiserver changes. I think you'd actually find lots of people wanting a webhook-style notification rather than a watch-style notification, so it's likely a useful thing to build. It just doesn't need to be built in, and therefore shouldn't be; it's much better to put this sort of load in a component that's external to apiserver where possible.

grzesiek (Author) commented Apr 17, 2019

The watch API is indeed not something that makes sense in our case; I'm aware that it exists, but thanks for mentioning it.

I agree that an external controller makes sense here. Because we are using Kubernetes with Knative, I'm going to check KubernetesEventSource before taking a stab at implementing the controller + Notification CRD. Implementing that could be a fun ride, though!

Thanks for your input, I'm going to close this issue for the time being 🙇 🙏

grzesiek closed this Apr 17, 2019
