
Define scaling annotations for eventing #2584

Closed
aslom opened this issue Feb 14, 2020 · 18 comments
Labels
kind/feature-request lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.
@aslom
Member

aslom commented Feb 14, 2020

Problem
Knative Serving provides autoscaling annotations, and we should consider using the same annotations in Eventing so that Knative users have a consistent serverless scaling configuration.

Persona:
Which persona is this feature for?
Event consumer
System Integrator
Contributors

Exit Criteria
Eventing scaling annotations are defined in docs/specs

Additional context (optional)

We have ongoing discussion and experimenting with using KEDA for scaling event sources. Annotations may be the most natural and consistent way to do it: #2153
/cc @Abd4llA @lionelvillard @nachocano @matzew @n3wscott @zroubalik

@aslom
Member Author

aslom commented Feb 14, 2020

As a first proposal please consider:

annotations:
    autoscaling.knative.dev/minScale: "0"
    autoscaling.knative.dev/maxScale: "10"
    autoscaling.knative.dev/class: keda.autoscaling.knative.dev
    keda.autoscaling.knative.dev/pollingInterval: "2"
    keda.autoscaling.knative.dev/cooldownPeriod: "15"

It reuses autoscaling.knative.dev/minScale and autoscaling.knative.dev/maxScale from Knative Serving, and uses the class extension mechanism to add optional KEDA-specific parameters.
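
For concreteness, here is a sketch of how these annotations might appear on a source object; the KafkaSource kind and metadata are used only for illustration and are not part of the proposal itself:

    apiVersion: sources.knative.dev/v1alpha1
    kind: KafkaSource
    metadata:
      name: kafka-source          # hypothetical example name
      annotations:
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "10"
        autoscaling.knative.dev/class: keda.autoscaling.knative.dev
        keda.autoscaling.knative.dev/pollingInterval: "2"
        keda.autoscaling.knative.dev/cooldownPeriod: "15"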

@aslom
Member Author

aslom commented Feb 14, 2020

I think having consistent autoscaling annotations across Serving and Eventing is a big win for the serverless developer experience in Knative. Within Eventing, scaling configuration should work for all data-plane objects, starting with sources, then channels and brokers.

@nachocano
Contributor

nachocano commented Feb 14, 2020 via email

@nachocano
Contributor

Maybe couple this to sources?

    sources.knative.dev/minScale: "0"
    sources.knative.dev/maxScale: "10"
    sources.knative.dev/autoscalerClass: keda.sources.knative.dev
    keda.sources.knative.dev/pollingInterval: "2"
    keda.sources.knative.dev/cooldownPeriod: "15"

Actually, I may take this back... We might need something similar for channels, for example, so coupling it with sources might not be the right thing to do...

@n3wscott
Contributor

I think there are some levels of control that you want to have the knobs (like our Queue sources) and others where the controller knows it is a bad idea to scale in some way (like strict quotas on an API).

Have a think about what that might look like for the resources in question, and ask whether it makes sense for each of them to control KEDA (or whatever) in this way. I suspect the higher-level objects will not want to expose all the knobs.

The more knobs, the more documentation and more test permutations.

@aslom
Member Author

aslom commented Feb 26, 2020

Based on two experimental implementations (Kafka and @nachocano's gcp), I have documented the annotations used in PR #2655.

@matzew
Member

matzew commented Feb 27, 2020

I am a bit worried that here in Eventing we "just" leverage KEDA for autoscaling, (re)using the same annotations that the Serving autoscaler uses.

I'd rather see some way of integrating the two, and then define APIs here for usage.

I fear we are just adding KEDA here while leaving out our own Knative autoscaler from Serving.

Perhaps there is interest in having some deeper integration between these two?

Perhaps @markusthoemmes and/or @mattmoor have a comment here

@mattmoor
Member

cc @vagababov too

I think I'm missing some context here. Who wants to fill me in so I can help? 🤓

@aslom
Member Author

aslom commented Mar 3, 2020

@mattmoor this is part of making eventing sources scalable; you can see the whole history and discussion in #2153

@aslom
Member Author

aslom commented Mar 3, 2020

@n3wscott how would you control what knobs/parameters are allowed for autoscaling? Do you have example(s) for it?

@aslom
Member Author

aslom commented Mar 4, 2020

Currently it is not clear that push-based event sources can use the Knative Serving autoscaling annotations without any changes. I am going to modify PR #2655 to reflect this.
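
For comparison, a push-based source that delivers to a Knative Service sets the Serving annotations on the Service's revision template, not on the source object itself. A minimal sketch (the service name is hypothetical):

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: event-display        # hypothetical consumer service
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/minScale: "1"
            autoscaling.knative.dev/maxScale: "10"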

@aslom
Member Author

aslom commented Mar 4, 2020

Another unaddressed problem is providing targeted autoscaling for more complex components such as channels and brokers. For example, I may want the ingress to never scale to zero (when running as a ksvc) but allow the dispatcher to scale to zero (since scaling deployment replicas from zero may be fast enough with KEDA). In that situation there should be different minScale annotations for the ingress ksvc and for the delivery part that uses KEDA. One idea:

  ingress.channels.knative.dev/minScale: "1"
  outgress.channels.knative.dev/minScale: "0"

/cc @nachocano @yolocs @lionelvillard @n3wscott @grantr what do you think about it ^^
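
As a hedged sketch, assuming the per-component prefixes above were adopted, a channel that keeps its ingress warm while letting its dispatcher scale to zero might look like this (the KafkaChannel kind and name are illustrative only):

    apiVersion: messaging.knative.dev/v1alpha1
    kind: KafkaChannel
    metadata:
      name: my-channel           # hypothetical example name
      annotations:
        ingress.channels.knative.dev/minScale: "1"   # ingress ksvc stays warm
        outgress.channels.knative.dev/minScale: "0"  # dispatcher may scale to zero via KEDA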

@aslom
Member Author

aslom commented Mar 4, 2020

Updated PR with clarifications about different types of autoscaling and different Knative Eventing objects: https://github.com/knative/eventing/pull/2655/files#diff-9dab640b44f6688997a48fd33f01b501

@aslom
Member Author

aslom commented Mar 5, 2020

Based on the discussion it seems that eventing autoscaling should be easy to configure per cluster or per domain; added a short description and a TODO to https://github.com/knative/eventing/pull/2655/files#diff-9dab640b44f6688997a48fd33f01b501

@aslom
Member Author

aslom commented Mar 31, 2020

Public google doc to discuss and define eventing autoscaling goals:
https://docs.google.com/document/d/1usNmsuHBWzVaL5GGC873iGVrkKXGbc6t7bLHJL38Cyg/edit?usp=sharing

@github-actions

This issue is stale because it has been open for 90 days with no
activity. It will automatically close after 30 more days of
inactivity. Reopen the issue with /reopen. Mark the issue as
fresh by adding the comment /remove-lifecycle stale.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 25, 2020
@aslom
Member Author

aslom commented Dec 6, 2020

/reopen

@github-actions github-actions bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 23, 2021
@github-actions

This issue is stale because it has been open for 90 days with no
activity. It will automatically close after 30 more days of
inactivity. Reopen the issue with /reopen. Mark the issue as
fresh by adding the comment /remove-lifecycle stale.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2021