Provide CPU/Memory scaler #1183

Closed
tomkerkhove opened this issue Sep 22, 2020 · 19 comments · Fixed by #1215

@tomkerkhove
Member

Provide a CPU/Memory scaler which acts as an abstraction on top of the HPA functionality.

Today, you can already scale on CPU/Memory (docs) through horizontalPodAutoscalerConfig.resourceMetrics, but it requires you to have at least one trigger defined.
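
For context, a rough sketch of what that looks like today (the resourceMetrics entries follow the standard HPA resource-metric shape, so treat the exact field names below as illustrative rather than a verbatim schema):

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment
  advanced:
    horizontalPodAutoscalerConfig:
      resourceMetrics:                 # CPU/Memory handled by the underlying HPA
      - name: cpu
        target:
          type: Utilization
          averageUtilization: 60
  triggers:                            # at least one regular trigger is still required
  - type: cron                         # any scaler works here; cron is just an example
    metadata:
      timezone: Europe/Brussels
      start: 0 8 * * *
      end: 0 18 * * *
      desiredReplicas: "2"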

This is not ideal, given that KEDA users should be able to rely fully on KEDA for autoscaling and not need to add HPAs as well. If we have dedicated scalers for these (which don't go through the metrics server), users get a consistent experience: they only work with ScaledObjects and don't need to know the Kubernetes internals.

Do you need this as well? Don't hesitate to give a 👍

💡 We know this is perfectly possible today, but we want to provide a streamlined experience for those who are not Kubernetes experts.

@tomkerkhove tomkerkhove added help wanted Looking for support from community needs-discussion scaler feature-request All issues for new features that have not been committed to Hacktoberfest labels Sep 22, 2020
@silenceper
Contributor

silenceper commented Sep 23, 2020

Does this mean that the metrics server is no longer needed?

@tomkerkhove
Member Author

Yes, it's just an abstraction on top of the HPA; but as a user I don't have to care about that, as KEDA handles it for me.

@silenceper
Contributor

silenceper commented Oct 2, 2020

I am interested in completing this feature.

I think there are two ways to achieve this:

  1. Make the triggers field optional, and use the settings in horizontalPodAutoscalerConfig to generate the HPA object.
  2. Add a new scaler for the built-in cpu/memory resources, replacing horizontalPodAutoscalerConfig.resourceMetrics (and eventually removing that field).

@tomconte @zroubalik Which approach is more appropriate?

I lean towards the second option: provide a new scaler for cpu/memory instead of the horizontalPodAutoscalerConfig.resourceMetrics field.

@tomkerkhove
Member Author

Our idea is to use a scaler definition which captures the CPU/Memory requirements and just use that to populate horizontalPodAutoscalerConfig.

Reasoning for that is:

  1. We shouldn't reinvent the wheel and just use HPA
  2. Provide a trigger for a consistent user experience
  3. We enforce at least one trigger (which makes sense)

But I can see the confusion if the old setting is still there. However, if we remove it, then we should do so for 2.0. Thoughts @zroubalik?

@silenceper
Contributor

My idea is to add the following configuration, exposing resource metrics as a new trigger type:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment
  advanced:                                          # Optional. Section to specify advanced options
    restoreToOriginalReplicaCount: true/false        # Optional. Default: false
    horizontalPodAutoscalerConfig:                   # Optional. Section to specify HPA related options
      behavior:                                      # Optional. Use to modify HPA's scaling behavior
        scaleDown:
          stabilizationWindowSeconds: 300
          policies:
          - type: Percent
            value: 100
            periodSeconds: 15
  triggers:
  - type: resource                           # provides CPU/Memory scaling
    metadata:
      name: cpu/memory
      type: value / utilization / averagevalue
      value: 60                                  # Optional
      averageValue: 40                           # Optional
      averageUtilization: 50                     # Optional

Of course, we won't reinvent the wheel. The resource scaler would only be responsible for generating an HPA resource metric (in the implementation this scaler may need special handling, since it should produce a Resource-type metric in the HPA instead of the type: External metric that other scalers generate).
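
Roughly, the difference in the generated HPA would look like this (autoscaling/v2beta2 shapes; the external metric name is made up for illustration):

metrics:
- type: Resource                  # what a cpu/memory trigger would produce
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 60
- type: External                  # what other scalers produce today
  external:
    metric:
      name: s0-my-scaler-metric   # hypothetical metric name
    target:
      type: AverageValue
      averageValue: "5"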

@tomkerkhove
Member Author

I personally would introduce cpu & memory trigger types instead of resource, which is less user-friendly.

@silenceper
Contributor

Like this:

...
  triggers:
  - type: cpu/memory                          # cpu or memory scaler
    metadata:
      type: value / utilization / averagevalue
      value: 60                                  # Optional
      averageValue: 40                           # Optional
      averageUtilization: 50                     # Optional

If this design is OK, I plan to implement it.

PTAL @tomconte @zroubalik

@tomkerkhove
Member Author

What if it's just this:

  triggers:
  - type: cpu/memory                          # cpu or memory scaler
    metadata:
      type: value/utilization/averagevalue # or 'aggregation'
      value: 60

@silenceper
Contributor

silenceper commented Oct 2, 2020

Yes, this will be more user-friendly.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment
  advanced:                                          # Optional. Section to specify advanced options
    restoreToOriginalReplicaCount: true/false        # Optional. Default: false
    horizontalPodAutoscalerConfig:                   # Optional. Section to specify HPA related options
      behavior:                                      # Optional. Use to modify HPA's scaling behavior
        scaleDown:
          stabilizationWindowSeconds: 300
          policies:
          - type: Percent
            value: 100
            periodSeconds: 15
  triggers:
  - type: cpu/memory                           # cpu or memory scaler
    metadata:
      type: value / utilization / averagevalue
      value: 60

Any other suggestions?

@tomkerkhove
Member Author

LGTM, thanks!

@silenceper
Contributor

silenceper commented Oct 2, 2020

@tomkerkhove

In addition, this scaler is not applicable to ScaledJob.

If we wanted to support it there, it would be more complicated, and the logic would duplicate the HPA part, reinventing the wheel.

@silenceper
Contributor

/assign me

@ZviMints

Is the percentage based on the container's actual CPU usage or on the Kubernetes CPU requests?

@tomkerkhove
Member Author

It does not support per-container usage yet (see #3146); it only works at the pod level.
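
For comparison, upstream HPA expresses per-container scaling via the ContainerResource metric type; a sketch of that API (not something KEDA exposes at this point, see #3146):

metrics:
- type: ContainerResource
  containerResource:
    name: cpu
    container: my-app             # hypothetical container name
    target:
      type: Utilization
      averageUtilization: 60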

@joeynaor

joeynaor commented Jun 27, 2022

@tomkerkhove Do I need the K8s metrics server in order to use this, or does KEDA collect the CPU/memory usage of each pod itself?

EDIT: #1644

@zroubalik
Member

@joeynaor yes, you need that.

@gabricc

gabricc commented Mar 28, 2023

Hey guys! @silenceper @tomkerkhove I got this error when trying to use this scaler:

KEDAScalerFailed
no scaler found for type: cpu/memory

Did I miss anything? This is my ScaledObject file:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  annotations:
  labels:
    scaledobject.keda.sh/name: deployment-name
  name: deployment-name
  namespace: app
spec:
  cooldownPeriod: 600
  maxReplicaCount: 2
  minReplicaCount: 1
  scaleTargetRef:
    name: deployment-name
  triggers:
  - type: cpu/memory
    metricType: Utilization
    metadata:
      value: "95"

Thanks!!

@silenceper
Contributor

  triggers:
  - type: cpu/memory # only `type: cpu` or `type: memory` is supported
    metricType: Utilization

@zroubalik
Member

It's either `cpu` or `memory`.
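
For reference, the triggers section from the example above would look like this once corrected (the second entry is only needed if you also want to scale on memory):

  triggers:
  - type: cpu
    metricType: Utilization
    metadata:
      value: "95"
  - type: memory
    metricType: Utilization
    metadata:
      value: "95"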
