
Question. How to enable custom metrics. #1467

Closed
ghost opened this issue Jan 13, 2017 · 26 comments
Labels: flag-map-request, lifecycle/rotten

@ghost

ghost commented Jan 13, 2017

I am trying to use custom metrics to scale our cluster.
I came across this document, which describes how to achieve that.

https://github.com/mwielgus/kubernetes.github.io/blob/custom-metrics/docs/user-guide/horizontal-pod-autoscaling/index.md#support-for-custom-metrics

My question is how to start/edit the cluster with the ENABLE_CUSTOM_METRICS variable set to true.

@justinsb justinsb modified the milestone: 1.5.0 Jan 16, 2017
@justinsb
Member

This needs us to map a flag (--enable-custom-metrics, I believe). I'll mark it as such, then circle back and add the requested flags en masse.

@ghost
Author

ghost commented Jan 17, 2017

Thank you @justinsb.

justinsb added a commit to justinsb/kops that referenced this issue Jan 20, 2017
@ghost
Author

ghost commented Feb 7, 2017

Hey @justinsb, now with 1.5.1 released, how can one enable custom metrics? It seems there is no documentation whatsoever.

I tried to set enableCustomMetrics to true in the cluster configuration, but it gets erased; the same happens for the nodes instance group (ig).

Thanks!

@lamchakchan

@justinsb I also need to understand how to use this new flag. Is it something along the lines of kops create cluster --enableCustomMetrics?

@23doors

23doors commented May 23, 2017

Any news on that? Lots of things changed in 1.6 regarding custom metrics, but the docs on this topic are still fuzzy at best.

@kenden
Contributor

kenden commented Jul 13, 2017

This continues in #2652

@chrislovecnm
Contributor

This is an API value that needs to be added to the kubelet section. The new YAML docs that were added should help. Here is the PR that added the flag.

db54ecf

This doc describes using and editing the YAML API of a cluster: https://github.com/kubernetes/kops/blob/master/docs/manifests_and_customizing_via_api.md
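
If it helps, the usual workflow for applying a change like this through the YAML API is roughly the following (a sketch only; the cluster name and state store below are placeholders):

# Sketch: open the cluster spec in an editor, add the kubelet value, then push it out.
export KOPS_STATE_STORE=s3://your-kops-state-store    # assumption: S3-backed state store
kops edit cluster your-cluster-name                   # add spec.kubelet.enableCustomMetrics: true
kops update cluster your-cluster-name --yes           # apply the configuration change
kops rolling-update cluster your-cluster-name --yes   # roll the nodes so the kubelet picks up the new flag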

@itskingori
Member

itskingori commented Jul 27, 2017

@kwinczek this is probably a late answer to your question but an answer nonetheless. You need something like this in your cluster spec ...

spec:
  kubelet:
    enableCustomMetrics: true
    resolvConf: "/etc/resolv.conf"

This was enabled/added in db54ecf. The reason you need to set resolvConf (even to "") is explained here, i.e.

NOTE: Where the corresponding configuration value can be empty,
fields can be set to empty in the spec, and an empty string will be
passed as the configuration value.

If you don't set it, you won't be able to save the change, as kops will complain of an invalid value (or something like that ... I don't quite remember exactly what the error was).

By, "Where the corresponding configuration value can be empty" ... they mean this part of the cluster component.

Update:

This is the error I was talking about earlier ...

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# Found fields that are not recognized
# ...
#     kubelet:
#       enableCustomMetrics: true
# -     resolvConf: null
#     kubernetesApiAccess:
#     - 0.0.0.0/0
# ...
#
#

@RahulMahale
Contributor

I am getting the same error when adding it to my cluster spec, in both ways.

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# Found fields that are not recognized
#     kubelet:
#       enableCustomMetrics: true
# +     resolvConf: /etc/resolv.conf

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# Found fields that are not recognized
# ...
#     kubelet:
#       enableCustomMetrics: true
# +     resolvConf: ""

@moos3

moos3 commented Oct 6, 2017

Any updates on this?

@RahulMahale
Contributor

@moos3 I got it working using

spec:
  kubelet:
    enableCustomMetrics: true
  kubeAPIServer:
    runtimeConfig:
      autoscaling/v2alpha1: "true"

and then

kops update cluster --yes
kops rolling-update cluster --yes --force

But I am not able to get autoscaling based on QPS working.
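
For reference, a rough sketch of the kind of HPA object this is aimed at under autoscaling/v2alpha1; the qps metric name, target value, and Deployment name are illustrative and assume something in the cluster is actually exporting that per-pod metric:

apiVersion: autoscaling/v2alpha1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app                   # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my-app                 # illustrative target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: qps            # hypothetical custom per-pod metric
      targetAverageValue: 100    # scale out when average QPS per pod exceeds 100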

@moos3

moos3 commented Oct 6, 2017

Awesome, trying now.

@chrislovecnm
Contributor

Re-opening.

Would someone mind documenting this once we figure it out?

@chrislovecnm chrislovecnm reopened this Oct 7, 2017
@itskingori
Member

@chrislovecnm I don't mind doing it ... I can give this a stab this week and report back.

@RahulMahale
Contributor

@chrislovecnm @itskingori I can document it. Can you point me to where to document that? I will create a PR. Thanks.

@chrislovecnm
Contributor

@RahulMahale
Contributor

Thanks @chrislovecnm, here we go: #3570

@itskingori
Member

@RahulMahale awesome! Keen to give it a spin.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 7, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 10, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@itskingori
Member

This is safe to close. I added horizontal_pod_autoscaling.md in #3942, which explains how. If it's not clear, let me know and I can clarify/assist.

@alrf

alrf commented Apr 29, 2018

How do I use enable-custom-metrics from the command line?
kops returned the error:
unknown flag: --enable-custom-metrics
Can someone write the right command?

@itskingori
Member

@alrf I think you might have been confused by #1467 (comment). There's no --enable-custom-metrics on the kops command. What Justin was referring to is the mapping of the config that the user will provide to the flag that needs to be set on the respective Kubernetes component.

So if you set this ...

spec:
  kubelet:
    enableCustomMetrics: true

... in your kops cluster state, the flag will be set where it needs to be set.
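
After kops update cluster --yes and a rolling update, the value should show up as a flag on the kubelet process on each node (at least on Kubernetes versions where the kubelet still supports --enable-custom-metrics; see the later comments about its removal). A quick sanity check, assuming SSH access to a node:

# Illustrative check on a node after the rolling update has completed:
ps aux | grep [k]ubelet    # the kubelet command line should include --enable-custom-metrics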

@jsenon
Contributor

jsenon commented Sep 27, 2018

Has this flag been deprecated?
Sep 27 07:52:00 ip-1xxxx kubelet[10434]: F0927 07:52:00.304903 10434 server.go:148] unknown flag: --enable-custom-metrics

@jsenon
Contributor

jsenon commented Sep 27, 2018

It seems so, see 52564.
