
Kubernetes pod scaling #2540

Closed
prasenforu opened this Issue Mar 28, 2017 · 9 comments

prasenforu commented Mar 28, 2017

Recently we tried container monitoring with Prometheus.

Is there any approach where we can scale a container/pod based on CPU or memory metrics using Prometheus and Alertmanager?

gouthamve (Member) commented Mar 28, 2017

You could write an alert for the condition at which you want to scale (e.g. CPU > 80% FOR 10m).
Then run Alertmanager and use a webhook receiver to hook in your scaling logic.
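
As a rough sketch (the metric, labels, threshold, and webhook URL below are only illustrative placeholders), the alerting rule could look something like:

ALERT PodCpuHigh
  IF sum(rate(container_cpu_usage_seconds_total{namespace="default"}[5m])) BY (pod_name) > 0.8
  FOR 10m
  LABELS { severity = "scale-up" }
  ANNOTATIONS { summary = "Pod CPU above 0.8 cores for 10 minutes" }

and Alertmanager would route it to a webhook receiver that implements the actual scaling (e.g. by calling the Kubernetes API):

# alertmanager.yml fragment; receiver names and URL are placeholders
route:
  receiver: default
  routes:
    - match:
        severity: scale-up
      receiver: scale-up-webhook
receivers:
  - name: default
  - name: scale-up-webhook
    webhook_configs:
      - url: http://pod-scaler.default.svc:8080/scale-up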

Usage related questions are best limited to: https://groups.google.com/forum/#!forum/prometheus-users

prasenforu (Author) commented Mar 28, 2017

Will it also work in the other direction, i.e. scale down once the alert recovers?

Can you please point me to any blog post or mailing-list thread on this subject?

juliusv (Member) commented Mar 28, 2017

prasenforu (Author) commented Mar 30, 2017

Thanks for the response.

What I am really after is the following:

When CPU is high, it should trigger one webhook. I am able to do this by setting a webhook receiver.

But when the alert resolves, it should trigger another webhook. That part I am not able to configure.

Please advise.

juliusv (Member) commented Mar 30, 2017

We don't support different receivers for the firing and resolved states of the same alert, but you could emulate it with two alerting rules: one that fires when something is bad, and one that fires on the opposite condition (everything is good). You can then notify a different webhook for each while it is firing. For example:

ALERT InstanceDown
  IF up{job="myservice"} == 0
  [...]

ALERT AllInstancesUp
  IF avg(up{job="myservice"}) == 1
  [...]
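
On the Alertmanager side, each of those alerts could then be routed to its own webhook; roughly (receiver names and URLs are placeholders):

route:
  receiver: default
  routes:
    - match:
        alertname: InstanceDown
      receiver: scale-up-webhook
    - match:
        alertname: AllInstancesUp
      receiver: scale-down-webhook
receivers:
  - name: default
  - name: scale-up-webhook
    webhook_configs:
      - url: http://pod-scaler.default.svc:8080/up
  - name: scale-down-webhook
    webhook_configs:
      - url: http://pod-scaler.default.svc:8080/down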

Not sure how great that pattern is though :)

juliusv (Member) commented Mar 30, 2017

Closing, as questions are for the users mailing list, not GitHub issues. See https://prometheus.io/community/.

juliusv closed this Mar 30, 2017

prasenforu (Author) commented Apr 2, 2017

If I create another alert for scale-down, like CPU < 70%, then in that scenario the alert is always true, and if I act on it, it will keep scaling down to zero, meaning no pod is left running.

My question is whether there is an alternative approach, or whether a single alert can express a range, such as

CPU > 40% but CPU < 70%

in one rule.
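
Something like this is what I have in mind, if the rule language even allows combining two comparisons this way (the metric name is only an example):

ALERT PodCpuInScaleDownRange
  IF sum(rate(container_cpu_usage_seconds_total[5m])) BY (pod_name) > 0.4 and sum(rate(container_cpu_usage_seconds_total[5m])) BY (pod_name) < 0.7
  FOR 10m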

andrewhowdencom commented Apr 6, 2017

@prasenforu If you ask your question on the mailing list I may be able to help further. (Paste a link here once you have)

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
