More scaled down than scaled up #40

Closed
dbluxo opened this issue Mar 14, 2019 · 3 comments

dbluxo commented Mar 14, 2019

Hi,

First of all, thank you for your project!

I've tried kube-downscaler in a nearly empty cluster. Here are the logs:

2019-03-12 12:52:11,615 INFO: Downscaler v0.9 started with debug=False, default_downtime=never, default_uptime=Mon-Fri 07:30-20:00 Europe/Berlin, dry_run=False, exclude_deployments=kube-downscaler,downscaler, exclude_namespaces=kube-system, exclude_statefulsets=, grace_period=900, interval=60, kind=['deployment', 'deployment', 'statefulset'], namespace=None, once=False
2019-03-12 19:00:23,032 INFO: Scaling down Deployment authelia/authelia-app from 1 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,045 INFO: Scaling down Deployment authelia/authelia-redis-slave from 2 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,060 INFO: Scaling down Deployment cattle-system/cattle-cluster-agent from 1 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,080 INFO: Scaling down Deployment logging/elasticsearch-client from 2 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,103 INFO: Scaling down Deployment logging/elasticsearch-exporter from 1 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,124 INFO: Scaling down Deployment logging/kibana from 1 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,144 INFO: Scaling down Deployment logging/laas-metricbeat from 1 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,161 INFO: Scaling down Deployment monitoring/kube-state-metrics from 1 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,191 INFO: Scaling down StatefulSet authelia/authelia-redis-master from 1 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,212 INFO: Scaling down StatefulSet authelia/mongo from 3 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,231 INFO: Scaling down StatefulSet logging/elasticsearch-data from 3 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,253 INFO: Scaling down StatefulSet logging/elasticsearch-master from 3 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,271 INFO: Scaling down StatefulSet monitoring/alertmanager from 3 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,290 INFO: Scaling down StatefulSet monitoring/grafana from 1 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-12 19:00:23,310 INFO: Scaling down StatefulSet monitoring/prometheus from 2 to 0 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-13 06:30:14,204 INFO: Scaling up Deployment authelia/authelia-app from 0 to 1 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-13 06:30:14,216 INFO: Scaling up Deployment authelia/authelia-redis-slave from 0 to 2 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-13 06:30:14,228 INFO: Scaling up Deployment cattle-system/cattle-cluster-agent from 0 to 1 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-13 06:30:14,239 INFO: Scaling up Deployment logging/elasticsearch-client from 0 to 2 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-13 06:30:14,252 INFO: Scaling up Deployment logging/elasticsearch-exporter from 0 to 1 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-13 06:30:14,268 INFO: Scaling up Deployment logging/kibana from 0 to 1 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-13 06:30:14,286 INFO: Scaling up Deployment logging/laas-metricbeat from 0 to 1 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-13 06:30:14,299 INFO: Scaling up Deployment monitoring/kube-state-metrics from 0 to 1 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-13 06:30:14,345 INFO: Scaling up StatefulSet monitoring/alertmanager from 0 to 3 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-13 06:30:14,358 INFO: Scaling up StatefulSet monitoring/grafana from 0 to 1 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)
2019-03-13 06:30:14,372 INFO: Scaling up StatefulSet monitoring/prometheus from 0 to 2 replicas (uptime: Mon-Fri 07:30-20:00 Europe/Berlin, downtime: never)

As you can see, there are 15 "Scaling down" messages but only 11 "Scaling up" messages. I have tried it several times; on each attempt, the following StatefulSets were not scaled up:

  • authelia/authelia-redis-master
  • authelia/mongo
  • logging/elasticsearch-data
  • logging/elasticsearch-master

We use kube-downscaler version 0.9 (Kubernetes v1.12.6).
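
For reference, the affected StatefulSets can be inspected for the downscaler/original-replicas annotation like this (a minimal sketch using the official kubernetes Python client, purely for illustration; kube-downscaler itself does not use this client):

from kubernetes import client, config

# Look at the StatefulSets that were scaled down but never scaled back up,
# and print their current replica count plus all annotations, to see whether
# downscaler/original-replicas was ever written.
config.load_kube_config()
apps = client.AppsV1Api()

for namespace, name in [("authelia", "mongo"), ("logging", "elasticsearch-data")]:
    sts = apps.read_namespaced_stateful_set(name=name, namespace=namespace)
    print(namespace, name, sts.spec.replicas, sts.metadata.annotations)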

Any idea why?


hjacobs commented Mar 14, 2019

This might be related to #21


dbluxo commented Mar 15, 2019

Yes, I can confirm that it only affects StatefulSets that have no previous annotations; for those, adding the downscaler/original-replicas annotation fails. I think it doesn't affect Deployments, because the deployment.kubernetes.io/revision annotation already exists automatically, so adding the additional downscaler/original-replicas annotation works.
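
If that is the cause, the failure mode would look roughly like this (a minimal sketch in plain Python, not the actual kube-downscaler code; it only assumes the resource is handled as a parsed API object/dict):

def set_original_replicas(obj, replicas):
    # Fails with a KeyError when the resource has no "annotations" map yet
    # (the case for StatefulSets that were never annotated), but works for
    # Deployments, which already carry deployment.kubernetes.io/revision.
    obj["metadata"]["annotations"]["downscaler/original-replicas"] = str(replicas)

def set_original_replicas_safe(obj, replicas):
    # Create the annotations map first if it is missing, then write the key.
    annotations = obj["metadata"].setdefault("annotations", {})
    annotations["downscaler/original-replicas"] = str(replicas)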


hjacobs commented Mar 15, 2019

Should be fixed by #42 and released in v0.11: https://github.com/hjacobs/kube-downscaler/releases/tag/0.11

hjacobs closed this as completed Mar 15, 2019