
reconcilePeriod is not honored when installing HorizontalPodAutoscalers #3708

@faust64

Description

Bug Report

What did you do?

I'm trying to have some Ansible operator install an HPA alongside my deployments.

Overall it works fine. However, while looking at the metrics exported by my operator, I realized that any playbook with the HPA configuration enabled keeps running in a loop, as if some object had been changed, either manually or during the last playbook run.

I could confirm that when the HPA configuration is disabled, the same playbooks only run once every reconcilePeriod, as defined in my watches.yaml.
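
For reference, the corresponding entry in my watches.yaml looks roughly like the sketch below; the playbook path and the reconcilePeriod value are placeholders, while the group/version/kind match the owner reference visible in the object dump further down:

- version: v1beta1
  group: wopla.io
  kind: Directory
  # Path and period below are illustrative, not the actual values.
  playbook: /opt/ansible/playbook.yml
  reconcilePeriod: 10m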

When managing HPAs, my operator creates the following object, using the k8s module:

- apiVersion: autoscaling/v1
  kind: HorizontalPodAutoscaler
  metadata:
    name: cpu-autoscale-ssp-demo
    namespace: localrelease
  spec:
    maxReplicas: 4
    minReplicas: 1
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: ssp-demo
    targetCPUUtilizationPercentage: 75
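
For context, the task applying it is roughly the following sketch (the task name and the hpa_definitions variable are illustrative, not the actual playbook):

- name: Apply HPA definitions
  k8s:
    state: present
    definition: "{{ item }}"
  # hpa_definitions is a hypothetical variable holding the list shown above.
  loop: "{{ hpa_definitions }}"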

The operator does not update that object unless its spec actually needs to change, so after the first run that creates the HPAs, I can see the operator skipping every task that creates or patches them.
In the end the operator applies nothing, yet the playbook keeps running.

As far as I understand, this is due to the HPAs being regularly updated by a controller. That somehow triggers a new Ansible run, even though in this specific case it really isn't necessary.
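
If I understand the Ansible operator correctly, it watches the dependent resources it creates (through the owner references it sets), and any event on a dependent re-queues the parent custom resource. In watches.yaml this appears to be governed by the watchDependentResources option; a sketch of what disabling it might look like follows, though I have not verified that this would be an acceptable workaround, since it would also stop reacting to changes on every other child resource:

- version: v1beta1
  group: wopla.io
  kind: Directory
  playbook: /opt/ansible/playbook.yml   # illustrative path
  reconcilePeriod: 10m                  # illustrative value
  # Stops watching resources owned by the CR, so status updates on the HPA
  # would no longer trigger an Ansible run.
  watchDependentResources: false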

If I dump that object, I can see, among others, who last updated it:

$ kubectl get -o yaml -n localrelease hpa cpu-autoscale-ssp-demo
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2020-08-02T13:26:23Z","reason":"ReadyForNewScale","message":"recommended
      size matches current size"},{"type":"ScalingActive","status":"True","lastTransitionTime":"2020-08-02T13:27:41Z","reason":"ValidMetricFound","message":"the
      HPA was able to successfully calculate a replica count from cpu resource utilization
      (percentage of request)"},{"type":"ScalingLimited","status":"False","lastTransitionTime":"2020-07-17T15:07:36Z","reason":"DesiredWithinRange","message":"the
      desired count is within the acceptable range"}]'
    autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":"Resource","resource":{"name":"cpu","currentAverageUtilization":3,"currentAverageValue":"6m"}}]'
  creationTimestamp: "2020-07-17T15:06:49Z"
  managedFields:
  - apiVersion: autoscaling/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:ownerReferences:
          .: {}
          k:{"uid":"121293cd-c2b0-432b-8efe-2abb29cd30d2"}:
            .: {}
            f:apiVersion: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:maxReplicas: {}
        f:minReplicas: {}
        f:scaleTargetRef:
          f:apiVersion: {}
          f:kind: {}
          f:name: {}
        f:targetCPUUtilizationPercentage: {}
    manager: Swagger-Codegen
    operation: Update
    time: "2020-07-17T15:06:49Z"
  - apiVersion: autoscaling/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:autoscaling.alpha.kubernetes.io/conditions: {}
          f:autoscaling.alpha.kubernetes.io/current-metrics: {}
      f:status:
        f:currentCPUUtilizationPercentage: {}
        f:currentReplicas: {}
        f:desiredReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-08-10T10:02:41Z"
  name: cpu-autoscale-ssp-demo
  namespace: localrelease
  ownerReferences:
  - apiVersion: wopla.io/v1beta1
    kind: Directory
    name: demo
    uid: 121293cd-c2b0-432b-8efe-2abb29cd30d2
  resourceVersion: "38768057"
  selfLink: /apis/autoscaling/v1/namespaces/localrelease/horizontalpodautoscalers/cpu-autoscale-ssp-demo
  uid: 3a8354d7-d53f-4064-98cd-c94e41681330
spec:
  maxReplicas: 4
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ssp-demo
  targetCPUUtilizationPercentage: 75
status:
  currentCPUUtilizationPercentage: 3
  currentReplicas: 1
  desiredReplicas: 1

The managedFields array lists, in addition to the Swagger-Codegen edit, an update made by kube-controller-manager, whose time field changes frequently, tracking the current date.
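
A quick way to confirm this, using kubectl's JSONPath output (the command is only an illustration of the check; the timestamps are the ones from the dump above):

$ kubectl get hpa cpu-autoscale-ssp-demo -n localrelease \
    -o jsonpath='{range .metadata.managedFields[*]}{.manager}{"\t"}{.time}{"\n"}{end}'
Swagger-Codegen          2020-07-17T15:06:49Z
kube-controller-manager  2020-08-10T10:02:41Z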

What did you expect to see?

I would expect the reconcilePeriod defined in my watches.yaml to be honored, regardless of whether my operator manages HPAs.

What did you see instead? Under which circumstances?

When installing HPAs, reconcilePeriod is effectively ignored, as some controller constantly updates the HPAs, which are child resources of the objects watched by my operator.

Environment

  • operator-sdk version: 0.19.2

  • go version: (whichever ships in that image)

  • Kubernetes version information:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind:

vanilla / kube-spray, calico, containerd

  • Are you writing your operator in ansible, helm, or go?

Ansible

Labels

  • language/ansible: Issue is related to an Ansible operator project.

  • triage/needs-information: Indicates an issue needs more information in order to work on it.
