Error: Initialization of V2beta1HorizontalPodAutoscalerStatus #553

Closed
keigohtr opened this issue Jun 22, 2018 · 19 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@keigohtr

When we call create_namespaced_horizontal_pod_autoscaler of AutoscalingV2beta1Api, deserializing the response always raises an exception because conditions is not set in V2beta1HorizontalPodAutoscalerStatus.

Likewise, the response of read_namespaced_horizontal_pod_autoscaler raises an exception because current_metrics is not set.

I will create a PR for this.

Situation

  • Kubernetes 1.9 via Rancher 1.6.16
  • Python 3.6.3
  • Python kubernetes client 5.0.0
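
A minimal sketch of a call that reproduces the error (the HPA spec, names, and namespace below are placeholders, not the exact ones from the affected environment):

from kubernetes import client, config

config.load_kube_config()
api = client.AutoscalingV2beta1Api()

# Placeholder HPA body; any valid v2beta1 HPA spec hits the same problem.
body = client.V2beta1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="example-hpa"),
    spec=client.V2beta1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2beta1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="example-deployment"
        ),
        min_replicas=1,
        max_replicas=10,
        metrics=[
            client.V2beta1MetricSpec(
                type="Resource",
                resource=client.V2beta1ResourceMetricSource(
                    name="cpu", target_average_utilization=80
                ),
            )
        ],
    ),
)

# Deserializing the response raises
# ValueError: "Invalid value for conditions, must not be None"
# because the freshly created HPA has no status.conditions yet.
api.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=body)
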
@ackintosh

ackintosh commented Jun 23, 2018

#554
This might be addressed by fixing the API definition:

https://github.com/kubernetes-client/python/blob/71b5abce5a4e2d747c0bb26d4eaa17eeb08f8ac5/scripts/swagger.json#L68656

      "required": [
        "currentReplicas",
        "desiredReplicas",
        "currentMetrics",
        "conditions"
      ],

Removing "currentMetrics" and "conditions" from swagger.json (or from its upstream source, if the JSON is an auto-generated file) will have the same result as #554.


@keigohtr
Author

Thank you for the information. Yes, we can fix this issue by hand.
I am not sure, but I think this Kubernetes swagger spec is automatically generated from the code, so I believe we need to fix the original Kubernetes code for this.

@flmu

flmu commented Jul 19, 2018

I am facing the same issue. Are there any plans to fix this problem permanently?

By the way, the same manifest can be deployed with kubectl.

@zq-david-wang

I have the same issue, and sometimes list_namespaced_horizontal_pod_autoscaler raises ValueError("Invalid value for conditions, must not be None") when the HPA has no conditions set yet.

I am using 9.0.0.

For create_namespaced_horizontal_pod_autoscaler, adding '_preload_content=False' can help.
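
A small sketch of that workaround (the namespace and manifest here are placeholders, not from the original comment):

import json

from kubernetes import client, config

config.load_kube_config()
api = client.AutoscalingV2beta1Api()

# Placeholder v2beta1 HPA manifest.
hpa_manifest = {
    "apiVersion": "autoscaling/v2beta1",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "example-hpa"},
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "example-deployment",
        },
        "minReplicas": 1,
        "maxReplicas": 10,
        "metrics": [
            {
                "type": "Resource",
                "resource": {"name": "cpu", "targetAverageUtilization": 80},
            }
        ],
    },
}

# With _preload_content=False the client returns the raw urllib3 response
# instead of deserializing it into V2beta1HorizontalPodAutoscaler, so the
# missing "conditions"/"currentMetrics" fields no longer raise a ValueError.
raw = api.create_namespaced_horizontal_pod_autoscaler(
    namespace="default",
    body=hpa_manifest,
    _preload_content=False,
)
hpa = json.loads(raw.data)  # parse the JSON yourself if you need the fields
print(hpa["metadata"]["name"])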

@Anurag2408

Anurag2408 commented May 27, 2019

I am also facing the same issue. When is this planned to be fixed permanently?

@nollimahere

nollimahere commented Jul 11, 2019

For those of us trying to launch an HPA in a production environment, where manually modifying the Python modules is unreasonable, adding an annotation to the metadata has been enough to get past this issue for me.

resource_hpa['metadata']['annotations'] = { "autoscaling.alpha.kubernetes.io/conditions": "[ {\"type\":\"ScalingLimited\",\"status\":\"True\",\"reason\":\"TooFewReplicas\",\"message\":\"who needs messages\"}]" }

So, for reference, the whole dict looks like:


>>> print( cc.json.dumps( resource_hpa, indent=2 ) )
{ 
  "apiVersion": "autoscaling/v2beta1",
  "kind": "HorizontalPodAutoscaler",
  "metadata": {
    "name": "my-serviceHPA",
    "annotations": {
      "autoscaling.alpha.kubernetes.io/conditions": "[ {\"type\":\"ScalingLimited\",\"status\":\"True\",\"reason\":\"TooFewReplicas\",\"message\":\"who needs messages\"}]"
    }
  },
  "spec": {
    "scaleTargetRef": {
      "apiVersion": "extensions/v1beta1",
      "kind": "Deployment",
      "name": "my-serviceHPA"
    },
    "minReplicas": 1,
    "maxReplicas": 10,
    "metrics": [
      { 
        "pods": {
          "targetAverageValue": 60,
          "metricName": "utilizationRate"
        },
        "type": "Pods"
      }
    ]
  }
}
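
Under the assumption that the conditions annotation above is what lets the client deserialize the create response, submitting that dict might look like the sketch below (the namespace is a placeholder, not from the original comment):

from kubernetes import client, config

config.load_kube_config()
autoscaling_api = client.AutoscalingV2beta1Api()

# Per the workaround reported above, with the
# autoscaling.alpha.kubernetes.io/conditions annotation present in
# resource_hpa the create call no longer failed with
# "Invalid value for conditions, must not be None".
created = autoscaling_api.create_namespaced_horizontal_pod_autoscaler(
    namespace="default",  # placeholder namespace
    body=resource_hpa,    # the dict printed above
)
print(created.metadata.name)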

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 10, 2019
@nollimahere

nollimahere commented Nov 6, 2019 via email

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 6, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 4, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 5, 2020
@palnabarun
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 25, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 23, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 23, 2020
@nollimahere

nollimahere commented Jul 29, 2020 via email

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 29, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 27, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 26, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
