create_namespaced_horizontal_pod_autoscaler throws ValueError: Invalid value for conditions, must not be None #1098

Closed
savitha-suresh opened this issue Mar 6, 2020 · 15 comments


savitha-suresh commented Mar 6, 2020

File "/virutalenv/lib/python3.6/site-packages/kubernetes/client/apis/autoscaling_v2beta1_api.py", line 60, in create_namespaced_horizontal_pod_autoscaler                                                                                                                                                
    (data) = self.create_namespaced_horizontal_pod_autoscaler_with_http_info(namespace, body, **kwargs)                                                       
  File "/virutalenv/lib/python3.6/site-packages/kubernetes/client/apis/autoscaling_v2beta1_api.py", line 151, in create_namespaced_horizontal
_pod_autoscaler_with_http_info                                                                                                                                
    collection_formats=collection_formats)                                                                                                                    
  File "/virutalenv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 334, in call_api                                      
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)                                                                           
  File "/virutalenv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 176, in __call_api                                    
    return_data = self.deserialize(response_data, response_type)                                                                                              
  File "/virutalenv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 249, in deserialize                                   
    return self.__deserialize(data, response_type)                                                                                                            
  File "/virutalenv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 289, in __deserialize                                 
    return self.__deserialize_model(data, klass)                                                                                                              
  File "/virutalenv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 633, in __deserialize_model                           
    kwargs[attr] = self.__deserialize(value, attr_type)                                                                                                       
  File "/virutalenv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 289, in __deserialize                                 
    return self.__deserialize_model(data, klass)                                                                                                              
  File "/virutalenv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 635, in __deserialize_model                           
    instance = klass(**kwargs)                                                                                                                                
  File "/virutalenv/lib/python3.6/site-packages/kubernetes/client/models/v2beta1_horizontal_pod_autoscaler_status.py", line 64, in __init__  
    self.conditions = conditions                                                                                                                              
  File "/virutalenv/lib/python3.6/site-packages/kubernetes/client/models/v2beta1_horizontal_pod_autoscaler_status.py", line 95, in conditions
    raise ValueError("Invalid value for `conditions`, must not be `None`")                                                                                    
ValueError: Invalid value for `conditions`, must not be `None`

Code to reproduce:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

# scaler_name, version1, version2, scalable_object, app_name, N1, N2,
# metric_name, target_type and target_value are placeholders.
# Note: the v2beta1 API class is used with v2beta2 body objects, as in
# the original report.
k8s_api = client.AutoscalingV2beta1Api()
k8s_api.create_namespaced_horizontal_pod_autoscaler(
    namespace='default',
    body=client.V2beta2HorizontalPodAutoscaler(
        api_version='autoscaling/v2beta2',
        kind='HorizontalPodAutoscaler',
        metadata=client.V1ObjectMeta(
            name=scaler_name
        ),
        spec=client.V2beta2HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V2beta2CrossVersionObjectReference(
                api_version=version1,
                kind=scalable_object,
                name=app_name
            ),
            min_replicas=N1,
            max_replicas=N2,
            metrics=[client.V2beta2MetricSpec(
                type='Object',
                object=client.V2beta2ObjectMetricSource(
                    metric=client.V2beta2MetricIdentifier(
                        name=metric_name
                    ),
                    described_object=client.V2beta2CrossVersionObjectReference(
                        api_version=version2,
                        kind=scalable_object,
                        name=app_name
                    ),
                    target=client.V2beta2MetricTarget(
                        type=target_type,
                        value=target_value
                    )
                )
            )]
        )
    )
)

This raises the exception posted above, though it does create the HPA. Since status is optional in V2beta2HorizontalPodAutoscaler, why does this exception occur, and what is the workaround?

@palnabarun
Member

/assign

@GrahamDumpleton

There is a similar issue with creating CRDs. For CRDs, the workaround is to use the wrapt module to monkey-patch the generated code as a temporary fix. For the CRD case one can use:

import wrapt

def fix_V1beta1CustomResourceDefinitionStatus___init__(wrapped, instance, args, kwargs):
    # Re-bind the original positional/keyword arguments so the required
    # list attributes can be defaulted to empty lists instead of None.
    def _resolve(accepted_names=None, conditions=None, stored_versions=None):
        return accepted_names, conditions, stored_versions

    accepted_names, conditions, stored_versions = _resolve(*args, **kwargs)

    return wrapped(
            accepted_names=accepted_names,
            conditions=conditions or [],
            stored_versions=stored_versions or [])

# Apply the wrapper to the generated model's __init__.
wrapt.wrap_function_wrapper(
        'kubernetes.client.models.v1beta1_custom_resource_definition_status',
        'V1beta1CustomResourceDefinitionStatus.__init__',
        fix_V1beta1CustomResourceDefinitionStatus___init__)

You could craft a similar monkey patch for V2beta2HorizontalPodAutoscaler.
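
For example, here is a minimal, untested sketch of such a patch, targeting the V2beta1HorizontalPodAutoscalerStatus model that actually raises in the traceback above. It relies on the deserializer calling the model as klass(**kwargs) (visible in api_client.py in the traceback), so a keyword-only check is enough; module and class names are taken from the traceback and may differ between client versions:

import wrapt

def fix_V2beta1HorizontalPodAutoscalerStatus___init__(wrapped, instance, args, kwargs):
    # The deserializer calls klass(**kwargs), so only keyword arguments
    # need patching; skip when positional args are used to avoid passing
    # `conditions` twice.
    if not args and kwargs.get('conditions') is None:
        # Substitute an empty list when the API server omits `conditions`,
        # so the generated validator does not raise
        # ValueError("Invalid value for `conditions`, must not be `None`").
        kwargs['conditions'] = []
    return wrapped(*args, **kwargs)

wrapt.wrap_function_wrapper(
        'kubernetes.client.models.v2beta1_horizontal_pod_autoscaler_status',
        'V2beta1HorizontalPodAutoscalerStatus.__init__',
        fix_V2beta1HorizontalPodAutoscalerStatus___init__)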

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 22, 2020

daemur commented Jul 20, 2020

/remove-lifecycle stale

Getting the issue with the current version, running on Kubernetes 1.16 on EKS. Any status on the fix?

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 20, 2020

tdmalone commented Jul 23, 2020

This issue can also be worked around by catching the ValueError exception and checking its message:

try:
    # e.g. the create_namespaced_horizontal_pod_autoscaler() call above
    k8s_api.create_namespaced_horizontal_pod_autoscaler(namespace='default', body=body)
except ValueError as exception:
    if str(exception) == 'Invalid value for `conditions`, must not be `None`':
        logger.info("Skipping invalid 'conditions' value...")
    else:
        raise

FWIW, I'm also seeing this on k8s 1.17 on EKS with a HorizontalPodAutoscaler on autoscaling/v2beta2 (deployed from a YAML file via kubernetes.utils.create_from_dict).
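
For that create_from_dict path, the same guard can wrap the call. A rough sketch (the hpa.yaml file name and logger setup are illustrative, and it assumes the ValueError from response deserialization propagates out of create_from_dict, which only wraps ApiException into FailToCreateError):

import logging

import yaml
from kubernetes import client, config, utils

logger = logging.getLogger(__name__)

config.load_kube_config()
api_client = client.ApiClient()

# `hpa.yaml` is an illustrative manifest containing the
# autoscaling/v2beta2 HorizontalPodAutoscaler definition.
with open('hpa.yaml') as f:
    manifest = yaml.safe_load(f)

try:
    utils.create_from_dict(api_client, manifest)
except ValueError as exception:
    if str(exception) == 'Invalid value for `conditions`, must not be `None`':
        logger.info("Object created; skipping invalid 'conditions' value...")
    else:
        raise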

@tdmalone

Related: #1022
Looks like it'll be fixed when v12 of this client is released.

@tdmalone

And here's another potential workaround method: #553 (comment)
So that's at least three to choose from in this ticket :)

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 22, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 21, 2020
@palnabarun
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Nov 23, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 21, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 23, 2021
@abhishekdhotre

Still facing the issue with kubernetes 12.0.1 (Python client) on Kubernetes v1.17. Any updates?

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
