
k8s module failing with openshift version v0.12.0 released on Feb 25th 2021 #373

Closed
Akasurde opened this issue Mar 1, 2021 · 5 comments
Labels: duplicate (This issue or pull request already exists)

Akasurde commented Mar 1, 2021

From @codeaprendiz on Mar 01, 2021 09:49

SUMMARY

With the latest release of the openshift Python client, version 0.12.0, we started facing an issue with the Ansible k8s module. The k8s module works fine with openshift version 0.11.2.
The following is the actual error we are getting:

  msg: 'Failed to get client due to HTTPConnectionPool(host=''localhost'', port=80): Max retries exceeded with url: /version (Caused by NewConnectionError(''<urllib3.connection.HTTPConnection object at 0x7f5800d01668>: Failed to establish a new connection: [Errno 111] Connection refused'',))'

$ pip list | grep openshift
openshift           0.12.0
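
For reference, the related client and collection versions can be listed together (a quick check using the same commands that appear later in this thread; exact output depends on the environment):

$ pip list | grep -E 'openshift|kubernetes'
$ ansible-galaxy collection list | grep kubernetes
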
ISSUE TYPE
  • Bug Report
COMPONENT NAME
k8s

How we are using the module:

- name: deploy application on kubernetes
  k8s:
    #    apply: yes
    #    force: yes
    state: present
    kubeconfig: _kubeconfig/{{env}}/kubeconfig
    #    resource_definition: "{{ version_deployment_file }}"
    src: "/tmp/{{ env }}-{{ applicationName }}.yaml"
  register: k8s_apply_result
  no_log: "{{ no_logging_enabled }}"
  tags:
    - full-deploy
    - code-deploy
ANSIBLE VERSION
$ ansible --version                 
ansible 2.10.5
  config file = /opt/dev-deploy-tradeling/_ansible/ansible.cfg
  configured module search path = ['/home/ubuntu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
The full traceback is:
  File "/tmp/ansible_k8s_payload_ai55sktu/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/module_utils/common.py", line 265, in get_api_client
    return DynamicClient(kubernetes.client.ApiClient(configuration))
  File "/usr/local/lib/python3.6/dist-packages/openshift/dynamic/client.py", line 71, in __init__
    self.__discoverer = discoverer(self, cache_file)
  File "/usr/local/lib/python3.6/dist-packages/openshift/dynamic/discovery.py", line 259, in __init__
    Discoverer.__init__(self, client, cache_file)
  File "/usr/local/lib/python3.6/dist-packages/openshift/dynamic/discovery.py", line 31, in __init__
    self.__init_cache()
  File "/usr/local/lib/python3.6/dist-packages/openshift/dynamic/discovery.py", line 78, in __init_cache
    self._load_server_info()
  File "/usr/local/lib/python3.6/dist-packages/openshift/dynamic/discovery.py", line 158, in _load_server_info
    'kubernetes': self.client.request('get', '/version', serializer=just_json)
  File "/usr/local/lib/python3.6/dist-packages/openshift/dynamic/client.py", line 42, in inner
    resp = func(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/openshift/dynamic/client.py", line 247, in request
    _return_http_data_only=params.get('_return_http_data_only', True)
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/api_client.py", line 353, in call_api
    _preload_content, _request_timeout, _host)
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/api_client.py", line 184, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/api_client.py", line 377, in request
    headers=headers)
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/rest.py", line 243, in GET
    query_params=query_params)
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/rest.py", line 216, in request
    headers=headers)
  File "/usr/local/lib/python3.6/dist-packages/urllib3/request.py", line 76, in request
    method, url, fields=fields, headers=headers, **urlopen_kw
  File "/usr/local/lib/python3.6/dist-packages/urllib3/request.py", line 97, in request_encode_url
    return self.urlopen(method, url, **extra_kw)
  File "/usr/local/lib/python3.6/dist-packages/urllib3/poolmanager.py", line 336, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 765, in urlopen
    **response_kw
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 765, in urlopen
    **response_kw
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 765, in urlopen
    **response_kw
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 725, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/usr/local/lib/python3.6/dist-packages/urllib3/util/retry.py", line 439, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
fatal: [localhost]: FAILED! => changed=false 
..
..
 msg: 'Failed to get client due to HTTPConnectionPool(host=''localhost'', port=80): Max retries exceeded with url: /version (Caused by NewConnectionError(''<urllib3.connection.HTTPConnection object at 0x7f5800d01668>: Failed to establish a new connection: [Errno 111] Connection refused'',))'
CONFIGURATION
ANSIBLE_FORCE_COLOR(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = True
ANSIBLE_NOCOWS(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = -C -F ssh/web.ansible.server.config -q
ANSIBLE_SSH_CONTROL_PATH(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = %(directory)s/%%h-%%r
ANSIBLE_SSH_RETRIES(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = 10
CACHE_PLUGIN(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = ./_logs/ssh/facts/
CACHE_PLUGIN_TIMEOUT(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = 10000
DEFAULT_CALLBACK_WHITELIST(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = ['profile_tasks']
DEFAULT_FILTER_PLUGIN_PATH(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = ['/opt/dev-deploy-somedirectory/_ansible/lib/plugins/filter_plugins', '/home/ubuntu/.ansible/plugins/filter_plugins', '/usr/share/ansible_plugins/filter_plugins']
DEFAULT_FORKS(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = 50
DEFAULT_GATHERING(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = ['/opt/dev-deploy-somedirectory/_ansible/inventory']
DEFAULT_INVENTORY_PLUGIN_PATH(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = ['/opt/dev-deploy-somedirectory/_ansible/lib/plugins/inventory_plugins', '/home/ubuntu/.ansible/plugins/inventory_plugins', '/usr/share/ansible_plugins/inventory_plugins']
DEFAULT_LOAD_CALLBACK_PLUGINS(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = True
DEFAULT_LOOKUP_PLUGIN_PATH(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = ['/opt/dev-deploy-somedirectory/_ansible/lib/plugins/lookup_plugins', '/home/ubuntu/.ansible/plugins/lookup_plugins', '/usr/share/ansible_plugins/lookup_plugins']
DEFAULT_POLL_INTERVAL(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = 15
DEFAULT_SCP_IF_SSH(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = smart
DEFAULT_SSH_TRANSFER_METHOD(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = smart
DEFAULT_STDOUT_CALLBACK(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = yaml
DEFAULT_TRANSPORT(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = smart
DEFAULT_VARS_PLUGIN_PATH(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = ['/opt/dev-deploy-somedirectory/_ansible/lib/plugins/vars_plugins', '/home/ubuntu/.ansible/plugins/vars_plugins', '/usr/share/ansible_plugins/vars_plugins']
HOST_KEY_CHECKING(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = False
INVENTORY_ENABLED(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml', 'aws_ec2_custom']
PARAMIKO_HOST_KEY_AUTO_ADD(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = True
RETRY_FILES_ENABLED(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = False
SHOW_CUSTOM_STATS(/opt/dev-deploy-somedirectory/_ansible/ansible.cfg) = True
OS / ENVIRONMENT
$ cat /etc/os-release  
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
STEPS TO REPRODUCE

  • Ensure that the openshift package installed via pip is the latest release (0.12.0):

$ pip list | grep openshift
openshift           0.12.0
  • Do a deployment on the cluster using the k8s module. The following task can help reproduce the issue.
- name: deploy application on kubernetes
  k8s:
    #    apply: yes
    #    force: yes
    state: present
    kubeconfig: _kubeconfig/{{env}}/kubeconfig
    #    resource_definition: "{{ version_deployment_file }}"
    src: "/tmp/{{ env }}-{{ applicationName }}.yaml"
  register: k8s_apply_result
  no_log: "{{ no_logging_enabled }}"
  tags:
    - full-deploy
    - code-deploy
EXPECTED RESULTS

The deployment should be successful. It succeeds when we use openshift 0.11.2, the previous release.
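
As an interim workaround (a sketch based only on the observation above that 0.11.2 works, not an official fix), the client can be pinned back to the previous release until the collection side is sorted out:

$ pip install 'openshift==0.11.2'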

ACTUAL RESULTS

The verbose output has already been shared above.

Copied from original issue: ansible/ansible#73745

@Akasurde

Akasurde commented Mar 1, 2021

@codeaprendiz Thanks for reporting this issue. Could you please provide the output of the following:

# ansible-galaxy collection list | grep kuber

@codeaprendiz

Thanks @Akasurde, the following is the output:

$ ansible-galaxy collection list | grep kuber
community.kubernetes      1.1.1 

@Akasurde

Akasurde commented Mar 2, 2021

@codeaprendiz Please update the collection to the latest version, i.e., 1.2.0. This issue is resolved in that specific version.
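
One way to do that (a hedged sketch; the exact command depends on where the existing collection is installed) is to pin the version explicitly, force the reinstall, and then verify:

$ ansible-galaxy collection install community.kubernetes:1.2.0 --force
$ ansible-galaxy collection list | grep kuber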

@Akasurde Akasurde self-assigned this Mar 2, 2021
@Akasurde Akasurde added needs_info More information required in order to debug the issue such as console log, library versions etc. type/question Further information is requested labels Mar 2, 2021
@codeaprendiz

Ah okay, let me check this and update you. Thanks a lot for your amazing support 🙏

$ ansible-galaxy collection list | grep kuber
community.kubernetes          1.1.1
kubernetes.core               1.1.1

$ ansible-galaxy collection install community.kubernetes
Starting galaxy collection install process
Process install dependency map
ERROR! Error when getting available collection versions for community.kubernetes from default (https://galaxy.ansible.com/api/) (HTTP Code: 500, Message: Internal Server Error Code: Unknown)

Okay, what? Is Ansible Galaxy down or what?

Okay, looks like they fixed it in time.

$ ansible-galaxy collection list | grep kuber           
community.kubernetes 1.2.0  
community.kubernetes      1.1.1  

$ pip list | grep openshift
openshift           0.12.0

And it works now, I am no longer getting the error from the k8s module. :)

TASK [k8s-app-deploy : deploy application on kubernetes] ***************************************************************************************************************************************************************
Wednesday 03 March 2021  07:25:44 +0000 (0:00:00.796)       0:00:04.205 ******* 
changed: [localhost]

@tima tima added duplicate This issue or pull request already exists and removed needs_info More information required in order to debug the issue such as console log, library versions etc. type/question Further information is requested labels Mar 4, 2021
@tima

tima commented Mar 4, 2021

@codeaprendiz We did an extensive amount of work dealing with the breaking changes the kubernetes v12 client introduced (which the openshift client relies upon) before releasing 1.2.0 of this collection. See #314 for more.

It looks like you may have resolved this issue by upgrading. I'm going to close this one as a duplicate of the problem behind #314.
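
For anyone landing here with the same traceback: the kubernetes client version that your openshift install declares as a dependency can be checked quickly (output varies by environment):

$ pip show openshift | grep -iE 'version|requires'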

@tima tima closed this as completed Mar 4, 2021