The library always uses port 80 when using the K8S_AUTH_KUBECONFIG env var #1333

Closed
odra opened this issue Nov 24, 2020 · 33 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@odra

odra commented Nov 24, 2020

What happened (please include outputs or screenshots):

I created a kubeconfig file in a folder other than the default one, and I get the following error:

fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to get client due to HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /version (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6f87772f10>: Failed to establish a new connection: [Errno 111] Connection refused'))"}

What you expected to happen:

A successful request.

How to reproduce it (as minimally and precisely as possible):

  • pip install kubernetes==12.0.1
  • create a kubeconfig in another folder (changing the port if possible)
  • set K8S_AUTH_KUBECONFIG to the new kubeconfig file path
  • try to run a simple integration, such as creating a namespace
  • the error should show up
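
A minimal sketch of the same check outside Ansible (not from the original report; the env-var lookup and the VersionApi call are only illustrative): load the kubeconfig with the plain Python client and inspect which host it resolved.

import os
from kubernetes import client, config

# K8S_AUTH_KUBECONFIG is an Ansible-level variable; the Python client itself does not
# read it, so the path is passed explicitly here.
kubeconfig = os.environ.get("K8S_AUTH_KUBECONFIG", "~/.kube/config")
api_client = config.new_client_from_config(config_file=kubeconfig)

print(api_client.configuration.host)              # should be the server URL from the file, not http://localhost:80
print(client.VersionApi(api_client).get_code())   # the same /version request that fails above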

Anything else we need to know?

I am using the k8s Ansible module, but it works if I use an older version of the library (11.0.0).

Environment:

  • Kubernetes version (kubectl version):
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

Kind Container: latest-1.16

  • OS (e.g., MacOS 10.13.6): Linux
  • Python version (python --version): Python 3.7.7
  • Python client version (pip list | grep kubernetes): 12.0.1
odra added the kind/bug label on Nov 24, 2020
@tomplus
Member

tomplus commented Nov 24, 2020

Hi @odra. Are you sure it worked in a previous version of the package? Does kubectl work in your environment? Try setting KUBECONFIG (instead of K8S_AUTH_KUBECONFIG) to load your kube-config.
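
For context (my reading of the client's loader, not part of the comment above): kubernetes.config only falls back to the KUBECONFIG environment variable; K8S_AUTH_KUBECONFIG is interpreted by the Ansible k8s module layer, not by this library. Roughly:

from kubernetes import config

# Uses $KUBECONFIG if set, otherwise ~/.kube/config.
config.load_kube_config()

# An explicit path bypasses the environment entirely (placeholder path).
config.load_kube_config(config_file="/path/to/other/kubeconfig")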

@odra
Author

odra commented Nov 24, 2020

I tried setting both vars (KUBECONFIG and K8S_AUTH_KUBECONFIG) and it did not work.

Yes, kubectl does work in my environment, and it works with version 11.x of the library.

I was actually wondering whether there were any changes in the expected kubeconfig file format for 12.x, such that it is trying to retrieve the server URL from another property.

@jeichler

I'm seeing the same issue in my environments: one is Fedora 33, the other CentOS 8. Both received package updates recently, and the python3-kubernetes client got bumped to 12.0.1.

Setting the kubeconfig via the env var, or directly on the module, did not work.

@ankur-gupta-guavus

I am also seeing the same issue in our environment; we are using CentOS 7.9. The Kubernetes client got upgraded to 12.0.1, and that's breaking our pipeline. We use the Ansible Kubernetes collection to deploy applications into Kubernetes; with this change, even when passing the kubeconfig directly, it always fails, complaining that the request is HTTP.

@jmontleon

I am seeing the same issue with our molecule tests using python-kubernetes.

@AuditeMarlow

I'm experiencing the same problem. I'm using Ansible's k8s module and have v12.0.1 of the Kubernetes Python client installed.

@genevieve

genevieve commented Feb 26, 2021

Experienced the same issue. Pinned the k8s module to v11.0 and it's back to working. Also had to pin the openshift package, which depends on k8s v12.0, to v0.11.

@sownak

sownak commented Feb 26, 2021

Yep, same error for me as well. Downgraded to v11 to make it work.

@jeroentorrekens

Credits to @genevieve for the find. For all those having the same issue:
pip3 install -Iv kubernetes==11.0.0

@typ-ex

typ-ex commented Feb 27, 2021

Thank you for the workaround @jeroentorrekens

@arjunkrishnasb

arjunkrishnasb commented Mar 29, 2021

Faced the same issue. Downgrading kubernetes to v11 helped. Thanks!
Now my packages are
kubernetes==11.0.0
openshift==0.11.0

Note: if you install an openshift version that requires kubernetes 12, it automatically reinstalls kubernetes to v12, so keep that in mind.

@sodul

sodul commented Apr 8, 2021

Another voice to the list of impacted folks. We had to roll back to 11.0.0. The lack of fixes is concerning, as we are about to migrate to k8s 1.18 and soon 1.19, and 11.0.0 is not guaranteed to work well beyond k8s 1.17.

@tomplus there are several reports for this regression. Is there a fix planned in the near future?

joantomas added a commit to joantomas/tops that referenced this issue Apr 26, 2021
- RKE
- Fix versions to avoid kubernetes-client/python#1333
derfabianpeter pushed a commit to Peter-SAARLAND/zero that referenced this issue May 3, 2021
@cjreyn

cjreyn commented May 6, 2021

Plus one on the impact of this bug.

@sodul

sodul commented May 19, 2021

Just tested with kubernetes==17.17.0 and openshift==0.12.0.

We are still getting Failed to get client due to HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /version (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f08f3c410a0>: Failed to establish a new connection: [Errno 111] Connection refused')

@sodul

sodul commented May 28, 2021

We were able to move forward with the latest version of the client (17.17.0) and the OpenShift library (0.12.0) by upgrading Ansible from 2.x to the latest 4.x (4.0.0). The rest of our code had no issues.

@vinshika

Hi,

Today I faced the same issue. As suggested above, I ran pip3 install -Iv kubernetes==11.0.0 on my master node. Once that was installed, I could create the pod using my Ansible playbook.

@jeroentorrekens

We were able to move forward with the latest version of the client (17.17.0) and the OpenShift library (0.12.0) by upgrading Ansible from 2.x to the latest 4.x (4.0.0). The rest of our code had no issues.

Those versions don't seem to work for me:

jtorreke@jtorreke-laptop:~$ pip3 freeze | egrep "(ansible|openshift|kubernetes)"
ansible==4.1.0
ansible-core==2.11.2
kubernetes==17.17.0
openshift==0.12.0

And still getting the same error message.

@sodul

sodul commented Jul 4, 2021

@jeroentorrekens
This is what I have on my machine and we do not have issues:

> pip3 freeze | egrep "(ansible|openshift|kubernetes)"
ansible==4.1.0
ansible-core==2.11.2
azure-mgmt-redhatopenshift==0.1.0
kubernetes==17.17.0
openshift==0.12.1

I'm not saying it will solve the problem in your case, just that it works for us. Good luck.

florist-gump added a commit to HumanBrainProject/clb-s2i-jupyterhub that referenced this issue Aug 4, 2021
Third-party dependency where a higher version causes issues. More details here: kubernetes-client/python#1333
@adalziso

Problem disappeared after I installed the following versions:

ansible==4.5.0
ansible-core==2.11.4
ansible-runner==1.4.7
ansible-runner-http==1.0.0
kubernetes==12.0.1
openshift==0.12.1

@origliante

origliante commented Sep 24, 2021

Problem disappeared after I installed the following versions:

ansible==4.5.0
ansible-core==2.11.4
ansible-runner==1.4.7
ansible-runner-http==1.0.0
kubernetes==12.0.1
openshift==0.12.1

Are you sure? With the following, it still happens here:

ansible-4.5.0.tar.gz
ansible-core-2.11.4.tar.gz
ansible_runner-1.4.7-py3-none-any.whl
ansible_runner_http-1.0.0-py2.py3-none-any.whl
kubernetes-12.0.1-py2.py3-none-any.whl
openshift-0.12.1.tar.gz

@sodul

sodul commented Sep 24, 2021

@origliante as an updated datapoint the problem is now gone for us and we have these versions:

> pip3 freeze | egrep "(ansible|openshift|kubernetes)"
ansible==4.4.0
ansible-core==2.11.3
azure-mgmt-redhatopenshift==1.0.0
kubernetes==18.20.0
openshift==0.12.1

@origliante

origliante commented Sep 24, 2021

@origliante as an updated datapoint the problem is now gone for us and we have these versions:

> pip3 freeze | egrep "(ansible|openshift|kubernetes)"
ansible==4.4.0
ansible-core==2.11.3
azure-mgmt-redhatopenshift==1.0.0
kubernetes==18.20.0
openshift==0.12.1

How can you have that?

$ poetry add openshift@0.12.1 kubernetes@18.20.0

Updating dependencies
Resolving dependencies... (0.2s)

  SolverProblemError

  Because openshift (0.12.1) depends on kubernetes (>=12.0,<13.0)
   and pltlib depends on kubernetes (18.20.0), openshift is forbidden.
  So, because pltlib depends on openshift (0.12.1), version solving failed.

pip, same story:

openshift 0.12.1 requires kubernetes~=12.0, but you have kubernetes 18.20.0 which is incompatible.

@sodul

sodul commented Sep 24, 2021

We consider the new pip resolver to be broken: it takes several minutes to decide how to resolve the dependency tree, for example, and we have to use poorly managed packages such as Azure. We do not actually use openshift, but Azure insists on importing it.

Try --use-deprecated=legacy-resolver the next time you pip install. You'll notice that it is much, much faster and will result in fewer installation errors. I do understand that the new resolver is more 'correct', but until all package maintainers get saner dependencies (Azure again) and pip fixes the horrendous performance of the new resolver, we will steer clear of it.

@zdzichu

zdzichu commented Dec 12, 2021

There's a workaround (using an older version), but what about a proper fix?

@origliante

origliante commented Dec 13, 2021

@zdzichu try the following out:

From:
https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_module.html

Please update your tasks to use the new name kubernetes.core.k8s instead. It will be removed in version 3.0.0 of community.kubernetes.

Updating the tasks to kubernetes.core.k8s picks up the fixed underlying module.

ansible-core-2.12.0
ansible_runner-2.0.3
kubernetes-18.20.0
(no openshift)

@OlGe404

OlGe404 commented Feb 14, 2022

Any updates on this?

I'm using a Python venv to set up the environment to run Ansible in, with the following specs, but it just doesn't work. I tested the K8S_AUTH_KUBECONFIG and KUBECONFIG env vars, with and without each other, and my kubeconfig file does work when using the kubectl CLI.

ansible-galaxy collections:

Collection           Version
-------------------- -------
amazon.aws           3.0.0  
community.general    4.3.0  
community.kubernetes 2.0.1  
kubernetes.core      2.2.3 

pip packages:

ansible==4.10.0
ansible-compat==1.0.0
ansible-core==2.11.8
ansible-lint==5.4.0
arrow==1.2.2
bcrypt==3.2.0
binaryornot==0.4.4
boto3==1.20.54
botocore==1.23.54
bracex==2.2.1
cachetools==5.0.0
Cerberus==1.3.2
certifi==2021.10.8
cffi==1.15.0
chardet==4.0.0
charset-normalizer==2.0.12
click==8.0.3
click-help-colors==0.9.1
colorama==0.4.4
commonmark==0.9.1
cookiecutter==1.7.3
cryptography==36.0.1
enrich==1.2.7
google-auth==2.6.0
idna==3.3
Jinja2==3.0.3
jinja2-time==0.2.0
jmespath==0.10.0
kubernetes==11.0.0
MarkupSafe==2.0.1
molecule==3.6.0
oauthlib==3.2.0
packaging==21.3
paramiko==2.9.2
pathspec==0.9.0
pkg_resources==0.0.0
pluggy==1.0.0
poyo==0.5.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.21
Pygments==2.11.2
PyNaCl==1.5.0
pyparsing==3.0.7
python-dateutil==2.8.2
python-slugify==5.0.2
PyYAML==6.0
requests==2.27.1
requests-oauthlib==1.3.1
resolvelib==0.5.4
rich==11.2.0
rsa==4.8
ruamel.yaml==0.17.21
ruamel.yaml.clib==0.2.6
s3transfer==0.5.1
six==1.16.0
subprocess-tee==0.3.5
tenacity==8.0.1
text-unidecode==1.3
urllib3==1.26.8
wcmatch==8.3
websocket-client==1.2.3
yamllint==1.26.3

anon-software pushed a commit to anon-software/turing-pi-cluster that referenced this issue Mar 12, 2022
The issue and suggested solution that worked here are explained at
kubernetes-client/python#1333 (comment)

It appears that the combination of using the deprecated k8s module and installing the
Kubernetes client Python library as a dependency of the openshift library causes the
problem. Since the openshift library by itself is not needed, we can pull in the
Kubernetes library explicitly. Specifying the new Kubernetes module for Ansible
completes the fix.
@Mionsz

Mionsz commented Apr 28, 2022

Have you looked at the .kube/config file? There is a command line for token acquisition:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [...]
    server: https://[...].gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
    user: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
  name: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws

This was causing issues when using a wait condition for kubernetes.core.k8s_info.
You should change the exec part and use a static token.
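
A rough illustration of the static-token idea, expressed directly against the Python client rather than by editing the kubeconfig (all values below are placeholders, not taken from this thread):

from kubernetes import client

configuration = client.Configuration()
configuration.host = "https://<cluster-endpoint>"            # placeholder server URL
configuration.api_key = {"authorization": "<static-token>"}  # placeholder token
configuration.api_key_prefix = {"authorization": "Bearer"}
# configuration.ssl_ca_cert = "/path/to/ca.crt"              # optional CA bundle (placeholder path)

api = client.CoreV1Api(client.ApiClient(configuration))
print([ns.metadata.name for ns in api.list_namespace().items])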

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jul 27, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Aug 26, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) on Sep 25, 2022
@enisozgen

I deleted Ansible via apt remove and installed a new version. I don't have this error now.

ansible                9.1.0
ansible-core           2.16.2
kubernetes             28.1.0
openshift              0.13.2
