The library always uses port 80 when using the K8S_AUTH_KUBECONFIG env var #1333
Comments
Hi @odra. Are you sure it worked in a previous version of the package? Does kubectl work in your environment? Try to set [...]
I tried to set both vars (KUBECONFIG and K8S_AUTH_KUBECONFIG) and it did not work. Yes, kubectl does work in my environment, and it works with version 11.x of the library. I was actually wondering if there were any changes in the expected kubeconfig file format (for 12.x) such that it is trying to retrieve the server URL from another property.
I'm seeing the same issue in my environments. One is Fedora 33, the other CentOS 8. Both received package updates recently and the python3-kubernetes client got bumped to 12.0.1. Setting the kubeconfig via env var, or directly on the module, did not work.
I am also seeing the same issue in our environment; we are using CentOS 7.9. The Kubernetes client got upgraded to 12.0.1 and that's breaking our pipeline. We use the Ansible Kubernetes collection to deploy applications into Kubernetes; with this change, even when passing the kubeconfig directly, it always fails complaining that the request is plain HTTP.
I am seeing the same issue with our molecule tests using python-kubernetes.
I'm experiencing the same problem. I'm using Ansible's k8s module and have v12.0.1 of the Kubernetes Python client installed.
Experienced the same issue. Pinned the k8s module to v11.0 and it's back to working. Also had to pin an [...]
Yep, same error for me as well. Downgraded to v11 to make it work.
Credits to @genevieve for the find. For all those having the same issue: [...]
Thank you for the workaround @jeroentorrekens
Faced the same issue. Downgrading kubernetes to v11 helped. Thanks! Note: if you install openshift 0.12, it automatically reinstalls kubernetes at v12, so keep that in mind.
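For anyone automating the rollback described above, a minimal sketch as an Ansible task. The exact openshift pin is an assumption based on this thread (0.11.x is the release line that still depends on kubernetes 11):

```yaml
- name: Pin the Kubernetes Python client stack to the last known-good versions
  ansible.builtin.pip:
    name:
      - kubernetes==11.0.0
      - openshift==0.11.2  # assumed 0.11.x pin; openshift 0.12 pulls kubernetes 12 back in
    state: present
```

Pinning both packages in the same step matters because, as noted above, installing a newer openshift silently reinstalls kubernetes 12.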
Another voice to the list of impacted folks. We had to roll back to 11.0.0. The lack of fixes is concerning as we are about to migrate to k8s 1.18 and soon 1.19, and 11.0.0 is not guaranteed to work well beyond k8s 1.17. @tomplus there are several reports of this regression. Is a fix planned in the near future?
- RKE - Fix versions to avoid kubernetes-client/python#1333
## [2.12.1](https://gitlab.com/p3r.one/apollo/compare/v2.12.0...v2.12.1) (2021-05-03)

### Bug Fixes

* pinned kubernetes and openshift to earlier versions (kubernetes-client/python#1333) ([59f247b](https://gitlab.com/p3r.one/apollo/commit/59f247bd67ec10af76d52d687dfc0e4f878144e3))
Plus one on the impact of this bug.
Just tested with [...]; we are still getting [...]
We were able to move forward with the latest version of the client (17.17.0) and the OpenShift library (0.12.0) by upgrading Ansible from 2.x to the latest 4.x (4.0.0). The rest of our code had no issue.
Hi, today I faced the same issue. As per the above, I installed the [...]
Those versions don't seem to work for me: [...] And I'm still getting the same error message.
@jeroentorrekens I'm not saying it will solve the problem in your case, just that it works for us. Good luck.
Third-party dependency where a higher version causes issues. More details here: kubernetes-client/python#1333
The problem disappeared after I installed the following versions: ansible==4.5.0 [...]
Are you sure? With the following, it still happens here: [...]
@origliante As an updated data point, the problem is now gone for us, and we have these versions: [...]
How can you have that? [...] With pip, same story: [...]
We consider the new pip resolver to be broken: it takes several minutes to decide how to resolve the dependency tree, for example, and we have to use poorly managed packages such as Azure. We do not actually use openshift, but azure insists on importing it. Try [...]
There's a workaround (using an older version), but what about a proper fix?
@zdzichu Try the following out. From: [...] By updating the tasks to kubernetes.core.k8s, the underlying module is fixed. See the sketch below.
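A sketch of the kind of change being suggested; the task body and manifest path are hypothetical, and the only change that matters is the module name:

```yaml
# Before: the deprecated k8s module, which resolves through the openshift library
- name: Apply a manifest (old module)
  k8s:
    state: present
    src: /path/to/manifest.yml

# After: the maintained module from the kubernetes.core collection, which talks
# to the kubernetes Python client directly
- name: Apply a manifest (new module)
  kubernetes.core.k8s:
    state: present
    src: /path/to/manifest.yml
```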
Any updates on this? I'm using a Python venv to set up the environment to run Ansible in with the following specs, but it just doesn't work. I tested the K8S_AUTH_KUBECONFIG and KUBECONFIG envs, with and without each other, and my kubeconfig file does work when using the kubectl CLI. ansible-galaxy collections: [...] pip packages: [...]
The issue and the suggested solution that worked here are explained at kubernetes-client/python#1333 (comment). It appears that the combination of using the deprecated k8s module and installing the Kubernetes client Python library as a dependency of the openshift library causes the problem. Since the openshift library by itself is not needed, we can pull in the Kubernetes library explicitly. Specifying the new Kubernetes module for Ansible completes the fix.
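In practice that fix has two parts: depend on the kubernetes Python package directly (pip install kubernetes instead of pip install openshift) and install the kubernetes.core collection. A sketch of the collection requirements file, assuming the standard ansible-galaxy layout:

```yaml
# collections/requirements.yml
# Install with: ansible-galaxy collection install -r collections/requirements.yml
collections:
  - name: kubernetes.core
```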
Have you looked at the [...]? Ours looks like this:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [...]
    server: https://[...].gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
    user: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
  name: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws
```

This was causing issues when using a wait condition for kubernetes.core.k8s_info.
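For reference, a hypothetical task of the shape that comment describes: a kubernetes.core.k8s_info lookup with a wait condition (resource names are placeholders):

```yaml
- name: Wait until the Deployment reports Available
  kubernetes.core.k8s_info:
    api_version: apps/v1
    kind: Deployment
    name: my-app          # placeholder name
    namespace: default
    wait: true
    wait_condition:
      type: Available
      status: "True"
    wait_timeout: 300     # seconds
```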
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can: [...]

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can: [...]

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can: [...]

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this: [...]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I deleted Ansible via apt remove and installed a new version. I don't have this error now.
What happened (please include outputs or screenshots):
I created a kubeconfig file stored in a different folder than the default one, and I get the following error: [...]
What you expected to happen:
A successful request.
How to reproduce it (as minimally and precisely as possible):

1. pip install kubernetes==12.0.1
2. Set K8S_AUTH_KUBECONFIG to the new kubeconfig file path

Anything else we need to know?
I am using the k8s ansible module, but it works if I use an older version of the library (11.0.0).
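Expressed as an Ansible task, the failing setup looks roughly like this sketch (the task body is a placeholder; the env var and client version are the ones from the steps above):

```yaml
# Assumes: pip install kubernetes==12.0.1
#          export K8S_AUTH_KUBECONFIG=/path/to/custom/kubeconfig
- name: Any read through the k8s modules then goes to port 80 instead of the API server
  k8s_info:
    kind: Namespace
```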
Environment:
- Kubernetes version (`kubectl version`): [...]
- Python version (`python --version`): Python 3.7.7
- Python client version (`pip list | grep kubernetes`): 12.0.1