
Help with "SSL: CERTIFICATE_VERIFY_FAILED" error. #198

Open
ghost opened this issue Aug 28, 2018 · 10 comments

@ghost commented Aug 28, 2018

Hello, I need help resolving a CERTIFICATE_VERIFY_FAILED error. The simple test program below errors out at DynamicClient. Prior to running this program, I have already done oc login and can see my namespaces via OpenShift CLI commands, but running the program below results in an error. Is there a configuration step that I missed after installing the REST client? The modules below are what I have installed, and I'm using Python 2.7.

dictdiffer 0.7.1
openshift 0.6.3
kubernetes 6.0.0
Jinja2 2.10
python-string-utils 0.6.0
ruamel.yaml 0.15.61
six 1.11.0

Sample code:

from kubernetes import client, config
from openshift.dynamic import DynamicClient

k8s_client = config.new_client_from_config()
dyn_client = DynamicClient(k8s_client)

Error:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='*.com', port=****): Max retries exceeded with url: /version (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),))

@fabianvf (Member) commented Aug 29, 2018

It looks like it's parsing the host improperly (the host in your error is *.com); you could try something like k8s_client.configuration.host = $REAL_HOST_VALUE. I'm not sure why it wouldn't be picking up the kubeconfig properly, though; can you paste it here? I would also explore the rest of the values in the kubernetes.configuration object and see if they look sane.
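
For example, a quick way to inspect what was actually loaded (the attribute names below are those of the kubernetes Python client's Configuration object; the override URL is a placeholder, not a real server):

from kubernetes import config

k8s_client = config.new_client_from_config()
cfg = k8s_client.configuration
print(cfg.host)         # should be the real API server URL, not '*.com'
print(cfg.ssl_ca_cert)  # CA bundle path loaded from kubeconfig, if any
print(cfg.verify_ssl)   # whether TLS verification is enabled

# If the host really is wrong, it can be overridden directly
# (placeholder URL, not the reporter's actual server):
cfg.host = 'https://openshift.example.com:8443'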

@ghost (Author) commented Aug 30, 2018

Thanks for the reply. I apologize, I should have mentioned earlier that I purposely masked the host and port values when I posted this issue. The kubeconfig does load, as I can see the contents of the k8s_client variable at debug time.

I was able to resolve the issue by adding the line below in the ~/.kube/config file under the cluster section:

- cluster:
    insecure-skip-tls-verify: true

Thanks for directing me to this file; just looking into it more helped resolve my issue. Thank you so much, I think this issue can now be closed.
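
For reference, a programmatic equivalent of that kubeconfig edit, as a minimal sketch (it disables TLS verification entirely, so it is only suitable for testing; load_kube_config and verify_ssl are standard kubernetes-client names):

from kubernetes import client, config
from openshift.dynamic import DynamicClient

# Load the kubeconfig into a Configuration object, then disable TLS
# verification before the ApiClient (and its connection pool) is built.
configuration = client.Configuration()
config.load_kube_config(client_configuration=configuration)
configuration.verify_ssl = False

dyn_client = DynamicClient(client.ApiClient(configuration))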

@fabianvf (Member) commented Aug 31, 2018

Hmm, but you were able to access the cluster without skipping TLS when using oc? We should be able to properly load any configuration object that oc/kubectl can, so I don't want to close the issue out until we figure out what caused the discrepancy.

@ghost (Author) commented Sep 5, 2018

Hi, thanks again for the reply. Yes, that's correct. I was able to access the cluster without skipping TLS.

Here are the steps that I was doing:

  1. Run 'oc login ...' from the command line in order to log in. This command uses https as well as the login token.

  2. Run 'oc projects' from the command line in order to view all OpenShift namespaces. This works fine, as I can see all my namespaces.

  3. Run the test/sample program (see above), which results in the error below (host info and port purposely masked):

urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='******.ocp.***.com', port=***): Max retries exceeded with url: /version (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),))

Then, for the workaround:

  1. The ~/.kube/config file is automatically rewritten by 'oc login' in step 1, removing any prior changes I made, so before running the test program I need to re-insert insecure-skip-tls-verify: true in the 'cluster' section every time (see the sketch after this list for one way to automate that).

  2. Run 'oc projects' again to test whether the oc CLI still works with the skip-TLS line added to the ~/.kube/config file. It still works, and I can still see all my namespaces.

  3. Run the simple test program; this time it works without the error and lists all the OpenShift namespaces I have access to.
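
One way to automate that re-insertion, as a rough sketch using ruamel.yaml (already in the module list above) against the standard ~/.kube/config path:

import os
from ruamel.yaml import YAML

yaml = YAML()
kubeconfig_path = os.path.expanduser('~/.kube/config')

# Re-insert insecure-skip-tls-verify into every cluster entry after
# `oc login` has rewritten the file.
with open(kubeconfig_path) as f:
    kubeconfig = yaml.load(f)

for entry in kubeconfig['clusters']:
    entry['cluster']['insecure-skip-tls-verify'] = True

with open(kubeconfig_path, 'w') as f:
    yaml.dump(kubeconfig, f)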

Here is the complete test/sample program I am running:

from kubernetes import client, config
from openshift.dynamic import DynamicClient

k8s_client = config.new_client_from_config()

dyn_client = DynamicClient(k8s_client)
v1_projects = dyn_client.resources.get(api_version='project.openshift.io/v1', kind='Project')
project_list = v1_projects.get()
for project in project_list.items:
    print(project.metadata.name)

OpenShift and Kubernetes version:

  1. OpenShift Master: v3.7.23
  2. Kubernetes Master: v1.7.6+a08f5eeb62
@fabianvf (Member) commented Sep 5, 2018

Interesting, I'm not able to reproduce this against OpenShift 3.10. Does the cluster section of your kubeconfig have a certificate-authority or certificate-authority-data field? I'm also running newer versions of the openshift client (0.7.1) and kubernetes client (7.0.0).

I'll try to spin up an environment that more closely matches yours to see if something changed in the underlying kubernetes client, though it seems the configuration logic is largely unchanged since May 2017.

@ghost (Author) commented Sep 5, 2018

It does not have the certificate-authority or certificate-authority-data section.

This is the current structure of my kubeconfig file:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: 
  name: 
contexts:
- context:
    cluster: 
    user: 
  name: 
current-context: 
kind: Config
preferences: {}
users:
- name: 
  user:
    token: 
@fabianvf (Member) commented Sep 5, 2018

Hmm, I wonder if there's a default certificate location that's not being set in the configuration object.
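
A quick way to test that theory is to print the CA bundle the configuration actually carries; when the kubeconfig supplies none, the REST layer falls back to certifi's bundle rather than the system store (a sketch, assuming the same clients as above; the later comments pin this down):

import certifi
from kubernetes import config

k8s_client = config.new_client_from_config()
# None here means the REST layer will fall back to certifi's Mozilla bundle
print(k8s_client.configuration.ssl_ca_cert)
print(certifi.where())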

@vinzent commented Jun 27, 2019

Also having this issue:

  • oc login: works fine
  • Python openshift client: CERTIFICATE_VERIFY_FAILED error

OS: RHEL 7.6
python: 2.7.5
openshift: 0.9.0
urllib3: 1.25.3

Testing done:

  • A simple urllib3 request works fine:

import urllib3

http = urllib3.PoolManager()
r = http.request('GET', 'https://openshift.cluster')
print(r.data)

  • curl: works fine
  • wget: works fine

The workaround mentioned above with insecure-skip-tls-verify allows the client to connect.

@vinzent commented Jun 27, 2019

@ghost @fabianvf I think I've located the root cause:

https://github.com/kubernetes-client/python/blob/master/kubernetes/client/rest.py#L77

The kubernetes client calls certifi and passes its bundle to urllib3, overriding the good system CA config. :-(
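
The logic at the linked line reads roughly as follows (paraphrased from kubernetes/client/rest.py; not a runnable snippet, just the relevant branch):

# In RESTClientObject.__init__: choose the CA bundle handed to urllib3.
if configuration.ssl_ca_cert:
    ca_certs = configuration.ssl_ca_cert
else:
    # If no certificate file is set, fall back to Mozilla's root
    # certificates via certifi, bypassing the system CA store.
    ca_certs = certifi.where()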

@vinzent commented Jun 27, 2019

Workaround: set the system CA PEM file as the certificate-authority in .kube/config:

...
clusters:
- cluster:
    certificate-authority: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
    server: https://openshift.cluster
...
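
The same fix can also be applied in code rather than in the kubeconfig, as a sketch (the bundle path is RHEL's, per the comment above; adjust for other distributions):

from kubernetes import client, config
from openshift.dynamic import DynamicClient

# Point the client at the system CA bundle before the connection
# pool is created, mirroring the certificate-authority setting above.
configuration = client.Configuration()
config.load_kube_config(client_configuration=configuration)
configuration.ssl_ca_cert = '/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'

dyn_client = DynamicClient(client.ApiClient(configuration))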