
Curl inside a pod: curl: (60) SSL certificate problem: unable to get local issuer certificate #12924

Closed
rahulanand16nov opened this issue Nov 10, 2021 · 11 comments
Labels
  • kind/support: Categorizes issue or PR as a support question.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • long-term-support: Long-term support issues that can't be fixed in code.

Comments

@rahulanand16nov

Hello maintainers,

I was trying to learn Kubernetes and got stuck for hours on an issue that prevents making HTTPS requests from any container to the outside.

Things I did:
Created a minikube/kind cluster:

minikube start

Installed Istio components:

istioctl install

Tried to execute a curl request from the istiod component and got:

istio-proxy@istiod-67764fc6c9-5cj57:/$ curl -v https://raw.githubusercontent.com/istio/tools/release-1.11/bin/root-transition.sh
*   Trying 95.216.67.149:443...
* TCP_NODELAY set
* Connected to raw.githubusercontent.com (95.216.67.149) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

I can curl fine from the outside.
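
For reference, one way to see which issuer certificate the server is actually presenting to the pod (a sketch; it assumes openssl is available in the container image):

kubectl exec -it -n istio-system istiod-67764fc6c9-5cj57 -- sh -c 'openssl s_client -connect raw.githubusercontent.com:443 -servername raw.githubusercontent.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject'

If the issuer printed there is not a public CA, something between the pod and the internet is re-signing the TLS traffic.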

@medyagh
Member

medyagh commented Nov 10, 2021

@rahulanand16nov

istio-proxy@istiod-67764fc6c9-5cj57:

1. It does not seem like you are inside minikube. Are you using minikube ssh to get inside?
2. Are you using a VPN?

3. Can you please attach this file:

minikube logs --out=log.txt

@rahulanand16nov
Author

rahulanand16nov commented Nov 11, 2021

@medyagh 1. I can curl fine when using minikube ssh, but not when inside any container.

I applied the following resource:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: toystore
  labels:
    app: toystore
spec:
  selector:
    matchLabels:
      app: toystore
  template:
    metadata:
      labels:
        app: toystore
    spec:
      containers:
        - name: toystore
          image: quay.io/3scale/authorino:echo-api
          env:
            - name: PORT
              value: "3000"
          ports:
            - containerPort: 3000
              name: http
  replicas: 1
---
apiVersion: v1
kind: Service
metadata:
  name: toystore
spec:
  selector:
    app: toystore
  ports:
    - port: 80
      protocol: TCP
      targetPort: 3000

Then I used kubectl exec deployment/toystore -it -- /bin/bash and ran the same curl command as above; it fails with the same certificate issue.

The fact that I can run the same curl inside a similar container on another machine makes it look like a Kubernetes/minikube problem.

  2. No, I am not using a VPN.

  3. logs.txt
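
A sanity check that separates "the container's CA bundle is broken" from "the chain itself is untrusted" (a sketch; the bundle path assumes a Debian-based image like the one above):

kubectl exec deployment/toystore -- sh -c 'grep -c "BEGIN CERTIFICATE" /etc/ssl/certs/ca-certificates.crt'

A healthy bundle holds well over a hundred certificates; a count of zero or a missing file would point at the image rather than the cluster.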

@rahulanand16nov
Author

I deleted the whole cluster and created it again... To my surprise, the output is different this time:

* TCP_NODELAY set
* Expire in 149985 ms for 3 (transfer 0x561dc14b7f50)
* Expire in 200 ms for 4 (transfer 0x561dc14b7f50)
* Connected to www.google.com (95.216.67.149) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, certificate expired (557):
* SSL certificate problem: certificate has expired
* Closing connection 0
curl: (60) SSL certificate problem: certificate has expired
More details here: https://curl.haxx.se/docs/sslcerts.html

After searching, I stumbled upon kubeadm certs renew all:

MISSING! certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself
MISSING! certificate for serving the Kubernetes API
MISSING! certificate the apiserver uses to access etcd
MISSING! certificate for the API server to connect to kubelet
MISSING! certificate embedded in the kubeconfig file for the controller manager to use
MISSING! certificate for liveness probes to healthcheck etcd
MISSING! certificate for etcd nodes to communicate with each other
MISSING! certificate for serving etcd
MISSING! certificate for the front proxy client
MISSING! certificate embedded in the kubeconfig file for the scheduler manager to use

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
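
For what it's worth, certificate expiry can also be inspected from inside the node before renewing (a sketch; assumes kubeadm is reachable on the node, which in minikube may mean the full path under /var/lib/minikube/binaries/). These are the cluster's internal PKI certs, separate from the public CA bundle curl consults inside a pod:

minikube ssh
# inside the node:
sudo kubeadm certs check-expiration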

This is consistent across kind clusters as well, so it might be a Kubernetes config issue. Can anyone point me in the right direction?

@spowelljr spowelljr added the kind/support Categorizes issue or PR as a support question. label Nov 15, 2021
@spowelljr
Member

Hi @rahulanand16nov, when you start a fresh (non-existing) minikube cluster, kubeadm init is run: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/

There you can see the following certs commands are run:

certs                        Certificate generation
  /ca                          Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                   Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client    Generate the certificate for the API server to connect to kubelet
  /front-proxy-ca              Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client          Generate the certificate for the front proxy client
  /etcd-ca                     Generate the self-signed CA to provision identities for etcd
  /etcd-server                 Generate the certificate for serving etcd
  /etcd-peer                   Generate the certificate for etcd nodes to communicate with each other
  /etcd-healthcheck-client     Generate the certificate for liveness probes to healthcheck etcd
  /apiserver-etcd-client       Generate the certificate the apiserver uses to access etcd
  /sa                          Generate a private key for signing service account tokens along with its public key

So this should be handled on a fresh start. Previously, if you had a long-running cluster that was never deleted, the certs would never be renewed; but as of minikube v1.24.0, cert renewal runs whenever minikube start is executed on an existing cluster.
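
One way to confirm the renewal actually happened is to check the apiserver certificate dates on the node (a sketch; the path assumes minikube's default certificate location):

minikube ssh
# inside the node:
sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt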

Does that answer your question?

@spowelljr spowelljr added triage/needs-information Indicates an issue needs more information in order to work on it. long-term-support Long-term support issues that can't be fixed in code labels Dec 15, 2021
@rahulanand16nov
Author

Hi @spowelljr, thanks for the reply! I am already using minikube v1.24.0:

❯ minikube version
minikube version: v1.24.0
commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b

Even after deleting everything (I did a complete reinstall of the OS as well), I still get the following output:

❯ k exec -it -n istio-system istiod-67764fc6c9-wg2kg -- /bin/bash
istio-proxy@istiod-67764fc6c9-wg2kg:/$ curl -v https://www.google.com
*   Trying 195.201.199.239:443...
* TCP_NODELAY set
* Connected to www.google.com (195.201.199.239) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate

@medyagh medyagh changed the title Not able to make curl requests out of any component Curl inside a pod: curl: (60) SSL certificate problem: unable to get local issuer certificate Jan 12, 2022
@medyagh
Member

medyagh commented Jan 12, 2022

@rahulanand16nov I wonder, do you have a corp cert? (Are you using a corp laptop?)

Have you tried copying your corp root cert into minikube?

Here are the instructions:
https://minikube.sigs.k8s.io/docs/handbook/untrusted_certs/
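
In short, the flow documented there is to drop the cert into ~/.minikube/certs and recreate the cluster (the filename below is a placeholder for your corp root CA):

cp my-corp-root-ca.pem ~/.minikube/certs/
minikube delete
minikube start --embed-certs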

@rahulanand16nov
Author

@medyagh Yup, found 2015-RH-IT-Root-CA.pem in /etc/ssl/certs.

Copied it:

❯ ls ~/.minikube/certs
2015-RH-IT-Root-CA.pem  ca-key.pem  ca.pem  cert.pem  key.pem

Then I deleted the cluster and started it again with minikube start --embed-certs.

Same issue :(
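
One narrower check that wasn't tried here (a hypothetical sketch; pod name and paths are illustrative): copy the corp CA into the pod and hand it to curl directly, which isolates whether that CA would validate the intercepted chain at all:

kubectl cp 2015-RH-IT-Root-CA.pem istio-system/istiod-67764fc6c9-wg2kg:/tmp/corp-ca.pem
kubectl exec -it -n istio-system istiod-67764fc6c9-wg2kg -- curl --cacert /tmp/corp-ca.pem -v https://www.google.com

If that succeeds, the missing trust anchor is the corp CA; as far as I understand, certs copied via ~/.minikube/certs land in the node's trust store, while each container brings its own CA bundle from its image, so the pod would still need the CA mounted or baked in.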

@spowelljr spowelljr removed the triage/needs-information Indicates an issue needs more information in order to work on it. label Feb 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 3, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
