Client Certificate Authentication doesn't work with ACM Certificate #9756
Comments
The second listener on the API ELB seems straightforward. If export config is using …
I agree with @johngmyers. A second TCP listener on the API ELB seems the best solution, just when …
A second TCP listener would work with setups that don't provide access to the instances directly, such as private topologies without VPN. I had concerns about the addition or removal of the second listener when the sslCertificate field is set or unset and how it might invalidate existing kubeconfig files, but adding or removing the field on its own is enough to invalidate existing kubeconfig files, since the CA would need to be added or removed anyway. Needing to also update the server port is trivial.

I guess the remaining source of confusion could be users upgrading their Kops version and seeing an additional listener being added, but we can make this change prominent in the release notes. Would there be concerns with using a nonstandard port to send TLS traffic? I'm thinking of corporate proxy situations, but it seems like they would be using their own CA rather than relying on an ACM certificate. Very locked-down firewalls might make upgrading more troublesome for users, but I suppose that would happen regardless of which option we choose here. I'll open a PR for the second-listener approach shortly.
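For illustration, the second-listener approach could be prototyped by hand on a classic ELB with the AWS CLI. This is only a sketch: the load balancer name, security group id, and port 8443 are hypothetical placeholders, not values taken from this thread.

```shell
# Sketch only: add a plain TCP listener alongside the existing TLS/ACM one,
# so client-certificate traffic reaches the apiserver without ACM termination.
# "api-my-cluster", port 8443, and the group id below are placeholders.
aws elb create-load-balancer-listeners \
  --load-balancer-name api-my-cluster \
  --listeners "Protocol=TCP,LoadBalancerPort=8443,InstanceProtocol=TCP,InstancePort=443"

# Allow inbound traffic on the new port in the ELB's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8443 --cidr 0.0.0.0/0
```

The kubeconfig would then point at the nonstandard port (e.g. https://api.my-cluster.example.com:8443), so TLS, and therefore client-cert auth, terminates at the apiserver itself rather than at the ACM-backed listener.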
Is this new behavior in kops 1.18? We are running into the same issue. We use an AWS ELB with an ACM cert applied in front of the master instances. It had been working great with kops until now. Are there any other possible workarounds for this issue?
The behavior that changed in Kops 1.18 is that basic auth is now disabled by default. It can be re-enabled in Kubernetes 1.18 by following these docs, but instead setting the API field value to …
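As a sketch of what that re-enablement might look like in the kops cluster spec: the disableBasicAuth field name here is my assumption based on the kops API, not a value quoted in this thread, so verify it against the kops docs for your version.

```yaml
# Hypothetical fragment for `kops edit cluster` (Kubernetes 1.18):
# re-enables basic auth on the apiserver, which Kops 1.18 disables by default.
spec:
  kubeAPIServer:
    disableBasicAuth: false
```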
This should be fixed in v1.19.0-beta.1 by migrating to an NLB. See https://github.com/kubernetes/kops/blob/master/permalinks/acm_nlb.md for more info. /close
@rifelpet: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
1. What kops version are you running? The command kops version will display this information.
Version 1.18.0 (git-698bf974d8)
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
1.18.6
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kops export kubecfg; kops -v 3 rolling-update cluster
5. What happened after the commands executed?
6. What did you expect to happen?
rolling update to succeed
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with the most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else we need to know?
When an ACM certificate is provided for the API ELB, the listener is switched from TCP to TLS. This causes client certificate authentication to break. Before Kops 1.18, users with kubeconfig files generated by kops export kubecfg were implicitly falling back to basic authentication. With basic auth being deprecated and removed, we'll need to provide a method for client cert auth to work when ACM certificates are used.

Ideas:
- Kops creates an api.internal.$clustername A record that points to the master internal IPs. We could use this domain name in the kubeconfig file and have the client bypass the API ELB entirely. This requires: … "Add --internal flag for export kubecfg that targets the internal dns name" #9732 implements this idea, minus the security group changes required.
- Create a second (TCP) listener on the API ELB without the certificate, pointing to the same master ports
- Establish an SSH tunnel to the masters, bypassing the API ELB
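The SSH-tunnel idea above could look roughly like the following sketch; the bastion host, master IP, and ports are hypothetical placeholders, and this assumes the apiserver certificate includes 127.0.0.1 as a SAN (which kops-issued certs typically do).

```shell
# Sketch: forward a local port to a master's apiserver, bypassing the ELB.
# "bastion.example.com" and "10.0.1.10" are placeholders.
ssh -N -L 6443:10.0.1.10:443 admin@bastion.example.com &

# Point kubectl at the tunnel; TLS (and client-cert auth) now terminates
# at the apiserver itself, not at the ACM-backed ELB listener.
kubectl --server=https://127.0.0.1:6443 get nodes
```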