Control plane load balancer SSL health check fails #17

Closed
mkmik opened this issue Dec 12, 2022 · 15 comments · Fixed by #18

mkmik commented Dec 12, 2022

After applying the sample config:

$ kubectl apply -f samples/aws/k3s-cluster.yaml

cluster-api-k3s successfully creates the VPC, control plane instance, and load balancer.

However, the load balancer doesn't like how the apiserver on the control plane machine is talking HTTPS:

[screenshots of the failing ELB health check omitted]

When I change the health check type to TCP, it works just fine. The rest of the CAPI machinery successfully connects to the apiserver and proceeds with the bootstrap of the worker node. The CA and the certificates appear to be correct.


mkmik commented Dec 12, 2022

aws elb describe-load-balancers --region us-east-1 says:

        {
            "LoadBalancerName": "k3-test-8-apiserver",
            "DNSName": "k3-test-8-apiserver-2070937331.us-east-1.elb.amazonaws.com",
            "CanonicalHostedZoneName": "k3-test-8-apiserver-2070937331.us-east-1.elb.amazonaws.com",
            "CanonicalHostedZoneNameID": "Z35SXDOTRQ7X7K",
            "ListenerDescriptions": [
                {
                    "Listener": {
                        "Protocol": "TCP",
                        "LoadBalancerPort": 6443,
                        "InstanceProtocol": "TCP",
                        "InstancePort": 6443
                    },
                    "PolicyNames": []
                }
            ],
            "Policies": {
                "AppCookieStickinessPolicies": [],
                "LBCookieStickinessPolicies": [],
                "OtherPolicies": []
            },
            "BackendServerDescriptions": [],
            "AvailabilityZones": [
                "us-east-1a"
            ],
            "Subnets": [
                "subnet-0b9782cedb617555a"
            ],
            "VPCId": "vpc-06f0de2c855d7990c",
            "Instances": [
                {
                    "InstanceId": "i-0fe77be91e59bf886"
                }
            ],
            "HealthCheck": {
                "Target": "SSL:6443",
                "Interval": 10,
                "Timeout": 5,
                "UnhealthyThreshold": 3,
                "HealthyThreshold": 5
            },
            "SourceSecurityGroup": {
                "OwnerAlias": "908067222188",
                "GroupName": "k3-test-8-apiserver-lb"
            },
            "SecurityGroups": [
                "sg-0cf2f8a9f723f1240"
            ],
            "CreatedTime": "2022-12-12T14:24:33.550000+00:00",
            "Scheme": "internet-facing"
        }

I can successfully create aws+kubeadm clusters, which use a similarly configured LB:

aws elb describe-load-balancers --region eu-central-1:

        {
            "LoadBalancerName": "test4-apiserver",
            "DNSName": "test4-apiserver-833021180.eu-central-1.elb.amazonaws.com",
            "CanonicalHostedZoneName": "test4-apiserver-833021180.eu-central-1.elb.amazonaws.com",
            "CanonicalHostedZoneNameID": "Z215JYRZR1TBD5",
            "ListenerDescriptions": [
                {
                    "Listener": {
                        "Protocol": "TCP",
                        "LoadBalancerPort": 6443,
                        "InstanceProtocol": "TCP",
                        "InstancePort": 6443
                    },
                    "PolicyNames": []
                }
            ],
            "Policies": {
                "AppCookieStickinessPolicies": [],
                "LBCookieStickinessPolicies": [],
                "OtherPolicies": []
            },
            "BackendServerDescriptions": [],
            "AvailabilityZones": [
                "eu-central-1b",
                "eu-central-1c",
                "eu-central-1a"
            ],
            "Subnets": [
                "subnet-02183db7a7be39f9f",
                "subnet-097f8c9c6eabd1b1e",
                "subnet-0ee4ca0f1bd467507"
            ],
            "VPCId": "vpc-0ad0425bf1e41496c",
            "Instances": [
                {
                    "InstanceId": "i-01a294ee6d59db373"
                }
            ],
            "HealthCheck": {
                "Target": "SSL:6443",
                "Interval": 10,
                "Timeout": 5,
                "UnhealthyThreshold": 3,
                "HealthyThreshold": 5
            },
            "SourceSecurityGroup": {
                "OwnerAlias": "908067222188",
                "GroupName": "test4-apiserver-lb"
            },
            "SecurityGroups": [
                "sg-0fc014bfd24c431e6"
            ],
            "CreatedTime": "2022-12-07T16:30:18.010000+00:00",
            "Scheme": "internet-facing"
        }


mkmik commented Dec 12, 2022

wireshark:

request from AWS load balancer:

Frame 52: 375 bytes on wire (3000 bits), 375 bytes captured (3000 bits)
Ethernet II, Src: MS-NLB-PhysServer-23_d1:98:6a:3c (02:17:d1:98:6a:3c), Dst: 02:cf:f1:83:e8:74 (02:cf:f1:83:e8:74)
Internet Protocol Version 4, Src: 10.0.23.36, Dst: 10.0.75.123
Transmission Control Protocol, Src Port: 14052, Dst Port: 6443, Seq: 1, Ack: 1, Len: 309
Transport Layer Security
    TLSv1.2 Record Layer: Handshake Protocol: Client Hello
        Content Type: Handshake (22)
        Version: TLS 1.0 (0x0301)
        Length: 304
        Handshake Protocol: Client Hello
            Handshake Type: Client Hello (1)
            Length: 300
            Version: TLS 1.2 (0x0303)
            Random: edea7523bc25036e9973d78474293ffe350d45e484478c658f5bff932cd5864b
            Session ID Length: 32
            Session ID: 48bfc0ae99baa54d78851973141692a4bd45f127cf2390e645151354ef037e61
            Cipher Suites Length: 58
            Cipher Suites (29 suites)
                Cipher Suite: TLS_RSA_WITH_AES_256_GCM_SHA384 (0x009d)
                Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA256 (0x003d)
                Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
                Cipher Suite: TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (0x0084)
                Cipher Suite: TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c)
                Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA256 (0x003c)
                Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
                Cipher Suite: TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (0x0041)
                Cipher Suite: TLS_RSA_WITH_RC4_128_SHA (0x0005)
                Cipher Suite: TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)
                Cipher Suite: TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 (0x00a3)
                Cipher Suite: TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x009f)
                Cipher Suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (0x006b)
                Cipher Suite: TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 (0x006a)
                Cipher Suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x0039)
                Cipher Suite: TLS_DHE_DSS_WITH_AES_256_CBC_SHA (0x0038)
                Cipher Suite: TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (0x0088)
                Cipher Suite: TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA (0x0087)
                Cipher Suite: TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 (0x00a2)
                Cipher Suite: TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x009e)
                Cipher Suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (0x0067)
                Cipher Suite: TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 (0x0040)
                Cipher Suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA (0x0033)
                Cipher Suite: TLS_DHE_DSS_WITH_AES_128_CBC_SHA (0x0032)
                Cipher Suite: TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (0x0045)
                Cipher Suite: TLS_DHE_DSS_WITH_CAMELLIA_128_CBC_SHA (0x0044)
                Cipher Suite: TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (0x0016)
                Cipher Suite: TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA (0x0013)
                Cipher Suite: TLS_EMPTY_RENEGOTIATION_INFO_SCSV (0x00ff)
            Compression Methods Length: 1
            Compression Methods (1 method)
            Extensions Length: 169
            Extension: session_ticket (len=129)
                Type: session_ticket (35)
                Length: 129
                Data (129 bytes)
            Extension: signature_algorithms (len=32)
            [JA3 Fullstring: 771,157-61-53-132-156-60-47-65-5-10-163-159-107-106-57-56-136-135-162-158-103-64-51-50-69-68-22-19-255,35-13,,]
            [JA3: 39d351a74309878c74d0beb9dda7cc4f]

response from k3s apiserver:

Frame 17: 73 bytes on wire (584 bits), 73 bytes captured (584 bits)
Ethernet II, Src: 0a:f7:16:4a:ff:2d (0a:f7:16:4a:ff:2d), Dst: 0a:3c:38:5d:f6:49 (0a:3c:38:5d:f6:49)
Internet Protocol Version 4, Src: 10.0.149.210, Dst: 10.0.125.179
Transmission Control Protocol, Src Port: 6443, Dst Port: 40346, Seq: 1, Ack: 149, Len: 7
Transport Layer Security
    TLSv1.2 Record Layer: Alert (Level: Fatal, Description: Handshake Failure)
        Content Type: Alert (21)
        Version: TLS 1.2 (0x0303)
        Length: 2
        Alert Message
            Level: Fatal (2)
            Description: Handshake Failure (40)

For comparison, this is what a kubeadm-based deployment responds to the AWS LB:

Frame 56: 117 bytes on wire (936 bits), 117 bytes captured (936 bits)
Ethernet II, Src: MS-NLB-PhysServer-23_d1:98:6a:3c (02:17:d1:98:6a:3c), Dst: 02:cf:f1:83:e8:74 (02:cf:f1:83:e8:74)
Internet Protocol Version 4, Src: 10.0.23.36, Dst: 10.0.75.123
Transmission Control Protocol, Src Port: 14052, Dst Port: 6443, Seq: 310, Ack: 138, Len: 51
Transport Layer Security
    TLSv1.2 Record Layer: Change Cipher Spec Protocol: Change Cipher Spec
        Content Type: Change Cipher Spec (20)
        Version: TLS 1.2 (0x0303)
        Length: 1
        Change Cipher Spec Message
    TLSv1.2 Record Layer: Handshake Protocol: Encrypted Handshake Message
        Content Type: Handshake (22)
        Version: TLS 1.2 (0x0303)
        Length: 40
        Handshake Protocol: Encrypted Handshake Message

Frame 57: 97 bytes on wire (776 bits), 97 bytes captured (776 bits)
Ethernet II, Src: MS-NLB-PhysServer-23_d1:98:6a:3c (02:17:d1:98:6a:3c), Dst: 02:cf:f1:83:e8:74 (02:cf:f1:83:e8:74)
Internet Protocol Version 4, Src: 10.0.23.36, Dst: 10.0.75.123
Transmission Control Protocol, Src Port: 14052, Dst Port: 6443, Seq: 361, Ack: 138, Len: 31
Transport Layer Security
    TLSv1.2 Record Layer: Encrypted Alert
        Content Type: Alert (21)
        Version: TLS 1.2 (0x0303)
        Length: 26
        Alert Message: Encrypted Alert

Frame 64: 205 bytes on wire (1640 bits), 205 bytes captured (1640 bits)
Ethernet II, Src: 02:cf:f1:83:e8:74 (02:cf:f1:83:e8:74), Dst: MS-NLB-PhysServer-23_d1:98:6a:3c (02:17:d1:98:6a:3c)
Internet Protocol Version 4, Src: 10.0.75.123, Dst: 10.0.2.164
Transmission Control Protocol, Src Port: 6443, Dst Port: 5974, Seq: 18576, Ack: 656, Len: 139
Transport Layer Security
    TLSv1.2 Record Layer: Application Data Protocol: Hypertext Transfer Protocol
        Content Type: Application Data (23)
        Version: TLS 1.2 (0x0303)
        Length: 134
        Encrypted Application Data: 17b244717e4deddcc941c318594232bf613ae4da985e6e21035c345ee2818928cfbb631b…
        [Application Data Protocol: Hypertext Transfer Protocol]

....


mkmik commented Dec 12, 2022

This correlates with the k3s logs:

$  journalctl -u k3s
....
Dec 12 16:06:52 ip-10-0-149-210 k3s[1337]: time="2022-12-12T16:06:52.065175784Z" level=info msg="Cluster-Http-Server 2022/12/12 16:06:52 http: TLS handshake error from 10.0.11.159:9252: tls: no cipher suite supported by both client and server"


mkmik commented Dec 12, 2022

# cat /etc/rancher/k3s/config.yaml
cluster-init: true
disable-cloud-controller: true
kube-apiserver-arg:
- anonymous-auth=true
- tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384


mkmik commented Dec 12, 2022

So, the k3s API server is explicitly configured to accept TLS_RSA_WITH_AES_256_GCM_SHA384 (among others).

The same cipher is offered in the client handshake: Cipher Suite: TLS_RSA_WITH_AES_256_GCM_SHA384 (0x009d).

There must be something else going on.


mkmik commented Dec 12, 2022

Another example: TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 is supported by k3s, and we can use it locally with:

$ curl -k https://localhost:6443 --ciphers DHE-RSA-AES128-GCM-SHA256

but it fails with:

$ curl -k https://localhost:6443 --ciphers DHE-RSA-AES128-GCM-SHA256  --tlsv1.2 --tls-max 1.2
curl: (35) error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure

I find the TLS cipher/version matrix very hard to keep up with; I'll try to figure out which ciphers need to be enabled so that k3s works with TLS 1.2.
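One thing that makes the matrix a bit easier to navigate: `openssl ciphers -stdname` (OpenSSL 1.1.1+) prints each suite under both its IANA name (the form Wireshark and the k3s config use) and its OpenSSL name (the form curl's `--ciphers` expects). A small sketch:

```shell
# Map the OpenSSL cipher name used with curl's --ciphers back to the IANA
# name seen in the Wireshark capture and the k3s config. -stdname prints
# "IANA_NAME - OpenSSL-name protocol Kx=... Au=..." for each matching suite.
openssl ciphers -stdname 'DHE-RSA-AES128-GCM-SHA256'
# TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 - DHE-RSA-AES128-GCM-SHA256 ... Kx=DH Au=RSA ...
```

The `Kx=` (key exchange) and `Au=` (certificate authentication) fields are what matter for the mismatch being chased here.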


zawachte commented Dec 12, 2022

Thanks @mkmik, I got caught on this one as well.

I meant to post an issue when I was first looking into running this on aws. The only workaround I got working was changing the healthcheck from ssl to tcp after cluster creation.

I started a thread in slack about this https://kubernetes.slack.com/archives/CD6U2V71N/p1637946333265800.

Never cracked this ... had to move on to other things but it looks like you are already at the point where I gave up.

@richardcase works on cluster-api aws and is building a rke2 provider. He may have some insights into this issue.


mkmik commented Dec 12, 2022

According to sslyze, the apiserver only supports elliptic-curve cipher suites (possibly because the cert has been created that way?):

 * TLS 1.2 Cipher Suites:
     Attempted to connect using 156 cipher suites.

     The server accepted the following 3 cipher suites:
        TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256     256       ECDH: X25519 (253 bits)
        TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384           256       ECDH: prime256v1 (256 bits)
        TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256           128       ECDH: prime256v1 (256 bits)

     The group of cipher suites supported by the server has the following properties:
       Forward Secrecy                    OK - Supported
       Legacy RC4 Algorithm               OK - Not Supported


mkmik commented Dec 13, 2022

I methodically compared the difference in behavior between a CAPA kubeadm-based deployment and a cluster-api-k3s-based one.

I ran a sample Go program with config generated by https://ssl-config.mozilla.org/#server=go&version=1.19&config=intermediate&hsts=false&guideline=5.6

which is the same config generator used by k3s itself, see https://github.com/k3s-io/k3s/blob/f8b661d590ecd1ed2ed04b3c51ff5e6d67cb092b/pkg/cli/server/server.go#L380

// generated 2022-12-13, Mozilla Guideline v5.6, Go 1.14.4, intermediate configuration, no HSTS
// https://ssl-config.mozilla.org/#server=go&version=1.14.4&config=intermediate&hsts=false&guideline=5.6
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte("This server is running the Mozilla intermediate configuration.\n"))
	})

	cfg := &tls.Config{
		MinVersion:               tls.VersionTLS10,
		PreferServerCipherSuites: true,
		CipherSuites: []uint16{
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
			tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
			tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
			tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_RSA_WITH_AES_128_CBC_SHA256,
			tls.TLS_RSA_WITH_AES_128_CBC_SHA,
			tls.TLS_RSA_WITH_AES_256_CBC_SHA,
			tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
		},
	}

	srv := &http.Server{
		Addr:      ":6443",
		Handler:   mux,
		TLSConfig: cfg,
		// Consider setting ReadTimeout, WriteTimeout, and IdleTimeout
		// to prevent connections from taking resources indefinitely.
	}

	log.Fatal(srv.ListenAndServeTLS(
		"/root/apiserver.crt",
		"/root/apiserver.key",
	))
}

I shut down the api-server and ran it on port 6443, looking at the TLS handshakes performed by the AWS LB using tcpdump.

Let's call the host running kubeadm HostA and the one running k3s Host3.

The same program worked on HostA and didn't work on Host3.
The only difference was the TLS keypair on the two machines.
I copied the keypair from HostA to Host3 and it worked on Host3 too.

tcpdump revealed that the negotiated cipher suite was TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c).

I modified the test program to include the ciphers included by default by k3s:

        cfg := &tls.Config{
                MinVersion: tls.VersionTLS12,
                CipherSuites: []uint16{
                        tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
                        tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
                        tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
                        tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
                        tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
                        tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
                        tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
                },
        }

It didn't work until I added tls.TLS_RSA_WITH_AES_128_GCM_SHA256, which is like the included tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 but without the ECDHE key exchange.

This cipher is not chosen when I use the certificate created during k3s initialization. I conclude the certificate is not compatible with that cipher.
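For what it's worth, the mismatch can be reproduced entirely locally without the ELB (a sketch; the port, file names, and CN are arbitrary): serve with a freshly generated P-256 key, as k3s does, and have the client offer only a static-RSA suite, which is roughly what the ELB health check ends up doing.

```shell
#!/bin/sh
# Sketch: reproduce the handshake failure locally. An EC-only server cannot
# satisfy a client that offers only RSA-authenticated cipher suites.
set -e
openssl ecparam -name prime256v1 -genkey -noout -out ec.key
openssl req -new -x509 -key ec.key -subj "/CN=test" -days 1 -out ec.crt
openssl s_server -accept 8443 -cert ec.crt -key ec.key -www >/dev/null 2>&1 &
SRV=$!
sleep 1
# TLS_RSA_WITH_AES_128_GCM_SHA256 needs an RSA certificate for its key
# exchange; -tls1_2 keeps TLS 1.3 (which ignores -cipher) out of the picture.
if openssl s_client -connect 127.0.0.1:8443 -tls1_2 \
    -cipher AES128-GCM-SHA256 </dev/null >/dev/null 2>&1; then
  echo "handshake unexpectedly succeeded"
else
  echo "handshake failure (as seen by the ELB health check)"
fi
kill $SRV
```

Swapping in an RSA keypair (`openssl genrsa`) makes the same client succeed, matching the HostA/Host3 keypair-copy experiment above.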

TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 means:

  • the certificate contains an RSA public key;
  • the key exchange is done using ECDHE (ephemeral elliptic-curve Diffie–Hellman);
  • the symmetric cipher used after the key exchange is AES-GCM with a 128-bit key;
  • the PRF (pseudo-random function) used in the handshake is SHA-256.

I have layperson knowledge of how TLS works, and I had assumed that the DH exchange didn't depend on the asymmetric crypto used to authenticate the certificate (here RSA, I assume).

Yesterday I quickly checked the CA certificate and found nothing unusual:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Dec 12 14:21:18 2022 GMT
            Not After : Dec  9 14:26:18 2032 GMT
        Subject: CN = kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus: <.......>
                Exponent: 65537 (0x10001)

However, when looking at the actual certificate used by the server (which is signed by the CA but is not the CA certificate), I can see it's using elliptic-curve crypto:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8063256806938441330 (0x6fe671584e50f672)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Dec 12 14:21:18 2022 GMT
            Not After : Dec 12 14:26:55 2023 GMT
        Subject: CN = kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:55:59:f2:0e:57:4d:4b:ed:a6:47:38:ee:6d:43:
                    20:6e:45:b8:a8:44:a9:19:2d:01:0f:da:72:7c:8a:
                    03:d5:17:ba:55:8a:79:26:67:97:80:65:4a:3d:54:
                    37:ef:3d:af:98:83:cc:09:80:19:a9:3b:7b:11:8c:
                    42:25:d0:d5:69
                ASN1 OID: prime256v1
                NIST CURVE: P-256

The file is /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt, presumably created either by the k3s installer or by the k3s binary itself on first run.
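The key type is easy to check with `openssl x509`. As a self-contained sketch (generating a throwaway P-256 cert the way k3s does; file names here are arbitrary):

```shell
# Generate a P-256 keypair and self-signed cert, then inspect the public-key
# algorithm; the k3s serving cert shows the same id-ecPublicKey / prime256v1.
openssl ecparam -name prime256v1 -genkey -noout -out demo.key
openssl req -new -x509 -key demo.key -subj "/CN=kube-apiserver" -days 1 -out demo.crt
openssl x509 -in demo.crt -noout -text | grep -A1 'Public Key Algorithm'
# Public Key Algorithm: id-ecPublicKey
```

Against the real file: openssl x509 -in /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt -noout -text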


Things to investigate:

  • what's the easiest way to let k3s create the server certificate using Public Key Algorithm: rsaEncryption?
  • is it possible to teach ELB to support TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256?
  • how hard would it be to configure the ELB to use a TCP health check instead of the "SSL" health check? Who creates that LB? CAPA?


mkmik commented Dec 13, 2022

> how hard would it be to configure the ELB to use a TCP health check instead of the "SSL" health check? Who creates that LB? CAPA?

kubernetes-sigs/cluster-api-provider-aws#3124 implemented the healthCheckProtocol field in the controlPlaneLoadBalancer


mkmik commented Dec 13, 2022

Trying out:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: k3-test-8
spec:
#  bastion:
#    enabled: true
  network:
    vpc:
      availabilityZoneUsageLimit: 1
  region: us-east-1
  sshKeyName: default
  controlPlaneLoadBalancer:
    healthCheckProtocol: TCP


mkmik commented Dec 13, 2022

> is it possible to teach ELB to support TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256?

Confirmed: no.


mkmik commented Dec 13, 2022

OK, successfully created a k3s cluster!

$ clusterctl describe cluster k3-test-9
NAME                                                          READY  SEVERITY  REASON  SINCE  MESSAGE
Cluster/k3-test-9                                             True                     113s
├─ClusterInfrastructure - AWSCluster/k3-test-9                True                     3m1s
├─ControlPlane - KThreesControlPlane/k3-test-9-control-plane  True                     113s
│ └─Machine/k3-test-9-control-plane-8lgzc                     True                     2m35s
└─Workers
  └─MachineDeployment/k3-test-9-md-0                          True                     47s
    └─Machine/k3-test-9-md-0-f8c778d8f-mp4qb                  True                     83s


mkmik commented Dec 13, 2022

@zawachte thanks for the links; they contained the necessary references to learn that CAPA can actually control the healthcheck protocol.

I propose closing this issue with https://github.com/zawachte/cluster-api-k3s/pull/18

zawachte commented:

Oh! This was the feature I was looking for years ago!

Happy to see they implemented it and everything is working smoothly. Great find!
