We installed Kubernetes with RKE in our AWS environment following the Rancher HA installation guide: https://rancher.com/docs/rancher/v2.x/en/installation/ha/
All the steps completed mostly fine and the nodes show as healthy targets in the AWS NLB. I do not see any issue with any pods. But when we hit the NLB URL (https://nlburl.amazonaws.com) it returns the error/message "default backend - 404". The same error comes up when I curl localhost from within each of the nodes. Version and other command outputs are shown below.
We expect the NLB URL to serve the Rancher UI for managing the Kubernetes cluster.
Is this expected behaviour? Any thoughts or inputs on how to debug and fix the issue so the Rancher UI loads?
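A quick check that may narrow this down (a sketch, not a confirmed fix): the NGINX ingress controller answers "default backend - 404" whenever a request's Host header matches no ingress rule, and the rancher ingress shown below only routes rancher.mydomain.com, so hitting the raw NLB DNS name or localhost would fall through to the default backend. Sending the expected Host header should show whether host-based routing is the cause; the <nlb-or-node-ip> placeholder is hypothetical and needs a real address.

# Hit a node directly while presenting the hostname the ingress rule expects;
# -k skips certificate verification in case the certificate is not issued yet.
curl -k -H "Host: rancher.mydomain.com" https://localhost/

# Same test against the load balancer without changing DNS;
# <nlb-or-node-ip> is a placeholder for an NLB address or a node IP.
curl -kv --resolve rancher.mydomain.com:443:<nlb-or-node-ip> https://rancher.mydomain.com/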
ubuntu@xxx:/tmp$ ./rke -v
rke version v0.1.14
ubuntu@xxx:/tmp$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
ubuntu@xxx:/tmp$ kubectl --kubeconfig /tmp/kube_config_cluster.yml get ingress -n cattle-system -o wide
NAME      HOSTS                  ADDRESS                   PORTS     AGE
rancher   rancher.mydomain.com   1.2.3.4,5.6.7.8,9.0.1.2   80, 443   19h
ubuntu@xxx:/tmp$ kubectl --kubeconfig /tmp/kube_config_cluster.yml get nodes
NAME      STATUS   ROLES                      AGE   VERSION
1.2.3.4   Ready    controlplane,etcd,worker   21h   v1.11.5
5.6.7.8   Ready    controlplane,etcd,worker   21h   v1.11.5
9.0.1.2   Ready    controlplane,etcd,worker   21h   v1.11.5
ubuntu@xxx:/tmp$ kubectl --kubeconfig /tmp/kube_config_cluster.yml describe ingress -n cattle-system
Name:             rancher
Namespace:        cattle-system
Address:          1.2.3.4,5.6.7.8,9.0.1.2
Default backend:  default-http-backend:80 ()
TLS:
  tls-rancher-ingress terminates rancher.mydomain.com
Rules:
  Host                  Path  Backends
  rancher.mydomain.com
                              rancher:80 ()
Annotations:
  certmanager.k8s.io/issuer: rancher
  field.cattle.io/publicEndpoints: [{"addresses":["1.2.3.4","5.6.7.8","9.0.1.2"],"port":443,"protocol":"HTTPS","serviceName":"cattle-system:rancher","ingressName":"cattle-system:rancher","hostname":"rancher.mydomain.com","allNodes":false}]
  nginx.ingress.kubernetes.io/proxy-connect-timeout: 30
  nginx.ingress.kubernetes.io/proxy-read-timeout: 1800
  nginx.ingress.kubernetes.io/proxy-send-timeout: 1800
Events:
ubuntu@1.2.3.4:/tmp$ curl localhost
default backend - 404
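A few further checks that could help narrow this down (a sketch, assuming the nginx ingress controller that RKE deploys into the ingress-nginx namespace and the rancher service in cattle-system shown in the ingress annotation above):

# Confirm the nginx ingress controller pods are running on every node.
kubectl --kubeconfig /tmp/kube_config_cluster.yml get pods -n ingress-nginx

# Confirm the Rancher pods behind the ingress are running and ready.
kubectl --kubeconfig /tmp/kube_config_cluster.yml get pods -n cattle-system

# Check whether the rancher service has ready endpoints behind the ingress.
kubectl --kubeconfig /tmp/kube_config_cluster.yml get endpoints rancher -n cattle-system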