Failing CloudInit and Kubelet setup during worker node startup #18
Comments
We dug into this offline. It turns out that the root cause here is that some worker nodes were being created in public subnets and had no route to the EKS API server.
If both public and private subnets are attached to an EKS cluster, the API server will only be exposed to the private subnets. Any workers attached to the public subnets will be unable to contact the API server and will fail to register as nodes. These changes filter public subnets from the set of subnets passed to the worker nodes iff the EKS cluster is also attached to private subnets. Fixes #18.
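The filtering described above can be sketched as a small helper. This is an illustrative sketch, not the actual @pulumi/eks implementation: the `Subnet` shape and the `workerSubnets` name are assumptions, using the AWS convention that a subnet which auto-assigns public IPs on launch is considered public.

```typescript
// Minimal subnet shape assumed for this sketch; the real fix operates
// on AWS subnet data queried by the @pulumi/eks provider.
interface Subnet {
    id: string;
    // AWS convention: a subnet that auto-assigns public IPs is "public".
    mapPublicIpOnLaunch: boolean;
}

// Return only the private subnets iff at least one private subnet is
// attached; a cluster with exclusively public subnets keeps its full set,
// since filtering there would leave the workers with no subnets at all.
function workerSubnets(subnets: Subnet[]): Subnet[] {
    const privateSubnets = subnets.filter(s => !s.mapPublicIpOnLaunch);
    return privateSubnets.length > 0 ? privateSubnets : subnets;
}
```

With a mixed set of subnets, only the private ones would be passed to the worker node group; with an all-public set, the original list is returned unchanged.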
This was due to an incorrect EKS networking setup: workers were deploying to public subnets. The fix is to place workers only on the private subnets.
If both public and private subnets are attached to an EKS cluster, the API server will only be exposed to the private subnets. Any workers attached to the public subnets will be unable to contact the API server and will fail to register as nodes. These changes filter public subnets from the set of subnets passed to the worker nodes iff the EKS cluster is also attached to private subnets. Fixes pulumi#18. cherry-picked from fe96413
Within the past day, all EKS clusters I create via @pulumi/eks's new eks.Cluster have been failing to start worker nodes. One symptom is that some or all of the nodes fail to join the EKS cluster. A snippet of the log is pasted further below. The code used to create the cluster, which was working properly last week, is here:
The journalctl output on one of the hosts shows the following problems:
failed to run Kubelet: could not init cloud provider "aws": error finding instance i-0f016b7e718a12801
Failed to start Apply the settings specified in cloud-config.
Unit cloud-config.service entered failed state.