Stop and Restart Kubernetes Cluster on AWS #15160
Comments
If it helps, I can post the complete System Log of the EC2 instance.
I have exactly the same problem and I don't have any clue about what's going on. My master node does not come up after being stopped/rebooted. My setup is:
The logs from the master node boot contain:
and so on... And then finally:
Could you please help us out with this one?

UPDATE 1:

UPDATE 2: Could this be related to the fact that the instance storage on m3.medium is only 4GB?
@romanek-adam Maybe this helps you too!
Having the same issue with ps. Cluster built on AWS using:
@stemau98 I wouldn't dare remove the disk formatting code. It might have worked for you, but the results are unpredictable. There's definitely something wrong with m3 instance types, so the issue name should be updated accordingly.
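For reference, a guessed workaround rather than anything confirmed in this thread: the emergency-mode boot reported below looks like systemd refusing to continue when the ephemeral-disk mount fails. Marking that mount with `nofail` usually lets boot proceed without it. The device name and mount point in this sketch are assumptions, and if an existing `/mnt/ephemeral` entry is already in fstab, the option should be added to that line instead of appending a new one.

```sh
# Hedged sketch, not a fix from this thread: make the ephemeral mount non-critical
# so a missing or unformatted instance-store volume no longer drops the node into
# emergency mode. Device name (/dev/xvdb) and mount point are assumptions.
echo '/dev/xvdb /mnt/ephemeral ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
```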
@romanek-adam & @stemau98 what version/s are you on? |
On Kubernetes version 1.1, but I compiled it myself.
@geoah 1.0.6 |
FYI: Restarting minions works fine on m4 instances. |
I see an issue with the master which is related: |
We too hit the restart failure on 1.1 with m3.medium. |
Just met this issue on v1.1.3 (the kubectl config was empty, data on the attached EBS storage was probably lost, I wasn't able to regain control and recover with etcd, and I couldn't find more info about how to re-configure the master back into an existing cluster on AWS).

kubectl get nodes:

From the kubelet log:
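As a side note (not from this thread): if only the client configuration was lost while the API server itself is still healthy, a kubeconfig entry can be rebuilt by hand. The `<master-ip>`, certificate paths, and names below are placeholders, and this does not recover lost etcd data.

```sh
# Hedged sketch: rebuild the kubectl context for an existing cluster.
# Endpoint, certificate paths, and names are placeholders, not values from this issue.
kubectl config set-cluster aws-kube --server=https://<master-ip> \
  --certificate-authority=/path/to/ca.crt
kubectl config set-credentials aws-admin \
  --client-certificate=/path/to/admin.crt --client-key=/path/to/admin.key
kubectl config set-context aws-kube --cluster=aws-kube --user=aws-admin
kubectl config use-context aws-kube
kubectl get nodes   # should list the minions again if the API server is healthy
```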
This is a major deal for production use.
Same issue with m3.medium and Ubuntu Vivid. Lost the cluster after the master rebooted (emergency mode).
This is now fixed in 1.2: restart & stop/start should work reliably. |
Is there a way to only stop a Kubernetes cluster on AWS rather than destroying the whole cluster?
I tried stopping instances running the Ubuntu Vivid image and got an error after booting my instances for the second time.
The System Log of the instances shows "Welcome to emergency mode", "Dependency failed for Local File Systems", and "Dependency failed for /mnt/ephemeral".
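For illustration only (the kube-up/kube-down scripts in this release have no "stop" mode): instances can be stopped rather than terminated with the AWS CLI and started again later. The `KubernetesCluster` tag filter below is an assumption based on how kube-up.sh tags AWS resources and may need adjusting; as the comments above note, m3-based nodes on 1.0/1.1 may still fail to come back up after a stop.

```sh
# Hedged sketch: stop (not terminate) every instance tagged for the cluster, then
# start them again later. The tag name and value are assumptions and may differ.
IDS=$(aws ec2 describe-instances \
  --filters "Name=tag:KubernetesCluster,Values=kubernetes" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" --output text)

aws ec2 stop-instances --instance-ids $IDS    # stop the cluster
# ...later...
aws ec2 start-instances --instance-ids $IDS   # bring it back
```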