unable to apply cluster api stack to bootstrap cluster #81
The command gets further for me than it does for you, although it still doesn't finish:
I diff'd my provider components yaml file against yours and, other than the expected differences, the only thing I see is the changes merged in #85.
From your debugging output, it looks like the CRD is successfully created (that's what is defined in the provider components yaml), but the machine(s) cannot be successfully applied to your cluster. What does your machines.yaml look like? And can you verify that you have the correct validation (the disk part is at https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/master/config/crds/gceproviderconfig_v1alpha1_gcemachineproviderspec.yaml#L19-L36)?
Thanks very much, Robert, for taking a look. Let me know what needs to be done.
Those files look the same as mine (diffs only show different project names). I also noticed that you are using 1.12.4 instead of 1.12.0 as the k8s version in minikube but that doesn't seem to change any output that I'm seeing. What version of kubectl is on your path? I think that there was an issue with kubectl at some point and I stopped upgrading. It looks like I'm still using 1.8.11:
The issue that caused me to pin kubectl was kubernetes-sigs/cluster-api#137. It looks like that may have been fixed in a later release, but I haven't gone back and tried newer ones.
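For anyone else hitting the validation problem, the version check above can be scripted. This is only a rough sketch: `kubectl version --client --short` is assumed to be available (true for the kubectl versions discussed in this thread), and treating anything newer than 1.8 as suspect is just an assumption based on the pin mentioned above.

```shell
# Sketch: warn if the kubectl client is newer than the pinned version.
kubectl_minor() {
  # Extract the client minor version, e.g. "9" from "Client Version: v1.9.0".
  kubectl version --client --short 2>/dev/null | sed -E 's/.*v1\.([0-9]+).*/\1/'
}

check_kubectl() {
  local minor
  minor="$(kubectl_minor)"
  if [ "$minor" -gt 8 ]; then
    # Threshold is an assumption from the pin discussed above.
    echo "kubectl v1.$minor detected; consider pinning to v1.8.11 (see kubernetes-sigs/cluster-api#137)"
    return 1
  fi
  echo "kubectl v1.$minor looks OK"
}

# Usage: check_kubectl
```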
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Changing the kubectl client version to v1.8.11 and modifying the code to pass validate==false still doesn't work; I get the same output.
So "timed out waiting for the condition" looks like it means that the machine didn't go ready in time. You might want to try passing …. I'm not sure if "Cleaning up bootstrap cluster" means the objects were deleted, but you could check the status of the machine in the default namespace during the 30-minute timeout; maybe there are some clues there. You could also SSH to the machine using ….
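Checking the Machine object during the timeout window could be scripted roughly like this. The machine name, retry count, and interval are placeholders to adjust from the clusterctl log output; this assumes the kubectl context points at the bootstrap (minikube) cluster.

```shell
# Sketch: poll a cluster-api Machine object while clusterctl waits for it.
wait_for_machine() {
  local name="$1" tries="${2:-30}" interval="${3:-60}"
  local i=1
  while [ "$i" -le "$tries" ]; do
    # Phase and error details show up under the Machine's status.
    kubectl get machine "$name" -n default -o yaml | grep -A5 'status:' || true
    sleep "$interval"
    i=$((i + 1))
  done
}

# Usage: wait_for_machine gce-master-njnlj 30 60
```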
I did --v=10 earlier and below is the output:

I0103 17:00:57.298647 22785 clusterclient.go:577] Waiting for Machine gce-master-njnlj to become ready...

I tried SSH, but the machine doesn't seem to exist:

bsingarayan@bsingarayan-mbp ~/g/s/s/c/p/c/c/clientset> gcloud compute ssh gce-master-njnlj
[49] us-east4-c ERROR: (gcloud.compute.ssh) Could not fetch resource:
Alternatively, I logged into the GCP console and I don't see any machines in the Compute Engine instances section.
@bsingarayan - either changing the kubectl version or modifying the code to disable validation fixed your initial error, and we are now seeing the same issue.
Found the first issue: I'd pushed a new version of the gcp-provider-controller-manager image and forgot to set the ACLs to publicly readable. You should now see the master machine get created on GCP when running clusterctl with the latest image.
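For anyone hitting the same problem with their own registry, one way to make a GCR registry publicly readable is to grant read access on its backing GCS bucket. This is a sketch, not the fix the author applied: the project ID is a placeholder, and it assumes the bucket uses IAM (where `gsutil iam ch` works).

```shell
# Sketch: make images in a GCR registry publicly pullable.
make_registry_public() {
  local project="$1"
  # GCR stores image layers in a GCS bucket named artifacts.<project>.appspot.com;
  # granting allUsers objectViewer lets anyone pull from the registry.
  gsutil iam ch allUsers:objectViewer "gs://artifacts.${project}.appspot.com"
}

# Usage: make_registry_public my-gcp-project   # project ID is a placeholder
```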
On the GCE VM I see both
kube-proxy is running on the master but what is interesting is that while I can successfully curl the service IP from on the machine, I cannot reach it from within a busybox pod that I run on the master. And I noticed that coredns on the master is also crash looping (just a bit slower than the cluster api provider pods):
From my busybox pod I see a timeout trying to reach both the cluster IP and also the external IP for the kubernetes service:
Since the problem seems to be a lack of pod-to-pod network connectivity, I tried installing Calico into my cluster, but that doesn't seem to have fixed anything.
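The busybox connectivity test described above can be sketched as a helper. The cluster IP and port are placeholders (look them up with `kubectl get svc kubernetes`); busybox's wget cannot speak TLS, so this probes the port with a raw TCP connection via `nc` instead, and the `busybox:1.28` tag is only a conventional pin, not something from this thread.

```shell
# Sketch: probe a service IP from inside a throwaway pod to test
# pod-to-service connectivity.
pod_connectivity_check() {
  local ip="$1" port="${2:-443}"
  kubectl run conn-test --image=busybox:1.28 --restart=Never --rm -i -- \
    sh -c "nc -w 5 ${ip} ${port} </dev/null && echo reachable || echo unreachable"
}

# Usage: pod_connectivity_check "$(kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}')"
```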
Your system looks much better. I updated my sandbox with your commits, and I am running into the error below; the machines are not getting created.

./bin/clusterctl create cluster --provider google -c cmd/clusterctl/examples/google/out/cluster.yaml -m cmd/clusterctl/examples/google/out/machines.yaml -p cmd/clusterctl/examples/google/out/provider-components.yaml -a cmd/clusterctl/examples/google/out/addons.yaml --minikube="kubernetes-version=v1.12.4" --v=4

Everything looks great. Please enjoy minikube!
I0104 11:54:29.398109 28375 createbootstrapcluster.go:37] Cleaning up bootstrap cluster.
I get the error below when using the same clusterctl create command:

I0104 15:26:40.158070 70886 clusterclient.go:574] Waiting for Machine gce-master-nplkn to become ready...
...
I'm hitting this exact issue, where
Could you please share the exact steps to fix this issue as a workaround? It would unblock me and get things going. Thanks.
@babu-selector this is the hack that I did:
Hope it's useful for you.
Thanks a lot @girikuncoro. minikube logs show that the cluster creation is not progressing.
Another thing I noticed is that the GCP provider pod wasn't getting created in minikube.
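For gathering clues about a stuck bootstrap cluster like this, a small diagnostic sketch (all standard kubectl/minikube subcommands; the `tail` counts are arbitrary choices):

```shell
# Sketch: snapshot the bootstrap cluster's state to see what is (or isn't)
# running and what minikube itself reports.
check_bootstrap() {
  # List every pod in every namespace, including the provider pods.
  kubectl get pods --all-namespaces
  # Recent cluster events often show image-pull or scheduling failures.
  kubectl get events --all-namespaces | tail -n 20
  # Low-level logs from the minikube VM itself.
  minikube logs | tail -n 50
}

# Usage: check_bootstrap
```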
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I am following the exact guidelines shown in https://github.com/kubernetes-sigs/cluster-api-provider-gcp#getting-started and cluster creation fails.

![screen shot 2018-12-27 at 1 48 40 pm](https://user-images.githubusercontent.com/19411970/50495492-50882480-09de-11e9-8038-b6d2e0fd45a0.png)

Please let me know how to proceed. I haven't done anything fancy, just followed the guidelines. Below is some useful information.
bsingarayan@bsingarayan-mbp ~/g/s/s/c/c/c/e/g/out> minikube version
minikube version: v0.32.0
I manually brought up minikube and applied the provider-components.yaml file and it had the same issue.
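The manual reproduction described above might look like the following sketch. The file name and Kubernetes version are taken from this thread; the `--bootstrapper` flag and the final watch step are assumptions, not steps the author confirmed.

```shell
# Sketch: bring up minikube by hand and apply the provider components,
# then watch for the provider pods.
apply_provider_components() {
  local file="${1:-provider-components.yaml}"
  minikube start --kubernetes-version=v1.12.4 --bootstrapper=kubeadm
  kubectl apply -f "$file"
  # The provider namespace varies, so list pods everywhere.
  kubectl get pods --all-namespaces
}

# Usage: apply_provider_components provider-components.yaml
```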
![screen shot 2018-12-27 at 1 52 16 pm](https://user-images.githubusercontent.com/19411970/50495563-c1c7d780-09de-11e9-9df1-23aff79b5366.png)
Below is the provider-components.yaml file for reference
provider-components.yaml.txt
BTW, I also discussed this issue in the cluster-api Slack thread and was advised to create an issue here.
Thanks