This repository has been archived by the owner on Mar 28, 2020. It is now read-only.
etcd-operator panics on self-hosted bootkube #851
Comments
How did you scale etcd? What requests did you send to the etcd operator? Can you reproduce this issue? Are there any steps to reproduce?
I've updated my comment to make it clearer. In the current cluster state I can reproduce this every time. Let me set up a new cluster and see if I can reproduce it there as well.
On a new cluster, I killed the operator a few times and scaled up and down more than once, but I can't reproduce it anymore. I'll leave it up to you to close this issue.
@janwillies OK. I think we might be hitting a race. I just want to make sure it does not happen all the time and to confirm my guess. We will get it fixed for you soon.
hongchaodeng added five commits to hongchaodeng/etcd-operator that referenced this issue on Mar 2, 2017.
I'm running a self-hosted bootkube cluster (see kubernetes-retired/bootkube#346), and when trying to scale etcd I ran into the following problems:
Scaling etcd:
Output:
etcd-operator log:
I'm guessing it's because etcd-operator panics and restarts, and then can't find the already-running etcd cluster ("size": 0).
cc @xiang90 @hongchaodeng
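For context, here is a minimal Go sketch of the failure mode guessed at above. It is not taken from etcd-operator's actual code; the types and function names are hypothetical. The idea is that if the operator restarts and reconstructs a cluster record whose recorded size is 0, any code that assumes at least one member will panic, so a guard is needed before using the member list.

```go
package main

import (
	"fmt"
	"log"
)

// ClusterSpec, Member, and Cluster are hypothetical stand-ins for the
// operator's internal types; they are not etcd-operator's real API.
type ClusterSpec struct {
	Size int
}

type Member struct {
	Name string
}

type Cluster struct {
	Spec    ClusterSpec
	Members []Member
}

// pickSeedMember naively returns the first member. If the operator has just
// restarted and recovered a cluster whose recorded size is 0, Members is
// empty and the index expression panics -- roughly the symptom reported here.
func pickSeedMember(c *Cluster) Member {
	return c.Members[0] // panics when len(c.Members) == 0
}

// pickSeedMemberSafe adds the kind of guard that avoids the panic: treat a
// zero-size, memberless cluster as "not yet recovered" and return an error
// instead of crashing the whole operator process.
func pickSeedMemberSafe(c *Cluster) (Member, error) {
	if c.Spec.Size == 0 || len(c.Members) == 0 {
		return Member{}, fmt.Errorf("cluster state not recovered yet (size=%d, members=%d)",
			c.Spec.Size, len(c.Members))
	}
	return c.Members[0], nil
}

func main() {
	// Simulate the state described in the report: after a restart the
	// operator sees a cluster object with "size": 0 and no members.
	recovered := &Cluster{Spec: ClusterSpec{Size: 0}}

	if _, err := pickSeedMemberSafe(recovered); err != nil {
		log.Printf("skipping reconcile: %v", err)
	}
}
```

The guarded version simply defers work until the cluster state has actually been recovered, which is the general pattern for surviving an operator restart without panicking.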