
There is no local etcd, removing 'etcdctl set'. #6281

Closed

Conversation

@gosharplite (Contributor)

No description provided.

@brendandburns (Contributor)

@kelseyhightower thoughts?

@rjnagal (Contributor) commented Apr 6, 2015

@kelseyhightower can you take a look?

@erictune (Member)

@AntonioMeireles can you take a look at this PR and say if it looks safe?

@erictune (Member)

@pires any comment here?

@erictune (Member)

@bussyjd this touches the same area as your #6890. Want to comment on this PR?

@pires (Contributor) commented Apr 16, 2015

To me it feels like we should have kept a single etcd on the master node, where the apiserver runs, and not a cluster like @AntonioMeireles implemented.

In this case, removing this line (or the entire entry) is OK, because:

flannel:
    interface: eth1
    etcd_endpoints: http://<master-private-ip>:4001
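
For context, a minimal sketch of where the removed 'etcdctl set' belongs once etcd runs only on the master: a flanneld drop-in in the master's cloud-config seeds flannel's network config in the master's local etcd, so the nodes only need the etcd_endpoints entry above. The drop-in layout and the network range are assumptions for illustration, not taken from this PR.

#cloud-config
coreos:
  units:
    - name: flanneld.service
      drop-ins:
        - name: 50-network-config.conf
          content: |
            [Service]
            # Seed flannel's subnet config in the master's local etcd
            # before flanneld starts (range is a placeholder).
            ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.2.0.0/16"}'
      command: start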

@bussyjd (Contributor) commented Apr 17, 2015

@erictune If a single master node is the way to go, then it makes sense IMO to get rid of this line, along with @pires's PR.
I am sorry if I bring back an old discussion, but running etcd only on the master feels like it might cause some issues in niche cases when the master is not available (a flanneld network config update, for example). It defeats the purpose of etcd's HA.
Now might be the right time to weigh the pros and cons, if it hasn't been done before; otherwise it's totally fine to discard #6555 and #6890 in favor of #6281.

@pires (Contributor) commented Apr 17, 2015

@bussyjd @AntonioMeireles was the one who brought etcd HA to light, since I was in favor of a simpler approach. People here will tell you etcd is just an implementation detail that may change in the future and, that said, I am led to believe they don't actually care about etcd best practices. And I agree with this idea. So, to me, if you're not happy with a single etcd node cluster, perhaps it's time to assemble it outside of Kubernetes?

Anyway, #6281 works for me.

@bussyjd (Contributor) commented Apr 17, 2015

@pires Thank you for the info.

> So, to me, if you're not happy with a single etcd node cluster, perhaps it's time to assemble it outside of Kubernetes?

Is that in any world a productive answer?

@erictune It is now clear that case #6281 is the way to go.

@pires (Contributor) commented Apr 17, 2015

> Is that in any world a productive answer?

@bussyjd I'm sorry if it offended you in some way, but it's actually how I use it in production. Once again, etcd is just an implementation detail. How coupled do we want its high availability to be to Kubernetes?
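
To make that decoupling concrete, here is a rough sketch of pointing the apiserver at an etcd cluster managed outside Kubernetes, assuming the --etcd_servers flag of apiservers from that era; the unit layout, binary path, and etcd hostnames are made up for illustration.

#cloud-config
coreos:
  units:
    - name: kube-apiserver.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes API Server

        [Service]
        # Point the apiserver at an externally managed etcd cluster
        # (hostnames are placeholders).
        ExecStart=/opt/bin/kube-apiserver \
          --etcd_servers=http://etcd-1.example.internal:4001,http://etcd-2.example.internal:4001 \
          --address=0.0.0.0 \
          --port=8080
        Restart=always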

@AntonioMeireles (Contributor)

[sorry for the lag, multitasking too much]

several issues/angles at stake (and now we even have etcd2 to grasp, as it is already in the latest CoreOS alpha).

  • one approach we could use is the old one where the minions are absolutely dumb and have no etcd (1, 2, whatever) of any kind, with everything pointing to the master. with that approach one will always have to "cheat" one way or another and make sure that under no circumstance the nodes get the chance to want/need etcd before the master is up. [this is easy to trigger, especially in Vagrant with fast boxes/SSDs and without VAGRANT_NO_PARALLEL=true.]
  • another approach, which @pires thinks is too much (it is, but handy for testing), is to just have a local etcd cluster spread among master and nodes.
  • finally, the last approach is, while having a single-node etcd master running on the master, to have etcd on all the minions in proxy/client mode pointing to the master. this way, and without further cheating, even if they start well in advance the minions will wait in an orderly way until etcd (and whatever depends on it) is up and ready on the master. [this is what is being done in this master/node setup and works, afaict, flawlessly; see the sketch below.] Also, from my understanding this is the way upstream recommends consuming etcd, and it should cover all use cases.
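
A minimal sketch of that proxy approach, assuming the etcd2 cloud-config keys of the CoreOS release in question (the master IP placeholder follows the notation used earlier in this thread): each minion runs etcd2 with no local data and simply forwards client requests to the master's etcd.

#cloud-config
coreos:
  etcd2:
    # Proxy mode: no local member; all client requests are forwarded
    # to the master's etcd.
    proxy: on
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    initial-cluster: master=http://<master-private-ip>:2380
  units:
    - name: etcd2.service
      command: start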

@pires et al. - any objection to a PR that implements the last option (IMHO the way to go), plus full etcd2 awareness, plus fixing the kubelet/kube-register interaction in 0.15.x? (as afaik these 3 issues are the ones currently affecting this?)

@pires (Contributor) commented Apr 17, 2015

@AntonioMeireles I'm OK with depending on etcd2 (CoreOS 563.0.0) and following the proxy approach.

@AntonioMeireles (Contributor)

@pires can you please, when you have time, hand-pick the mods from my tree into yours (now that they are closer again :-) ) for "independent" testing :-) to see if this time we get rid of all corner cases :-). Anyway, the PR will be available for discussion after my lunch :-)

@pires (Contributor) commented Apr 17, 2015

@AntonioMeireles working on the proxy thing right now.

@pires (Contributor) commented Apr 17, 2015

@AntonioMeireles done and can confirm it's working.

@AntonioMeireles (Contributor)

@pires - thanks - working on the PR.

@AntonioMeireles (Contributor)

fellows,

submitted #6973. the discussion/review should probably jump there 👼

@brendandburns (Contributor)

closing this as it appears to be obsoleted by #6973
