By default we should seek a hyper-converged architecture #70
Comments
@marcoceppi We could get even more converged if we released a kubernetes-core bundle with our new architecture for those who don't need the ElasticCo stuff.
I agree. I'll discuss this to see what the right direction is.
As an update to this issue, we're exploring making a core bundle, and doing a level of inheritance |
mbruzek added the kind/feature label Oct 5, 2016
chuckbutler assigned wwwtyro Oct 5, 2016
chuckbutler added this to the 16.10 milestone Oct 5, 2016
mbruzek assigned Cynerva Oct 5, 2016
chuckbutler referenced this issue Oct 5, 2016: Kubernetes does not run on the LXD provider in Juju #4 (Closed)
chuckbutler changed the title from "By default we should seek a hyper-converged arcitecture" to "By default we should seek a hyper-converged architecture" Oct 5, 2016
Testing on openstack, where LXD containers are -not- addressable. When deploying a modified bundle with the following mapping:
Will do additional testing to see what else we hit.
Chicken/egg. With etcd in the LXD container, you will either need addressability into that container (via some means of proxy like socat), or the etcd unit has to be in the principal space. Another option would be to deploy an etcd-proxy subordinate that binds on the principal unit, which I know the Project Calico team has pioneered: https://github.com/projectcalico/layer-etcd-proxy/blob/master/metadata.yaml#L26 I think if we swap that out for a True, build this, relate it to the etcd container in lxd, and attach the sub to the kubeapi-load-balancer principal unit, we're in business with converging this and giving the flannel units access to the converged etcd application.
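For reference, a rough sketch of what that could look like. The `subordinate` flag plus a container-scoped relation is the general Juju pattern; the endpoint and interface names below are assumptions for illustration, not copied from the actual layer-etcd-proxy charm:

```yaml
# metadata.yaml for an etcd-proxy subordinate (sketch; names and interfaces assumed)
name: etcd-proxy
subordinate: true            # the flag the comment above suggests flipping to true
requires:
  host:                      # hypothetical container-scoped relation to the principal
    interface: juju-info
    scope: container
  cluster:                   # hypothetical relation to the real etcd application
    interface: etcd
---
# bundle fragment: attach the sub to the load balancer and point it at etcd
# (endpoint names hypothetical)
relations:
  - ["etcd-proxy:host", "kubeapi-load-balancer:juju-info"]
  - ["etcd-proxy:cluster", "etcd:db"]
```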
@chuckbutler Thanks, I'll give that a go later today or tomorrow. For now I'm continuing to look for other problems. With etcd moved to a top-level machine:
The deployment seems to stall with this status:
On 0/lxd/1, flannel service looks like it's having fun:
Unit logs: I'll leave this deployment up in case there's more to look into.
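For illustration, moving etcd to a top-level machine is just a placement change in the bundle; roughly along these lines, with placeholder machine numbers and an illustrative charm URL rather than the exact ones from this deployment:

```yaml
machines:
  "3": {}                       # a dedicated top-level machine for etcd
services:                       # newer bundles call this key "applications:"
  etcd:
    charm: cs:~containers/etcd  # charm URL illustrative
    num_units: 1
    to: ["3"]                   # previously a container placement such as ["lxd:0"]
```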
Looks like the missing files in After manually enabling Hurraaay rabbit hole! I'm going to shift my attention back to etcd now.
If we're willing to deploy etcd-proxy directly on the host machine, would we also be willing to deploy etcd directly on the host machine? I don't think it needs to be a subordinate - there's nothing stopping us from deploying multiple applications to one machine. This deployment, for example, does not show any early problems:
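As a rough illustration (not the exact deployment pasted above), co-locating etcd with a principal application is just two placement directives pointing at the same machine:

```yaml
machines:
  "0": {}
services:
  kubeapi-load-balancer:
    charm: cs:~containers/kubeapi-load-balancer   # charm URL illustrative
    num_units: 1
    to: ["0"]
  etcd:
    charm: cs:~containers/etcd
    num_units: 1
    to: ["0"]        # same machine as the load balancer; no subordinate needed
```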
For now, I propose that we test with this bundle:
4 machines for 1 compute - a bit short of our early hopes. We can reduce that by 1 more if we shove
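For concreteness, one way a 4-machines-for-1-compute layout could be written; the application set and placements are my reading of the thread, not the actual proposed bundle, and relations are omitted:

```yaml
series: xenial
machines:
  "0": {}        # easyrsa
  "1": {}        # etcd
  "2": {}        # kubernetes-master + kubeapi-load-balancer co-located
  "3": {}        # kubernetes-worker (the single compute node)
services:
  easyrsa:
    charm: cs:~containers/easyrsa            # charm URLs illustrative throughout
    num_units: 1
    to: ["0"]
  etcd:
    charm: cs:~containers/etcd
    num_units: 1
    to: ["1"]
  kubernetes-master:
    charm: cs:~containers/kubernetes-master
    num_units: 1
    to: ["2"]
  kubeapi-load-balancer:
    charm: cs:~containers/kubeapi-load-balancer
    num_units: 1
    to: ["2"]
  kubernetes-worker:
    charm: cs:~containers/kubernetes-worker
    num_units: 1
    to: ["3"]
  flannel:
    charm: cs:~containers/flannel            # subordinate; takes no machine
```

Folding one of the smaller pieces (easyrsa, say) onto another machine would be the kind of change that gets this down to 3.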
+1 to this latest suggestion. Should we create a new bundle with a different name in the same repository or overwrite the original bundle? @chuckbutler @wwwtyro @marcoceppi
@Cynerva Good investigative work. I wanted to follow up that I have a similar bundle deployed in our lab environment (manually provisioned).
This seems to be a fairly reasonable deployment, but we'll also need to ensure we're cleaning up after charm removal with this pattern, as it's co-locating on a principal unit. There are certain members of our community who strongly dislike this pattern because it breaks encapsulation. I'll leave it as a thought exercise for our team whether we can reasonably pull this off, or whether we will need to revisit deployment in LXD containers to isolate the applications. The etcd-proxy subordinate would give you the added benefit of forming a quorum on a single host using multiple LXD containers, allowing users to run a proper etcd cluster with replication, but still suffering from a SPOF until we get routable LXD addressing on all clouds, which would then remove the need for the proxy.
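To make the quorum-on-a-single-host idea concrete, the placement would look roughly like this (a sketch only; the etcd-proxy would still be related to etcd and the principal as in the earlier snippet):

```yaml
services:
  etcd:
    charm: cs:~containers/etcd          # charm URL illustrative
    num_units: 3
    to: ["lxd:0", "lxd:0", "lxd:0"]     # three containers on one host: a real quorum,
                                        # but still a single physical point of failure
```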
This was referenced Oct 12, 2016
Closing this, I don't think there's anything left to do here.
marcoceppi commented Oct 3, 2016
If possible, the following components should be placed in LXD machines:
The cost of getting three compute nodes is too high in today's bundle. However, not every cloud (in fact, not many) supports addressability of containers, so the publicly accessible components (kube-api-proxy, kibana) need to be on "metal", along with the workers (since k8s + docker don't operate in lxd /yet/).
But a footprint of 5 machines for 3 compute, instead of 9 machines for 3 compute, seems much more tolerable. This will require intensive testing on public clouds, private clouds, and metal to ensure no regressions.
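In bundle terms, the hyper-converged idea is to stack the control-plane pieces in LXD containers on the same machines that run the workers on metal, keeping only the publicly reachable applications on dedicated machines. A sketch of the 5-machines-for-3-compute shape, with illustrative applications and placements rather than the final bundle:

```yaml
machines:
  "0": {}   # worker on metal + control plane stacked in lxd
  "1": {}
  "2": {}
  "3": {}   # publicly reachable load balancer on metal
  "4": {}   # remaining public-facing pieces (e.g. kibana) on metal
services:
  kubernetes-worker:
    charm: cs:~containers/kubernetes-worker   # charm URLs illustrative
    num_units: 3
    to: ["0", "1", "2"]                 # on metal: docker/k8s don't run in lxd yet
  etcd:
    charm: cs:~containers/etcd
    num_units: 3
    to: ["lxd:0", "lxd:1", "lxd:2"]     # stacked in containers on the worker machines
  kubernetes-master:
    charm: cs:~containers/kubernetes-master
    num_units: 1
    to: ["lxd:0"]
  kubeapi-load-balancer:
    charm: cs:~containers/kubeapi-load-balancer
    num_units: 1
    to: ["3"]                           # on metal so it is publicly addressable
```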