This repository has been archived by the owner on Apr 17, 2019. It is now read-only.

[ansible] add source_type: docker (run Kubernetes parts in containers) #673

Closed
rutsky opened this issue Mar 30, 2016 · 10 comments
Labels
area/ansible lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@rutsky
Contributor

rutsky commented Mar 30, 2016

Most of the Kubernetes components being deployed can be run in Docker containers.
In the CoreOS installation guide all parts of the Kubernetes cluster are deployed in Docker containers (with the kubelet in an rkt container, due to issues with mount namespace propagation AFAIK).

Running Kubernetes in Docker should be almost identical on all platforms (since all platforms have Docker), so this may be a good default way of running Kubernetes.
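
To make the idea concrete, a source_type: docker path in the master role could look roughly like the sketch below. This is only an illustration of the approach, not actual contrib/ansible code; the image, flags, and variables (kube_version, kube_service_addresses) are assumptions.

```yaml
# Sketch only: start kube-apiserver from a hyperkube image instead of
# installing a binary on the host. Image tag, flags, and variables are
# illustrative rather than the real role's values.
- name: Run kube-apiserver in a Docker container
  command: >
    docker run -d --name kube-apiserver --net=host
    gcr.io/google_containers/hyperkube:{{ kube_version }}
    /hyperkube apiserver
    --etcd-servers=http://127.0.0.1:2379
    --service-cluster-ip-range={{ kube_service_addresses }}
  when: source_type == "docker"
```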

This also corresponds to the cluster deployment proposal: as I understand it, all components will run either as a Docker container, a systemd-nspawn container, or as a DaemonSet.

I believe work on running Kubernetes components in containers has already been started by the core Kubernetes developers (for example kubernetes/kubernetes#23233), and it would be nice to coordinate our efforts so that we don't do the same job several times.

/cc: @eparis, @danehans, @mikedanese

@rutsky rutsky changed the title [ansible] add source_type: docker [ansible] add source_type: docker (run Kubernetes parts in containers) Mar 30, 2016
@danehans

@rutsky +1 to this, and to making source_type: docker the default for coreos. I intended to take coreos in this direction; I just wanted to follow the existing design patterns of contrib/ansible first and align with upstream coreos later.

/cc: @adamschaub @stephenrlouie


@rutsky
Contributor Author

rutsky commented Apr 7, 2016

This may be related: kubernetes/kubernetes#23174

According to the initial bug report description, it is assumed that everything (except maybe the kubelet, for now) will be deployed in containers.

@danehans

danehans commented Apr 7, 2016

@rutsky are you waiting on the repo move, or on running the kubelet in a Docker container, before working on this issue? I'm just trying to coordinate, as implementing K8S HA is a high priority in my world. From my understanding, we can implement K8S HA according to best practices [1] by using source_type: docker. If you think it's going to be a while before source_type: docker lands, then I'm inclined to add K8S HA beforehand.

[1] http://kubernetes.io/docs/admin/high-availability/

@gitschaub
Contributor

Is 'source_type' the correct place for this configuration? In the context of the current master role, source_type indicates the process of retrieving/installing the required binaries, not running/monitoring them. Would it be better to add something like 'ansible_service_mgr=docker/systemd/sysvinit', as in https://github.com/kubespray/kargo/blob/master/roles/kubernetes/master/tasks/main.yml#L32?
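
For illustration only, such a switch could be wired up roughly like this; the ansible_service_mgr variable and the included task files are assumptions here, not the kargo code linked above:

```yaml
# Sketch: dispatch the "run/monitor" step on a hypothetical
# ansible_service_mgr variable; the file names are illustrative.
- include: docker.yml
  when: ansible_service_mgr == "docker"

- include: systemd.yml
  when: ansible_service_mgr == "systemd"

- include: sysvinit.yml
  when: ansible_service_mgr == "sysvinit"
```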

@rutsky
Contributor Author

rutsky commented Apr 8, 2016

@danehans I don't understand how source_type: docker would help with a K8s masters HA setup? I think it's orthogonal, as @adamschaub noted.

A masters HA setup is low priority for me atm.

source_type: docker is low priority for me too, but if anyone starts this transition I can help with testing and with fixing related issues that I'm able to reproduce in my local testing setup (Vagrant+VirtualBox+Ansible+CoreOS).

@danehans

danehans commented Apr 8, 2016

@rutsky I met with @adamschaub yesterday to discuss k8s ha in more detail. I agree that source_type: docker is not a requirement for k8s ha.

We would like to implement k8s ha according to best practices [1], which deploy the k8s api/scheduler/controller-manager services in pods. In this scenario, if more than one host is defined under [masters] in the inventory, these k8s services are not deployed the way they are today (using systemd units, config files, etc.).

As @adamschaub mentioned above, maybe it makes sense to introduce ansible_service_mgr=docker/systemd/sysvinit, where specifying multiple [masters] would trigger ansible_service_mgr=docker, instantiating the k8s api/scheduler/controller-manager services in pods.

I am open to other ideas on how to support running these k8s services in pods, while maintaining backwards compatibility. Let us know your thoughts.

[1] http://kubernetes.io/docs/admin/high-availability/
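
For reference, the approach in [1] boils down to the kubelet running static pod manifests dropped into its manifest directory. A heavily trimmed sketch of such a manifest follows; the image tag, binary path, and flags are illustrative, not the guide's exact values:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (sketch; most flags omitted)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/kube-apiserver:v1.2.0  # illustrative tag
    command:
    - /usr/local/bin/kube-apiserver
    - --etcd-servers=http://127.0.0.1:2379
    - --service-cluster-ip-range=10.254.0.0/16
```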

/cc @eparis

@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 16, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 15, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
