
[WIP] kubeadm and add-on docs #1265

Merged
merged 9 commits into from Sep 26, 2016

Conversation

lukemarsden
Contributor

@lukemarsden lukemarsden commented Sep 19, 2016

DO NOT MERGE

This is a WIP.

This PR introduces:

  • a new "quickstart" on how to use kubeadm to easily install Kubernetes
  • a new "add-ons" page, intended to be a user-friendly central place for folks to go to look at the list of add-ons they can install on their cluster

Currently I've only added Weaveworks add-ons to the latter, but we should encourage other add-ons to be added to this list.



@lavalamp
Member

please squash this :)

@lavalamp lavalamp removed their assignment Sep 19, 2016
@lukemarsden
Contributor Author

@lavalamp squashed :)

@caseydavenport
Member

Taking a first pass at this.

I didn't manage to get the Ubuntu instructions to work - the provided install for kubelet-kubeadm didn't work. Not sure if it is meant to yet. Also some minor typos.

Also have a few concerns regarding the addons doc - mostly its relationship with the existing home for Kubernetes addons and the verbosity.

@@ -0,0 +1,37 @@
---
Member

I think it's a bit confusing that these are called "addons", given there are a bunch of addons that already exist in the cluster/addons directory that have different semantics than what is described here. How does this relate to the ones here? https://github.com/kubernetes/kubernetes/tree/master/cluster/addons

I think we either want to document the existing addons in that directory / the relationship this document has to the things in that directory, or rename the things in this document to something else (e.g "extensions", whatever).

Contributor Author

I am thinking what makes most sense is to split the definition of add-ons into "built-in add-ons" and "3rd party add-ons". The former will obey the semantics in the cluster addons README and the latter will be "just kubectl apply -f it" style external add-ons like Weave Net. How does that sound @caseydavenport?
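The "just kubectl apply -f it" style being proposed can be sketched as a one-manifest install. Everything below (the file path, the add-on name, the manifest contents) is an illustrative placeholder, not a real add-on release:

```shell
# Sketch of a "3rd party add-on" install: the add-on ships as a single manifest.
# The manifest here is a hypothetical placeholder for illustration only.
cat > /tmp/addon.yaml <<'EOF'
apiVersion: extensions/v1beta1   # DaemonSet API group in the Kubernetes 1.4 era
kind: DaemonSet
metadata:
  name: example-addon
EOF

# Sanity-check the manifest kind before applying it to a cluster:
grep -q 'kind: DaemonSet' /tmp/addon.yaml && echo "manifest OK"   # prints: manifest OK

# With a running cluster, installation is then a single command:
# kubectl apply -f /tmp/addon.yaml
```

This is the property the thread is after: the built-in add-ons follow the cluster/addons semantics, while external add-ons reduce to one `kubectl apply -f` against a published manifest URL.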

Member

Hmm, I think there is some overlap between this comment and the comments below.

The best thing in the short term is probably to:

  • keep calling all of these things addons, but not mandating an installation method. Each addon should document its own installation procedure. This keeps the docs leaner and easier to maintain, since the documentation for each addon lives with that addon, and some addons will be more / less complex than others.
  • We can link to each of the existing /cluster/addons/X addons in this doc.
  • As the old /cluster directory is torn down, those existing addons will either find new homes or go away.

WDPT?

Contributor

I'd love to see all addons move to being kubectl apply -f. It'll be much easier to install and manage. They were done the way they were done because we didn't have daemonsets at the time, etc.

Contributor Author

I'll have a go at wordsmithing this to get a balance between simplicity and explaining to users that we're transitioning from built-in addons to "self-hosted" ones.

Contributor Author

I have attempted to simplify the add-ons page per this (and other) discussion: https://lukemarsden.github.io/docs/admin/addons/

Member

See my comment here: https://github.com/kubernetes/kubernetes.github.io/pull/1265/files#r80270865

To summarize, I think the clarification re: addons vs the "/cluster/addons" directory is much better! I still think we could improve the rest of the page so that it comes across less like marketing.


You can learn more about Weave for Kubernetes on the project's [GitHub page](https://github.com/weaveworks/weave-kube).

You can see a complete list of available network add-ons on the [add-ons page](/docs/admin/addons/).
Member

I think this line and link should be moved up and combined with the above "Several projects provide Kubernetes pod networks".

Contributor

+1 I love all of the work y'all are doing. But please let's centralize the docs to a single list and then link off to the external docs for installation. These docs are going to sprawl and break if we don't focus them in a single location.

Contributor Author

Sure, will make this change.

Contributor Author

I've moved the "You can see a complete list of available network add-ons" line up, and then repeated it again below.


## Networking

* [Weave Net](https://github.com/weaveworks/weave-kube) is a fast, reliable pod network that carries on working even in the face of network partitions, and doesn't have any infrastructure or database dependencies. Install it on a cluster whose kubelets are configured to expect a [CNI network plugin](/docs/admin/network-plugins/):
Member

I think this document will get big and cumbersome if we add similar levels of detail to each arbitrary cluster extension that the community writes.

This doc should maintain a simple list, with links to the community-supported documentation/home for each item.

e.g

## Ingress Controllers

* [nginx](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx)
* . . .

## Networking

* [Weave Net](https://github.com/weaveworks/weave-kube)
* . . .

## Cluster Visualization

* [Kubernetes Dashboard]( . . . )
* [Weave Scope]( . . . )
* . . .

Member

Yes, I think that we can be a little less verbose here and instead document all addons listed in cluster/addons plus dashboard, ingress, etc.

Member

I'm assuming that alternative networking would be documented here as well in the future? (ex. when Canal combines weave + flannel or 1.0.0 of Flannel)

Member

@cdrage Yes, I have a PR open against this branch to do that very thing!

https://github.com/lukemarsden/lukemarsden.github.io/pull/1

Contributor Author

+1

I think we do need a one-sentence description of each add-on though, otherwise users have to click all the links to know what they're looking at. Keeping it to one sentence and linking to more verbose docs at the appropriate URL totally makes sense though.

Contributor Author

I also think eventually a table view might be good for this. It'll force us to keep the one-sentence descriptions short. I would like to keep the kubectl apply -f copyable commands though, to demonstrate to users how easy it is to install these add-ons.

Member

I really don't think the kubectl apply -f commands should be documented here. This needs to be just a collection of links, with any documentation living elsewhere.

A short description (less than one line) is fine, but that should be it. Installation instructions should live with the addon documentation.

Contributor Author

OK, I've removed the kubectl apply -f instructions from the addons page.


### Joining your nodes

The nodes are where your containers (your workload) will run.
Member

s/containers/Pods ?

s/workload/workloads

Contributor Author

fixed, thanks!


### Installing a pod network

You must install a pod network add-on so that your pods can communicate with eachother on different hosts.
Member

s/eachother/each other

Contributor Author

fixed, thanks!


1. The cluster we create here won't have cloud-provider integrations, so for example won't work with (for example) [Load Balancers](/docs/user-guide/load-balancer/) (LBs) or [Persistent Volumes](/docs/user-guide/persistent-volumes/walkthrough/) (PVs).

Instead we will use the [NodePort feature of services](/docs/user-guide/services/) to demonstrate exposing the sample application on the internet.
Member

This line is in the future tense, but the user has already used the NodePort in the past :)

Contributor Author

Will fix

Contributor Author

fixed

Instead we will use the [NodePort feature of services](/docs/user-guide/services/) to demonstrate exposing the sample application on the internet.
To easily obtain a cluster which works with LBs and PVs Kubernetes, try [the "hello world" GKE tutorial](/docs/hellonode) or [one of the other cloud-specific installation tutorials](/docs/getting-started-guides/).
1. The cluster we create here will have a single master, with a single `etcd` database running on it.
Adding HA support (multiple `etcd` servers, multiple API servers, etc) is still a work-in-progress.
Member

Adding HA support to the kubeadm tool is still a work-in-progress (Kubernetes supports it in the general case)

Contributor Author

fixed, thanks!


Instead we will use the [NodePort feature of services](/docs/user-guide/services/) to demonstrate exposing the sample application on the internet.
To easily obtain a cluster which works with LBs and PVs Kubernetes, try [the "hello world" GKE tutorial](/docs/hellonode) or [one of the other cloud-specific installation tutorials](/docs/getting-started-guides/).
1. The cluster we create here will have a single master, with a single `etcd` database running on it.
Member

s/will have/has

Looks like this section was intended to be at the top of the doc.

Contributor Author

yep

Contributor Author

fixed

@caseydavenport
Member

This is a WIP and the instructions do not yet work.

D'oh, I should read things better.

@@ -1,2 +0,0 @@
kubernetes.io
Member

I don't think you meant to change this

Contributor Author

Will revert before I put this up for final review




* [Weave Net Policy](https://github.com/weaveworks/weave-npc/tree/initial-implementation) extends Weave Net to support the [Kubernetes policy API](/docs/user-guide/networkpolicies/) so that you can securely isolate different pods from each other based on namespaces and labels:

$ kubectl apply -f https://raw.githubusercontent.com/weaveworks/weave-npc/40f5461f2f840eb8a223710e227f687cbaa55d0f/k8s/daemonset.yaml
Member

This isn't great. If you strongly want this, tag commit weaveworks-experiments/weave-npc@40f5461 with a version like v0.1.0 or v1.0.1

Contributor

I'd love to see instructions like this move to a weave owned page. It'll be easier to keep up to date as weave changes and will scale better as we document more types of add-ons.

If we have community supported add-ons (that are part of the k8s project) perhaps we should create new pages for those so that they keep this page clean?

Contributor Author

Yeah, this was just a temporary placeholder URL. weave-npc is going to be merged into weave-kube so I will kill this section entirely.

Contributor Author

fixed


You will:

1. Install packages on all machines
Member

Install the kubeadm package on all machines

Contributor

Mention prerequisites? You'll need to have docker installed on each machine starting out.

Member

install what exactly? (kubectl / docker / rkt / kubeadm?)

Contributor Author

This tl;dr is not meant to be followed; it's meant to give a sense of how easy the process is, to encourage people to continue. Prerequisites are mentioned below.

Contributor Author

Also, you don't need to have docker installed on each machine before starting out.

Contributor Author

I've killed the tl;dr section, it was basically a repetition of the objectives section and was causing confusion.

1. Run `kubeadm init` on the machine you want to become a master
1. Run `kubeadm join` on the machines you want to become nodes

You will then be able to install a pod network with `kubectl apply`.
Member

...install a CNI overlay network with kubectl apply

Member

I'd prefer not to use "CNI overlay network"

If anything, call it a "CNI network provider" or "CNI network plugin" - not all of these are overlays!

Contributor Author

I didn't want to mention CNI because users shouldn't have to understand what that is to use this. I can call it a "network provider" rather than "pod network" if there's a strong feeling about this, but I like "pod network" because users probably grok pods, and grok that they need a network...

Contributor Author

I've gone with pod network everywhere in this doc for consistency and also described why you need one: "You must install a pod network add-on so that your pods can communicate with each other when they are on different hosts."

## Prerequisites

1. One or more machines running Ubuntu 16.04 or Fedora 24
1. 2GB or more of RAM per machine
Member

Where does this come from?

Member

Prerequisites should be higher up ^^

Contributor Author

I wanted the tl;dr to be at the top. This is the start of the "full guide". Maybe I can make that clearer somehow.

@luxas 2GB came from my own experimentation. 512MB cluster nodes OOM as soon as you put a workload on them. Maybe 1GB is OK, but 2GB is a safe bet.


1. One or more machines running Ubuntu 16.04 or Fedora 24
1. 2GB or more of RAM per machine
1. A network connection between the machines with all ports open
Member

This sentence is really scary.

To me it sounds like:
We're going to install a secure k8s cluster, let's open all ports on all machines!

Member

+1 - It should be clear which ports are required.

e.g "The machines will need to be able to reach the master on TCP X, and other nodes on TCP Y,Z"

Contributor

This is tricky. By default most folks will run wide open between nodes in the cluster but they may want to lock it down. Perhaps say "full connectivity" instead of "all ports open". We can then have an advanced section that talks about how to run over wider links or in a more locked down way?

Member

The default setup will be quite locked down and secure, so I think saying full connectivity between all machines like @jbeda suggested is best

But yes, we should have more in-depth docs somewhere also...

Member

what about private networking? seems better to say "full connectivity is required" and something about preferring private networking.

Contributor Author

I'm happy to write an in-depth doc that describes more precise, detailed requirements and deployment options, and also explains the nitty gritty of what kubeadm is actually doing. But I'll do that in a second pass. For now I'll clarify that this only means there should be full connectivity between the machines, not between the machines and the internet. And I guess we can mention that private networks also work :)

Contributor Author

Does anyone have a list of the TCP ports required to be open between nodes/masters? If so, I will add them here. We can easily amend this doc after the release as well btw.

Contributor Author

I've gone with "A network connection with open ports between the machines (public or private network is fine)"

## Objectives

1. Install a secure Kubernetes cluster on your machines
1. Install a pod network on the cluster so that application components (pods) can talk to eachother
Member

each other

1. Install a secure Kubernetes cluster on your machines
1. Install a pod network on the cluster so that application components (pods) can talk to eachother
1. Install a sample microservices application (a socks shop) on the cluster

Member

Probably a non-objectives section would fit in here also

Contributor Author

I don't think that's necessary. What did you have in mind?

* If the machine is running Ubuntu 16.04, run:

# curl -sSL https://get.docker.com/ | sh
# apt-get install -y socat
Member

The real flow here will be

curl -sSL https://get.docker.com/ | sh
echo "deb https://packages.cloud.google.com/apt kubernetes-xenial-unstable main" > /etc/apt/sources.list.d/k8s.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-get update && apt-get install -y kubeadm

@mikedanese We might want to promote kubeadm from unstable on the real release

Member

Also point out that the kubeadm package will install kubelet, kubectl and kubernetes-cni as well, and other dependencies of the kubelet

Contributor Author

That's not quite right (we're using Xenial's docker now), but yes, I'll update this once we have official debs.

@luxas
Member

A good start, but there are still things left to be done.
I'd like a more in-depth section somewhere that describes what kubeadm does under the hood, like when Kelsey describes Kubernetes the Hard Way.

Also we should document the possible customization flags we have: service-cidr, cloud-provider, the external etcd options, etc.

A limitation right now is that kubectl logs doesn't work, I think (since the node names aren't resolvable).
We should add that to a caveats section along with how to work around it (modifying /etc/hosts?)

enabled=1
gpgcheck=0
EOF
# yum install kubelet kubeadm kubectl kubelet-plugin-cni
Member

Only kubeadm should be specified here; kubeadm should depend on the rest.
Also I think kubelet-plugin-cni should be named kubernetes-cni, as on the Debian side
@dgoodwin ^

Contributor

I will rename on my end.

# dpkg -i debian/bin/*.deb

If the machine is running Fedora 24, run:

Member

First, docker should be installed with curl -sSL https://get.docker.com/ | sh

The master is the machine where the "control plane" components run, including `etcd` (the cluster database) and the API server (which the `kubectl` CLI communicates with).
All of these components will run in containers started by `kubelet`.

To initialize the master, pick one of the machines you previously installed `kubelet` and `kubeadm` on, and run:
Member

/kubelet and kubeadm/kubeadm/


# kubeadm init --schedule-workload

* If you do not want to be able to schedule workloads on the master (perhaps for security reasons), run:
Member

Switch these two, kubeadm init is more important and should be described first

Contributor Author

I wanted to put the use-case for development up first, to force people to think about whether they want that or not. I think it's likely to be more common than the security-conscious --schedule-workload=false.

# kubeadm join --token f0c861.753c505740ecde4c 138.68.135.192

A few seconds later, you should notice that running `kubectl get nodes` on the master shows a cluster with as many machines as you created.
Your cluster is now bootstrapped!
Member

securely bootstrapped
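The init/join handshake quoted above hinges on the token that `kubeadm init` prints at the end of its run. As a sketch (using the sample join line from this guide; the parsing approach is my own illustration, not part of kubeadm), a script could capture the pieces for reuse on each node:

```shell
# kubeadm init ends by printing a join command; capture its parts so the
# same line can be replayed on every node. Sample line is from this guide.
INIT_LINE='kubeadm join --token f0c861.753c505740ecde4c 138.68.135.192'

TOKEN=$(printf '%s\n' "$INIT_LINE" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
MASTER=$(printf '%s\n' "$INIT_LINE" | awk '{print $NF}')

echo "token=$TOKEN master=$MASTER"
# prints: token=f0c861.753c505740ecde4c master=138.68.135.192
```

On each node you would then run the reconstructed `kubeadm join --token "$TOKEN" "$MASTER"`, which is exactly the command shown above.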


## Limitations

1. The cluster we create here won't have cloud-provider integrations, so for example won't work with (for example) [Load Balancers](/docs/user-guide/load-balancer/) (LBs) or [Persistent Volumes](/docs/user-guide/persistent-volumes/walkthrough/) (PVs).
Member

There is the cloud-provider option, we should document that if the user specifies that, it will (or should) work

Member

LBs and PVs are possible, just not out of the box. Wouldn't it be enough to say that the above guide has some limitations, especially around automatic and convenient solutions for LBs and PVs?

Contributor Author

I've never seen this work, and I don't know what you have to do to your cloud provider instances to make it possible for them to provision PVs and LBs. If someone can educate me on that, though, I'd be happy to document it :)

@luxas
Member

luxas commented Sep 20, 2016

@kubernetes/sig-cluster-lifecycle

@luxas luxas added this to the 1.4 milestone Sep 20, 2016
# # XXX Shouldn't be necessary, need to add this to the kubeadm configure step
# systemctl daemon-reload && systemctl restart kubelet

If the machine is running Fedora 24, run:
Contributor

Also should be valid for CentOS 7 and Red Hat Enterprise Linux 7.

Contributor

Hmm, except on Fedora technically it should be "dnf", not "yum", for the command invocations. I think yum is still aliased to dnf though, so they should work there. (/etc/yum.repos.d is valid in either case.)

Contributor Author

we're switching to CentOS 7 now, will update accordingly


This page lists some of the available add-ons.

## Networking
Member

I've verified that the following networking addons work with kubeadm clusters:

@lukemarsden how would you like to add these? Should I just open a PR against your branch?

Contributor Author

Cool yeah please do!

@jbeda
Contributor

Very cool! I love how it is shaping up.

@@ -10,8 +10,10 @@ toc:
path: /docs/whatisk8s/
- title: Downloading or Building Kubernetes
path: /docs/getting-started-guides/binary_release/
- title: Hello World Walkthrough
- title: Hello World Walkthrough on GKE
Contributor

Spell out "Google Container Engine"?

Contributor Author

The title gets a bit long, but maybe "Hello World on Google Container Engine" will fit

@@ -0,0 +1,37 @@
---
Contributor:

I'd love to see all addons move to being kubectl apply -f. It'll be much easier to install and manage. They were done the way they were done because we didn't have daemonsets at the time, etc.


* [Weave Net Policy](https://github.com/weaveworks/weave-npc/tree/initial-implementation) extends Weave Net to support the [Kubernetes policy API](/docs/user-guide/networkpolicies/) so that you can securely isolate different pods from each other based on namespaces and labels:

$ kubectl apply -f https://raw.githubusercontent.com/weaveworks/weave-npc/40f5461f2f840eb8a223710e227f687cbaa55d0f/k8s/daemonset.yaml
Contributor:

I'd love to see instructions like this move to a weave owned page. It'll be easier to keep up to date as weave changes and will scale better as we document more types of add-ons.

If we have community supported add-ons (that are part of the k8s project) perhaps we should create new pages for those so that they keep this page clean?


## Visualization & Control

* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services etc.
Contributor:

I think a simple link here is all we should do. Having full installation instructions for each add-on just won't scale as more vendors get involved.

Member:

I agree with @jbeda.
Either link to a sub-document (which they should be responsible for) or link to documentation they own specifically. Keeping the install instructions in the k8s repo simplifies community updates from within k8s, but it might be reasonable to let the vendor take care of this outside k8s. This would push issues and updates to the docs further toward the vendor.

Contributor Author:

That's cool I'll move the verbose instructions over to a README on our github. Thanks!


You will:

1. Install packages on all machines
Contributor:

Mention prerequisites? You'll need to have docker installed on each machine starting out.

* If the machine is running Ubuntu 16.04, run:

# curl -sSL https://get.docker.com/ | sh
# apt-get install -y socat
Contributor:

socat is a strange enough dependency that folks may be curious. If we really need it, why isn't it a package dependency?

Member:

It is. All dependencies of kubelet will be installed automatically. This definitely shouldn't be here, as I pointed out above.

You must install a pod network add-on so that your pods can communicate with each other when they are on different hosts.

Several projects provide Kubernetes pod networks.
A simple one with no infrastructure or database dependencies is Weave Net, which you can install by running, on the master:
Contributor:

Can we structure this as a list so that we can put other options here? It'd be great to have 2 supported options out the gate.

Member:

I can include this in my PR to add Calico / Canal to the addons page (mentioned in another comment).

Contributor Author:

Based on user testing of this doc so far I think it's important to have one recommended option that is marked "easy to install", and I'd selfishly like that to be Weave Net. However it's also important to emphasize that the user has choice, so I will add an explicit call-out to Calico and Canal in the text. @caseydavenport if you can give me some words to use to describe them and the distinction between them and somewhere to link to, I'll add that here. Thanks!

Member:

@lukemarsden, I think we don't want this doc saying that any one solution is "recommended" by Kubernetes. I think @jbeda's suggestion of structuring this as a list or something similar is appropriate, so long as it's not really verbose.

I've had a go at tweaking this to be agnostic, while maintaining ease of use for the guide follower: https://github.com/caseydavenport/caseydavenport.github.io/pull/1/files

What do people think of something like this?


**You should run this on the master before you try to deploy any applications to your cluster.**

Once the command has completed, a few seconds later you should see the `weave-net` pods and the `kube-dns` pod go into `Running` in the output of `kubectl get pods --all-namespaces`. **This signifies that your cluster is ready.**
Contributor:

This assumes that folks are using weave-net. We need to avoid kubeadm looking like a weave only solution.

Contributor Author:

Ack.


Here you will install the NodePort version of the Socks Shop, which doesn't depend on Load Balancer integration, since our cluster doesn't have that:

# kubectl apply -f https://raw.githubusercontent.com/lukemarsden/microservices-demo/master/deploy/kubernetes/definitions/wholeWeaveDemo-NodePort.yaml
Contributor:

Does socks shop have weave specific stuff in it? We should make sure that this doesn't look to be tied to the networking solution.

Member:

Maybe we can put it in kubernetes/kubernetes/examples for the time being?
I think the examples are on their way out from the main repo, but until the Google guys have made the kubernetes/examples repo, @lukemarsden can maintain it in the main repo

That would give a much more professional look to it.

Contributor Author:

The socks shop is not weave-specific, it's a generic microservices demo that will work on any kubernetes cluster (and lots of other things too) and lives in a vendor-independent org on github. I need to make a PR to add a NodePort version that will work with this guide and then I can replace the lukemarsden URL with one that links there instead.

* Mailing List: [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
* [GitHub Issues](https://github.com/kubernetes/kubernetes/issues): please tag `kubeadm` issues with `@kubernetes/sig-cluster-lifecycle`

## Limitations
Contributor:

We should make it clear that kubeadm is a work in progress and that these will be addressed in the fullness of time.

Contributor Author:

Absolutely. Will add text to that effect here. Thanks!

Contributor Author:

done

<master/apiclient> all control plane components are healthy after 61.346626 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 4.506807 seconds
<master/discovery> created essential addon: kube-discovery
Member:

is kube-discovery an addon or a static pod?

Member:

It's a Deployment

Contributor Author:

this is going away as soon as kube-discovery gets merged into the apiserver, so I don't think we need to think too hard about this line of output right now


1. Install packages on all machines
1. Run `kubeadm init` on the machine you want to become a master
1. Run `kubeadm join` on the machines you want to become nodes
Member:

Should we assume everyone doing this non-expert guide to know the difference between master and nodes? Might be clearer to name them worker nodes. (The master is some kind of node too)

Member:

We've already debated about this a lot.
master/node is the current convention

Contributor Author:

I will explain what masters and nodes are in a short sentence here, as I have tried to do with other new concepts introduced in this doc.


Instead we will use the [NodePort feature of services](/docs/user-guide/services/) to demonstrate exposing the sample application on the internet.
To easily obtain a Kubernetes cluster which works with LBs and PVs, try [the "hello world" GKE tutorial](/docs/hellonode) or [one of the other cloud-specific installation tutorials](/docs/getting-started-guides/).
1. The cluster we create here will have a single master, with a single `etcd` database running on it.
Member:

Maybe explain or link to the limitations. When does data loss occur, when is HA needed.

Contributor Author:

I'll add a couple words around that here, sure.

* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services etc.
Register for a [Weave Cloud account](https://cloud.weave.works/) to get a service token and then replace `<token>` with your token below:

$ kubectl apply -f https://cloud.weave.works/launch/k8s/weavescope.yaml?service-token=<token>
Contributor:

https://cloud.weave.works/launch/k8s/weavescope.yaml?service-token=<token> should be in quotation marks.

Contributor Author:

good spot!
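The quoting issue flagged here is general shell behavior: the `<` and `>` around `<token>` would be parsed as redirections, and the `?` in the query string is a glob character, so the URL must be double-quoted before it reaches `kubectl`. A minimal illustration using `echo` rather than `kubectl` (the token value `abc123` is invented):

```shell
# Double quotes stop the shell from treating '?' as a glob and
# '<'/'>' as redirections, so the full URL is passed through intact.
url="https://cloud.weave.works/launch/k8s/weavescope.yaml?service-token=abc123"
echo "$url"
```

The same quoting applies to any `kubectl apply -f <url>` invocation whose URL carries a query string.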


## Overview

This quickstart will show you how to easily install a secure Kubernetes cluster on any machines running Linux, using a tool called `kubeadm` which is part of Kubernetes.
Contributor:

nit: "running Linux" is not "running Ubuntu 16.04 or Fedora 24"

Member:

perhaps go into detail (any requirements for kubeadm? e.g. any machine with kernel version blah blah running docker?)

Contributor Author:

I will wordsmith this. I want to keep it simple at the top and add complexity gradually to make for a consumable doc, without oversimplifying.


## Networking

* [Weave Net](https://github.com/weaveworks/weave-kube) is a fast, reliable pod network that carries on working even in the face of network partitions, and doesn't have any infrastructure or database dependencies. Install it on a cluster whose kubelets are configured to expect a [CNI network plugin](/docs/admin/network-plugins/):
Member:

I'm assuming that alternative networking would be documented here as well in the future? (ex. when Canal combines weave + flannel or 1.0.0 of Flannel)


## Overview

This quickstart will show you how to easily install a secure Kubernetes cluster on any machines running Linux, using a tool called `kubeadm` which is part of Kubernetes.
Member:

perhaps go into detail (any requirements for kubeadm? e.g. any machine with kernel version blah blah running docker?)


You will:

1. Install packages on all machines
Member:

install what exactly? (kubectl / docker / rkt / kubeadm?)


1. Install packages on all machines
1. Run `kubeadm init` on the machine you want to become a master
1. Run `kubeadm join` on the machines you want to become nodes
Member:

s,machine,machine(s),g

## Prerequisites

1. One or more machines running Ubuntu 16.04 or Fedora 24
1. 2GB or more of RAM per machine
Member:

Prerequisites should be higher up ^^


1. One or more machines running Ubuntu 16.04 or Fedora 24
1. 2GB or more of RAM per machine
1. A network connection between the machines with all ports open
Member:

what about private networking? seems better to say "full connectivity is required" and something about preferring private networking.

* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
* If the machine is running Ubuntu 16.04, run:

# curl -sSL https://get.docker.com/ | sh
Member:

if we're running curl -sSL https://get.docker.com/ | sh to get docker, we may as well get the kubeadm binary from the github release pages...

You must install a pod network add-on so that your pods can communicate with each other when they are on different hosts.

Several projects provide Kubernetes pod networks.
A simple one with no infrastructure or database dependencies is Weave Net, which you can install by running, on the master:
Member:

run by applying to the master perhaps?

Contributor Author:

I think "installing" and "running" are more commonly understood verbs than "applying" for the target audience of this doc.

# curl -s -L \
"https://www.dropbox.com/s/shhs46bzhex7dxo/debs-9b4337.txz?dl=1" | tar xJv
"https://www.dropbox.com/s/tso6dc7b94ch2sk/debs-5ab576.txz?dl=1" | tar xJv
Member:

I don't think a dropbox link should be here...

Contributor Author:

Yes, this will be replaced by the packages that @mikedanese is going to bake as soon as kubernetes/kubernetes#33262 and kubernetes/kubernetes#32203 land. Same for RPMs, they will be hosted on packages.cloud.google.com.

Instead we will use the [NodePort feature of services](/docs/user-guide/services/) to demonstrate exposing the sample application on the internet.
To easily obtain a Kubernetes cluster which works with LBs and PVs, try [the "hello world" GKE tutorial](/docs/hellonode) or [one of the other cloud-specific installation tutorials](/docs/getting-started-guides/).
1. The cluster we create here will have a single master, with a single `etcd` database running on it.
Adding HA support (multiple `etcd` servers, multiple API servers, etc) is still a work-in-progress.
Contributor Author:

I will add that this is a WIP only for kubeadm and that it can be made to work for k8s in general already.

@googlebot

We found a Contributor License Agreement for you (the sender of this pull request) and all commit authors, but as best as we can tell these commits were authored by someone else. If that's the case, please add them to this pull request and have them confirm that they're okay with these commits being contributed to Google. If we're mistaken and you did author these commits, just reply here to confirm.

@spiffxp
Member

spiffxp commented Sep 23, 2016

@spiffxp note to self, this set of feature issue docs for 1.4 isn't done yet

@pwittrock
Member

When this is merged you must update the CHANGELOG.md with the docs link in kubernetes/kubernetes master. See this PR which removes the TODO:

https://github.com/kubernetes/kubernetes/pull/33418/files

@simonswine

Would be great if we could have some really generic instructions in the getting started (without packages for example for running on CoreOS):

This worked for a custom build of the latest HEAD of the implementation PR:

# Install kubectl, kubelet and kubeadm on every host
mkdir -p /opt/bin
curl -o /opt/bin/kubeadm https://s3-eu-west-1.amazonaws.com/jetstack.io-kubernetes-builds/release/v1.5.0-alpha.0.1375.a023085a5fa018/bin/linux/amd64/kubeadm
curl -o /opt/bin/hyperkube https://s3-eu-west-1.amazonaws.com/jetstack.io-kubernetes-builds/release/v1.5.0-alpha.0.1375.a023085a5fa018/bin/linux/amd64/hyperkube
chmod +x /opt/bin/kubeadm /opt/bin/hyperkube
ln -s hyperkube /opt/bin/kubectl
ln -s hyperkube /opt/bin/kubelet
mkdir -p /opt/cni/bin
curl -L https://github.com/containernetworking/cni/releases/download/v0.3.0/cni-v0.3.0.tgz | tar xvzf - -C /opt/cni/bin/

cat > /etc/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni"
Environment="KUBELET_DNS_ARGS=--cluster-dns=100.64.0.2 --cluster-domain=cluster.local"
Environment="KUBELET_EXTRA_ARGS=--v=4"
ExecStart=/opt/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

I also liked @errordeveloper's approach with using a docker image to bootstrap kubelet/kubeadm/cni.

@devin-donnelly
Contributor

Any word on these changes, @lukemarsden ? I'd like to get this in before tomorrow morning when we merge and launch.

@lukemarsden
Contributor Author

lukemarsden commented Sep 26, 2016

@devin-donnelly I'll have the changes made and this PR ready to go for you before you wake up today (unless you wake up really early 😄). However @mikedanese indicated to me that he is likely to be cutting the 1.4 release on Tuesday now. Is that right? We should wait until we have the official packages referred to in this doc before we announce/release this doc.

@lukemarsden lukemarsden reopened this Sep 26, 2016
@lukemarsden
Contributor Author

@caseydavenport @devin-donnelly I've addressed your review feedback in my latest two commits. Thanks!

@devin-donnelly I think you can remove "Tech Review: Open Issues" now. And hopefully "Docs Review: Open Issues" as well :)

@pwittrock We fixed the changelog in https://github.com/kubernetes/kubernetes/pull/33262/files#diff-4ac32a78649ca5bdd8e0ba38b7006a1eL231 which landed this morning.

This PR is now only blocked on @mikedanese updating packages.cloud.google.com with 1.4.0 packages that include kubeadm. I will update this PR and remove the WIP tag soon as this has happened.

@luxas (Member) left a comment:

@lukemarsden I still have some changes I'd really much like to see


## Visualization & Control

* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services etc. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself.
Member:

Mind adding dashboard here also? That's kind of helpful for new users

Contributor Author:

done, thanks


## Overview

This quickstart shows you how to easily install a secure Kubernetes cluster on machines running Ubuntu 16.04 or CentOS 7.
Member:

How about Fedora 24 and RHEL?

Contributor Author:

we haven't tested them so I'm not going to advertise that we work with them. We can do so once we have tested them.

## Prerequisites

1. One or more machines running Ubuntu 16.04 or CentOS 7
1. 2GB or more of RAM per machine
Member:

Why is this here?
I'm running Kubernetes fine on my 1GB Raspberry Pis; this only confuses users, as 2GB is not a minimum
I've also run Kubernetes on 512 MB droplets

Please remove

Contributor Author:

I've updated this to 1GB and mentioned that any less than that would leave very little room for a user's apps.


1. One or more machines running Ubuntu 16.04 or CentOS 7
1. 2GB or more of RAM per machine
1. A network connection with open ports between the machines (public or private network is fine)
@luxas (Member), Sep 26, 2016:

Why not "Full connectivity between all machines in the cluster"? Maybe also add "(they should be able to ping each other)"

Contributor Author:

Added the former.

* `kubeadm`: the command to bootstrap the cluster.

For each host in turn:

Member:

Mention that apt-transport-https is required if it doesn't exist

Contributor Author:

added

enabled=1
gpgcheck=0
EOF
# yum install docker kubelet kubeadm kubectl kubernetes-cni
Member:

@dgoodwin When you get the deps right, it should be just kubeadm
Deps:
kubeadm => kubelet, kubectl
kubelet => kubernetes-cni
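The dependency chain described above could be expressed in the package metadata roughly like this (an illustrative sketch only, not the actual spec files):

```
# kubeadm.spec (sketch)
Requires: kubelet
Requires: kubectl

# kubelet.spec (sketch)
Requires: kubernetes-cni
```

With dependencies like these in place, `yum install kubeadm` alone would pull in the rest.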

Contributor Author:

I'll wait for @dgoodwin to ask me to change the instructions in the docs

## What's next

* Learn more about [Kubernetes concepts and kubectl in Kubernetes 101](/docs/user-guide/walkthrough/).
* Install Kubernetes with [a cloud provider configurations](/docs/getting-started-guides/) to add Load Balancer and Persistent Volume support.
Member:

Why is this here?

Contributor Author:

because users might reasonably want to do this. soon we should document using kubeadm for cloud provider integrations and then we can take this out.


Please note: `kubeadm` is a work in progress and these limitations will be addressed in due course.

1. The cluster created here doesn't have cloud-provider integrations, so it won't work with, for example, [Load Balancers](/docs/user-guide/load-balancer/) (LBs) or [Persistent Volumes](/docs/user-guide/persistent-volumes/walkthrough/) (PVs).
Member:

It doesn't by default, but refer to the kubeadm reference doc if you want to do enable cloud provider integrations

Contributor Author:

we can add ref to that doc once it exists

Workaround: use `docker logs` on the nodes where the containers are running as a workaround.
1. There is not yet an easy way to generate a `kubeconfig` file which can be used to authenticate to the cluster remotely with `kubectl` on, for example, your workstation.

Workaround: copy the kubelet's `kubeconfig` from the master: use `scp root@<master>:/etc/kubernetes/kubelet.conf .` and then e.g. `kubectl --kubeconfig ./kubelet.conf get nodes` from your workstation.
Member:

/etc/kubernetes/admin.conf should be copied instead of kubelet.conf

Contributor Author:

fixed
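Putting the resolved workaround together, the remote-kubectl steps would look roughly like this (a sketch: `<master>` is a placeholder for the master's address, and per the review note above, `admin.conf` is the file to copy, not `kubelet.conf`):

```
scp root@<master>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
```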

## Cleanup

* To uninstall the socks shop, run `kubectl delete -f microservices-demo/deploy/kubernetes/manifests` on the master.
* To uninstall Kubernetes, simply delete the machines you created for this tutorial.
Member:

I have a script for this, which we should refer to. Saying "delete your machines" is kind of awful 😄

systemctl stop kubelet; 
docker rm -f $(docker ps -q); mount | grep "/var/lib/kubelet/*" | awk '{print $3}' | xargs umount 1>/dev/null 2>/dev/null; 
rm -rf /var/lib/kubelet /etc/kubernetes /var/lib/etcd /etc/cni; 
ip link set cbr0 down; ip link del cbr0; 
ip link set cni0 down; ip link del cni0;
systemctl start kubelet; 

Contributor Author:

I'll put this inside <details> tag

@luxas
Member

luxas commented Sep 26, 2016

@lukemarsden Also, please squash the commits

@lukemarsden
Contributor Author

@luxas I've addressed your review feedback and squashed. Thanks!

@luxas
Member

luxas commented Sep 26, 2016

The only issue left is the package installation guide, but I guess that will be cleaned up when @mikedanese have pushed the final packages

@lukemarsden lukemarsden reopened this Sep 26, 2016
@lukemarsden (Contributor Author) left a comment:

Marking what needs to change before we launch this.

* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
* If the machine is running Ubuntu 16.04, run:

# apt-get install -y docker.io socat apt-transport-https
Contributor Author:

These instructions need to be updated when @mikedanese's official packages land.


If the machine is running CentOS 7, run:

# cat <<EOF > /etc/yum.repos.d/k8s.repo
Contributor Author:

These instructions need to be updated when @mikedanese's official packages land.


To initialize the master, pick one of the machines you previously installed `kubelet` and `kubeadm` on, and run:

# kubeadm init --use-kubernetes-version v1.4.0-beta.11
Contributor Author:

Need to remove --use-kubernetes-version v1.4.0-beta.11 when @mikedanese's official packages land.

@devin-donnelly devin-donnelly merged commit 778c119 into kubernetes:release-1.4 Sep 26, 2016