
Ubuntu-distro: deprecate & merge ubuntu single node work to ubuntu cluster node stuff #5498

Merged
merged 1 commit into kubernetes:master from ZJU-SEL:refactor-ubuntu May 7, 2015

Conversation

resouer (Member) commented Mar 16, 2015

This PR is based on suggestions from @rjnagal & @bgrant0607; we have merged the existing Ubuntu bare-metal setups into one.

The merged doc & scripts handle both single-node and cluster setup, and also support flannel as an overlay network.

We also use a remote deployment method to make everything automated.

@googlebot googlebot added the cla: yes label Mar 16, 2015

resouer commented Mar 17, 2015

rjnagal (Member) commented Mar 17, 2015
Thanks for the update, @resouer.

The steps for setting up kube on Ubuntu are too complicated right now. Is there any way we can whittle them down to one setup script that accepts the master and minion details and automatically sets up those machines with the required components?

At the least we should try to put most of the setup in cluster/ubuntu-cluster/configure.sh.
Making and installing the other binaries doesn't need to be a separate prerequisite. WDYT?


resouer (Member) commented Mar 18, 2015

@rjnagal We'd like to refactor it, but there are two approaches, and we hope to reach an agreement first:
Option 1: Use Ansible to take care of the machines
Option 2: Keep using shell scripts to build & copy files across machines

Which one do you suggest?

Personally, I prefer Ansible, which only requires a jump machine and SSH key configuration. But a pure shell solution would be much simpler, although less powerful.


rjnagal (Member) commented Mar 18, 2015

@resouer Adding an Ansible dependency just to set up minions might be overkill. If we can set up clusters directly from cluster/ubuntu/utils.sh, we might even be able to use the generic https://get.k8s.io script to set things up.

To start simple, we can decide on the information we need from the user and read the config in from an env variable.


resouer (Member) commented Mar 19, 2015

@rjnagal Thanks for your advice.

Based on your advice, we can simplify the installation workflow like this:

0. export MASTER="user@xxxx" MINIONS="user@xxxx, user@xxxx"
1. set ubuntu-cluster as the provider
2. run cluster//kube-up.sh

And in kube-up.sh:

We don't need to start machines (they are already there).
We don't need the user to input the role of each machine, as we can infer its role by reading the local IP (any better approach?).
The only problem is how we deal with user/password and ssh/scp stuff; for now it seems we need a manual authorized_keys setup ...

Currently we only support a 1:N model; we can update this to N:M as soon as we have tested it through.

What do you think of this design? I don't know whether the owners would accept bare-metal as a provider, but we'd like to make that happen if it makes sense.
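The workflow proposed above can be sketched roughly as follows (the example hosts and the role-guessing helper are illustrative assumptions, not the final script):

```shell
# Illustrative values only -- real use would export the actual machines.
export MASTER="vcap@10.10.103.250"
export MINIONS="vcap@10.10.103.162,vcap@10.10.103.223"

# Extract the bare IPs from the "user@ip" entries.
MASTER_IP="${MASTER#*@}"
MINION_IPS="$(echo "$MINIONS" | tr ',' '\n' | sed 's/.*@//' | tr '\n' ' ')"

# Guess a machine's role from its local IP, as proposed above.
infer_role() {
  local ip="$1"
  if [ "$ip" = "$MASTER_IP" ]; then
    echo "master"
  elif echo " $MINION_IPS " | grep -q " $ip "; then
    echo "minion"
  else
    echo "unknown"
  fi
}
```

Each machine could then run `infer_role` against its own address instead of asking the user which role it plays.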


resouer (Member) commented Mar 23, 2015

@rjnagal Could you please do a design review? 😄


rjnagal (Member) commented Mar 23, 2015

I think this is a much better approach than we have now. Go for it! :)



WIZARD-CXY (Contributor) commented Apr 16, 2015

ping @rjnagal @resouer Our team has almost finished the refactoring work, but there is one problem left to solve. Because we need to put the Kubernetes config files in the /etc/ directory and use Ubuntu upstart to manage the Kubernetes jobs, these operations need sudo privileges. We don't want to abuse sudo in our deployment script, so we have decided to just ask the user to input the sudo password when necessary, although this scales poorly when the number of machines is large.
The other option is to let the user dispatch a root SSH public key to all machines that will run k8s, using ssh-copy-id, and allow root login first. Then everything is easy.
What do you think? Any other good ideas?
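A minimal sketch of the second option (the helper name and host list are illustrative; `ssh-copy-id` prompts once per machine, after which key auth takes over):

```shell
# Push a public key to every machine that will run k8s, so later
# ssh/scp steps are non-interactive. The copy command is a parameter
# so the sketch can be dry-run with `echo` instead of ssh-copy-id.
distribute_keys() {
  local copy_cmd="$1"; shift
  local node
  for node in "$@"; do
    "$copy_cmd" "$node"
  done
}

# Real use (illustrative hosts):
#   distribute_keys ssh-copy-id vcap@10.10.103.250 vcap@10.10.103.162
```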


erictune (Member) commented Apr 16, 2015

This is how the local setup works. Seems fine for now. Suggestions:

  • use `sudo -b command` rather than `sudo command &`, so that the password requests are serialized if there are multiple sudos
  • use the `-p` option with a helpful description of why the password is being requested.
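The two suggestions above could look roughly like this in the deploy script (the helper name, prompt text, and file paths are illustrative assumptions):

```shell
# Run a command via sudo with an explanatory -p prompt. A DRY_RUN switch
# makes the sketch safe to try without root: it only prints the command.
sudo_explained() {
  local why="$1"; shift
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "sudo -p '$why: ' $*"
  else
    sudo -p "$why: " "$@"
  fi
}

# Install an upstart job; the password request explains itself:
DRY_RUN=1 sudo_explained "Password needed to install kube-apiserver.conf into /etc/init/" \
  cp kube-apiserver.conf /etc/init/

# For a background step, `sudo -b cmd` (not `sudo cmd &`) collects the
# password before detaching, so multiple sudo prompts stay serialized.
```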



googlebot commented Apr 22, 2015

We found a Contributor License Agreement for you (the sender of this pull request) and all commit authors, but as best as we can tell these commits were authored by someone else. If that's the case, please add them to this pull request and have them confirm that they're okay with these commits being contributed to Google. If we're mistaken and you did author these commits, just reply here to confirm.


@googlebot googlebot added cla: no and removed cla: yes labels Apr 22, 2015

resouer (Member) commented Apr 22, 2015

@googlebot CLA confirmed. Fixing CI.


WIZARD-CXY (Contributor) commented Apr 22, 2015

@googlebot CLA confirmed


WIZARD-CXY (Contributor) commented Apr 22, 2015

Change log:
1. Merge the old single-node and multi-node k8s deployments into one.
2. Refactor the deployment code so that deployment is more automated than before. The user only needs to input some key configurations, as the new guide describes; no more logging in to multiple machines, since the deployment is done remotely.
3. Update the deployment guide; tested OK with k8s version 0.15.0.
@resouer @erictune @rjnagal Please review the changes.
Looking forward to your suggestions.


WIZARD-CXY (Contributor) commented Apr 22, 2015

The CLA check is complaining "CLAs are signed, but unable to verify author consent". I already signed it before; do I need to do something else? @rjnagal


rjnagal (Member) commented Apr 22, 2015

The CLA check is confused because the pull request is from @resouer but the commits are from @WIZARD-CXY.

I'll update it.


WIZARD-CXY (Contributor) commented Apr 23, 2015

Cluster DNS is not enabled for now, but according to issue #6667 it is recommended to have one, so I will update the scripts and doc to automatically set up cluster DNS.


WIZARD-CXY (Contributor) commented Apr 23, 2015

After some study of SkyDNS (https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns), I think it might be easy to set up a cluster DNS, but the issue below is bothering me:

Known issues

Kubernetes installs do not configure the nodes' resolv.conf files to use the cluster DNS by default, because that process is inherently distro-specific. This should probably be implemented eventually.

It means I must manually set the nameserver in every pod's /etc/resolv.conf on Ubuntu, or maybe change my base image? That is a lot of work indeed. So I have decided to fix the portal_net IP range problem first, and when cluster DNS is fully supported I will enable it in this Ubuntu cluster setup. @resouer @rjnagal @thockin, what do you think?


thockin (Member) commented Apr 23, 2015

The kubelet will already set resolv.conf for Docker containers via the corresponding docker params (--dns and --dns-search).



WIZARD-CXY (Contributor) commented Apr 23, 2015

got it


WIZARD-CXY (Contributor) commented Apr 29, 2015

@resouer @rjnagal @erictune Just updated the doc and scripts to support SkyDNS running on the Ubuntu k8s cluster, so this PR is finished. Tested several times, all OK. Please review this work, and thanks for your patience.


WIZARD-CXY (Contributor) commented Apr 30, 2015

@jainvipin Could you please review? This approach merges your previous single-node and multi-node setups into one.


jainvipin (Contributor) commented Apr 30, 2015

@WIZARD-CXY - the approach looks refined and better; please go ahead and merge the changes to make them uniform.


WIZARD-CXY (Contributor) commented Apr 30, 2015

@jainvipin thanks for the quick response!


rjnagal (Member) commented Apr 30, 2015

I'll try it out today.

Can you rebase in the meantime?


WIZARD-CXY (Contributor) commented May 1, 2015

Sorry for the latency; I'm in China and just woke up. You can use this new guide https://github.com/ZJU-SEL/kubernetes/blob/refactor-ubuntu/docs/getting-started-guides/ubuntu.md to help you set up.


Show outdated Hide outdated cluster/ubuntu/build.sh
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Download the etcd, flannel, and K8s binaries automatically
# Download the etcd, flannel, and K8s binaries automatically and stored in binaries directory

rjnagal (Member) May 1, 2015

You don't need to build flannel. Just grab the flannel release binary as with all other tools:
https://github.com/coreos/flannel/releases/download/v0.4.0/flannel-0.4.0-linux-amd64.tar.gz
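For reference, fetching the release tarball pointed to above might look like this in build.sh (the helper name is an assumption; the URL matches the v0.4.0 link):

```shell
# Build the flannel release tarball URL for a given version, so build.sh
# can download a prebuilt binary instead of compiling flannel from source.
flannel_url() {
  local v="$1"
  echo "https://github.com/coreos/flannel/releases/download/v${v}/flannel-${v}-linux-amd64.tar.gz"
}

# Typical use: download and unpack instead of building from source.
#   curl -L "$(flannel_url 0.4.0)" | tar xz
```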


WIZARD-CXY (Contributor) commented May 2, 2015

@rjnagal I took your advice and changed the code accordingly. Tested in my lab with success. Please review.


WIZARD-CXY (Contributor) commented May 5, 2015

sorry @rjnagal, I can't get both Shippable and Travis CI right. When I click Details on the Shippable check, it just says "Waiting for build information...", nothing useful to indicate where the code went wrong.


rjnagal (Member) commented May 5, 2015

@WIZARD-CXY you can ignore the Shippable failure as long as Travis is passing; it's a bit flaky.

I tried out the new script and the cluster came up in a snap. Thanks for spending the time redoing the logic. It's much simpler to use now.

Can you squash your 4 commits into one?


Show outdated Hide outdated cluster/kubectl.sh
@@ -107,6 +107,17 @@ if [[ "$KUBERNETES_PROVIDER" == "gke" ]]; then
"--kubeconfig=${HOME}/.config/gcloud/kubernetes/kubeconfig"
"--context=gke_${PROJECT}_${ZONE}_${CLUSTER_NAME}"
)
elif [[ "$KUBERNETES_PROVIDER" == "libvirt-coreos" ]]; then

rjnagal (Member) May 5, 2015

why add libvirt-coreos for ubuntu?


Show outdated Hide outdated cluster/ubuntu/deployAddons.sh
# See the License for the specific language governing permissions and
# limitations under the License.
# deploy the addons service after the cluster is available

rjnagal (Member) May 5, 2015

s/addons service/add-on services/


Show outdated Hide outdated cluster/validate-cluster.sh
@@ -81,4 +81,4 @@ done
echo "Validate output:"
echo "${kubectl_output}"
echo -e "${color_green}Cluster validation succeeded${color_norm}"
echo -e "${color_green}Cluster validation succeeded${color_norm}"

rjnagal (Member) May 5, 2015

add a new line at the end.


Show outdated Hide outdated docs/getting-started-guides/ubuntu.md
@@ -0,0 +1,177 @@
# Kubernetes deployed on ubuntu nodes
This document describes how to deploy kubernetes on ubuntu nodes, including 1 master node and 3 minion nodes, and people uses this approach can scale to **any number of minion nodes** by changing some settings with ease. Although there exists saltstack based ubuntu k8s installation , it may be tedious and hard for a guy that knows little about saltstack but want to build a really distributed k8s cluster. This new approach of kubernets deployment is much more easy and automatical than the previous one.

rjnagal (Member) May 5, 2015

typo: kubernets


Show outdated Hide outdated docs/getting-started-guides/ubuntu.md
### **Prerequisites:**
*1 The minion nodes have installed docker version 1.2+ and bridge-utils to manipulate linux bridge*
*2 All machines can communicate with each orther, no need to connect Internet (should use private docker registry in this case)*

rjnagal (Member) May 5, 2015

s/orther/other


Show outdated Hide outdated docs/getting-started-guides/ubuntu.md
### **Main Steps**
#### I. Make *kubernetes* , *etcd* and *flanneld* binaries
On your laptop, copy `cluster/ubuntu` directory to your workspace.

rjnagal (Member) May 5, 2015

Maybe add an initial step on getting kube from get.k8s.io


WIZARD-CXY (Contributor) May 6, 2015

We can use get.k8s.io (in fact cluster/get-kube.sh) to get the latest k8s, but we still need the etcd and flannel binaries to build the whole cluster.

get.k8s.io downloads k8s from https://storage.googleapis.com/kubernetes-release/release/${release}/kubernetes.tar.gz, while we download k8s directly from GitHub. They are the same, I assume. Besides, our way can customize the k8s version, while get.k8s.io cannot.

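The version-pinned download described above can be sketched as a small helper (the helper name is illustrative; the URL pattern is the one quoted in the comment):

```shell
# Build the official release tarball URL for a chosen k8s version,
# e.g. v0.15.0, mirroring what get.k8s.io fetches.
kubernetes_release_url() {
  local release="$1"
  echo "https://storage.googleapis.com/kubernetes-release/release/${release}/kubernetes.tar.gz"
}

# Typical use:
#   curl -L "$(kubernetes_release_url v0.15.0)" -o kubernetes.tar.gz
```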

rjnagal (Member) May 6, 2015

Downloading from github is fine. I only meant that the steps should mention getting kubernetes repo explicitly before 'copy cluster/ubuntu'.

All other changes LGTM


WIZARD-CXY (Contributor) May 7, 2015

Oh, I see your point. Will do, sir.


Show outdated Hide outdated docs/getting-started-guides/ubuntu.md
> We used flannel here because we want to use overlay network, but please remember it is not the only choice, and it is also not a k8s' necessary dependence. Actually you can just build up k8s cluster natively, or use flannel, Open vSwitch or any other SDN tool you like, we just choose flannel here as a example.
#### II. Configue and start the kubernetes cluster

rjnagal (Member) May 5, 2015

s/Configue/Configure/


Show outdated Hide outdated docs/getting-started-guides/ubuntu.md
#### IV. Trouble Shooting
Generally, what of this approach did is quite simple:

rjnagal (Member) May 5, 2015

s/what of/what/


Show outdated Hide outdated docs/getting-started-guides/ubuntu.md
3. Create and start flannel network
So, whenver you have problem, do not blame Kubernetes, **check etcd configuration first**

rjnagal (Member) May 5, 2015

s/whenevr/whenever/
Maybe change to say: If you see a problem, try the following :


Show outdated Hide outdated docs/getting-started-guides/ubuntu.md
Please try:
1. Check `/var/log/upstart/etcd.log` for suspicisous etcd log

rjnagal (Member) May 5, 2015

s/suspicisous/suspicious/


WIZARD-CXY (Contributor) commented May 6, 2015

@rjnagal, thanks for the review; I fixed the bugs according to your comments and finished the squash work.

I think get.k8s.io is easier for users, but it lacks fetching the etcd and flannel binaries and version customization; using our build.sh is just fine. If you really want to use get.k8s.io, I can give it a shot, though it may need some updates.


WIZARD-CXY (Contributor) commented May 7, 2015

@rjnagal, OK now; looking forward to the merge.


Merge the old single-node and multi-node ubuntu deployment into one better approach and update the guidance
rjnagal (Member) commented May 7, 2015

LGTM

Thanks for your patience, @WIZARD-CXY :)


rjnagal added a commit that referenced this pull request May 7, 2015

Merge pull request #5498 from ZJU-SEL/refactor-ubuntu
Ubuntu-distro: deprecate & merge ubuntu single node work to ubuntu cluster node stuff

@rjnagal rjnagal merged commit 36bb479 into kubernetes:master May 7, 2015

3 of 4 checks passed

cla/google: CLAs are signed, but unable to verify author consent
Shippable: Shippable builds completed
continuous-integration/travis-ci/pr: The Travis CI build passed
coverage/coveralls: Coverage remained the same at 49.46%

@WIZARD-CXY WIZARD-CXY deleted the ZJU-SEL:refactor-ubuntu branch May 8, 2015
