
Ubuntu-distro: deprecate & merge ubuntu single node work to ubuntu cluster node stuff #5498

Merged
merged 1 commit into from May 7, 2015

Conversation

resouer
Contributor

@resouer resouer commented Mar 16, 2015

This PR is based on suggestions from @rjnagal & @bgrant0607; we have merged the existing Ubuntu bare-metal setup into one piece.

The merged doc & scripts handle both single-node and cluster setup, and also support flannel as the overlay network.

We also use a remote-deploy method to make everything automated.

@resouer
Contributor Author

resouer commented Mar 17, 2015

ping @rjnagal & @bgrant0607

@rjnagal
Contributor

rjnagal commented Mar 17, 2015

Thanks for the update @resouer

The steps for setting up kube on Ubuntu are too complicated right now. Is there any way we can whittle it down to one setup script that accepts master and minion details and automatically sets up those machines with the required components?

At the least, we should try to put most of the setup in cluster/ubuntu-cluster/configure.sh. Making and installing the other binaries doesn't need to be a separate prerequisite. WDYT?

@resouer
Contributor Author

resouer commented Mar 18, 2015

@rjnagal We'd like to refactor it, but there are two approaches, and we hope to reach an agreement first:
Option 1: Use Ansible to take care of the machines
Option 2: Keep using shell scripts to build & copy files across machines

Which one do you suggest?

Personally, I prefer Ansible, which only requires a jump machine and ssh-key configuration. A pure shell solution would be much simpler, although less powerful.

@rjnagal
Contributor

rjnagal commented Mar 18, 2015

@resouer Adding an Ansible dependency just to set up minions might be overkill. If we can set up clusters directly from cluster/ubuntu/utils.sh, we might even be able to use the generic https://get.k8s.io script to set things up.

To start simple, we can decide on the information we need from the user and read the config from an env variable.

@resouer
Contributor Author

resouer commented Mar 19, 2015

@rjnagal Thanks for your advice.

Based on it, we can simplify the installation workflow like this:

0. export MASTER="user@xxxx" MINIONS="user@xxxx, user@xxxx"
1. set ubuntu-cluster as the provider
2. run cluster/kube-up.sh

And in kube-up.sh:

We don't need to start machines (they are already there).
We don't need the user to input the role of each machine, as we can guess its role by reading the local IP (any better approach?).
The only problem is how we deal with user/password and ssh/scp; for now it seems we need a manual authorized_keys setup ...

Currently we only support the 1:N model; we can update this to N:M as soon as we have tested it through.

What do you think of this design? I don't know whether the owners will accept bare-metal as a provider, but we'd like to make that happen if it makes sense.
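The guess-role-by-local-IP idea above could be sketched roughly like this (the `user@ip` values and the `detect_role` helper are illustrative only, not the actual cluster/ubuntu scripts):

```shell
#!/usr/bin/env bash
# Illustrative sketch: decide whether this host is the master or a minion
# by comparing its local IPs against $MASTER / $MINIONS ("user@ip" format).
export MASTER="ubuntu@10.0.0.1"
export MINIONS="ubuntu@10.0.0.2,ubuntu@10.0.0.3"

detect_role() {                       # args: this host's local IP addresses
  local ip minion master_ip="${MASTER#*@}"
  for ip in "$@"; do
    [[ "$ip" == "$master_ip" ]] && { echo "master"; return; }
    for minion in ${MINIONS//,/ }; do
      [[ "$ip" == "${minion#*@}" ]] && { echo "minion"; return; }
    done
  done
  echo "unknown"
}

# In a real script this would be called as: detect_role $(hostname -I)
detect_role 10.0.0.2   # prints "minion"
```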

@resouer
Contributor Author

resouer commented Mar 23, 2015

@rjnagal Could you please do a design review? 😄

@rjnagal
Contributor

rjnagal commented Mar 23, 2015

I think this is a much better approach than we have now. Go for it! :)


@WIZARD-CXY
Contributor

ping @rjnagal @resouer Our team has almost finished the refactoring work, but there is one problem left to solve. Because we need to put the kubernetes config files in the /etc/ directory and use Ubuntu upstart to manage the kubernetes jobs, these operations need sudo privileges. We don't want to abuse sudo in our deployment script, so we decided to just ask the user to input the sudo password when necessary, although this scales badly when the number of machines is large.
The other option is to let the user dispatch a root ssh public key to all machines that will run k8s, using ssh-copy-id (with root login allowed first); then everything is easy.
What do you think? Any other good ideas?
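The ssh-copy-id option mentioned above would look roughly like this (the node list and key path are illustrative; the snippet prints the commands rather than executing them):

```shell
#!/usr/bin/env bash
# Illustrative sketch: push one public key to every node so the deploy
# script's later ssh/scp steps run without password prompts.
NODES="root@10.0.0.1 root@10.0.0.2 root@10.0.0.3"

copy_key_cmd() {   # print the command to run for one node
  echo "ssh-copy-id -i $HOME/.ssh/id_rsa.pub $1"
}

for node in $NODES; do
  # Printed rather than executed here; use eval "$(copy_key_cmd "$node")" to run.
  copy_key_cmd "$node"
done
```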

@erictune
Member

This is how local works. Seems fine for now. Suggestions:

  • use sudo -b command rather than sudo command & so that the password requests are serialized, if there are multiple sudos
  • use the -p option with a helpful description of why the password is being requested.
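A minimal sketch of those two flags (the prompt text, helper name, and file names are made up for illustration; the helper echoes the command instead of running it, since sudo needs a terminal):

```shell
#!/usr/bin/env bash
# Sketch: -p explains why the password is requested; -b backgrounds the
# command only after authentication, so several sudo calls don't race for
# the terminal the way `sudo cmd &` would.
privileged() {
  # echo shows the command that would run; drop the echo to execute it
  echo sudo -p "'need root to install kubernetes upstart jobs: '" -b "$@"
}

privileged cp kube-apiserver.conf /etc/init/
privileged cp kubelet.conf /etc/init/
```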


@googlebot

We found a Contributor License Agreement for you (the sender of this pull request) and all commit authors, but as best as we can tell these commits were authored by someone else. If that's the case, please add them to this pull request and have them confirm that they're okay with these commits being contributed to Google. If we're mistaken and you did author these commits, just reply here to confirm.

@resouer
Contributor Author

resouer commented Apr 22, 2015

@googlebot CLA confirmed. Fixing CI.

@WIZARD-CXY
Contributor

@googlebot CLA confirmed

@WIZARD-CXY
Contributor

Change log
1 Merge the old single-node and multi-node k8s deployments into just one.
2 Refactor the deployment code so that the deployment is more automatic than ever before. The user only needs to input a few key configurations as the new guide describes; no more logging in to multiple machines. The deployment is done remotely.
3 Update the deployment guide; tested OK with k8s version 0.15.0.
@resouer @erictune @rjnagal Please review the changes.
Looking forward to your suggestions.

@WIZARD-CXY
Contributor

The CLA check is complaining "CLAs are signed, but unable to verify author consent". I already signed before; do I need to do something else? @rjnagal

@rjnagal
Contributor

rjnagal commented Apr 22, 2015

The CLA check is confused because the pull request is from @resouer but the commits are from @WIZARD-CXY.

I'll update it.

@WIZARD-CXY
Contributor

Cluster DNS is not enabled for now, but according to issue #6667 it is recommended to have one, so I will update the scripts and doc to automatically set up cluster DNS.

@WIZARD-CXY WIZARD-CXY force-pushed the refactor-ubuntu branch 4 times, most recently from dfb4425 to beebfd0 Compare May 5, 2015 02:11
@WIZARD-CXY
Contributor

Sorry @rjnagal, I can't get both Shippable and Travis CI passing. When I click Details on Shippable, it just says "Waiting for build information...", nothing useful to indicate where the code goes wrong.

@rjnagal
Contributor

rjnagal commented May 5, 2015

@WIZARD-CXY you can ignore the Shippable failure as long as Travis is passing. It's a bit flaky.

I tried out the new script and the cluster came up in a snap. Thanks for spending the time redoing the logic. It's much simpler to use now.

Can you squash your 4 commits into one?

@@ -107,6 +107,17 @@ if [[ "$KUBERNETES_PROVIDER" == "gke" ]]; then
"--kubeconfig=${HOME}/.config/gcloud/kubernetes/kubeconfig"
"--context=gke_${PROJECT}_${ZONE}_${CLUSTER_NAME}"
)

elif [[ "$KUBERNETES_PROVIDER" == "libvirt-coreos" ]]; then
Contributor


Why add libvirt-coreos for ubuntu?

@WIZARD-CXY
Contributor

@rjnagal, thanks for the review; I fixed the bug according to your comment and also finished the squash work.

I think get.k8s.io is easier for users, but it lacks fetching the etcd and flannel binaries, as well as version customization. Using our build.sh is just fine. If you really want to use it, I can give it a shot; it may need some updates.

@WIZARD-CXY WIZARD-CXY force-pushed the refactor-ubuntu branch 3 times, most recently from fa4d845 to 657ee0b Compare May 7, 2015 11:17
@WIZARD-CXY
Contributor

@rjnagal, OK now, looking forward to the merge.

@rjnagal
Contributor

rjnagal commented May 7, 2015

LGTM

Thanks for your patience, @WIZARD-CXY :)

rjnagal added a commit that referenced this pull request May 7, 2015
Ubuntu-distro: deprecate & merge ubuntu single node work to ubuntu cluster node stuff
@rjnagal rjnagal merged commit 36bb479 into kubernetes:master May 7, 2015
@WIZARD-CXY WIZARD-CXY deleted the refactor-ubuntu branch May 8, 2015 01:09