[kubeadm] Add support for clusterName in config file. #60852

Merged
merged 4 commits into kubernetes:master from kubeadm-cluster-name Apr 12, 2018

Conversation

@karan
Member

karan commented Mar 6, 2018

What this PR does / why we need it: Adds a --cluster-name arg to kubeadm init.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
See kubernetes/kube-deploy#636
Code inspired by #52470

Special notes for your reviewer:

Release note:

Adds --cluster-name to kubeadm init for specifying the cluster name in kubeconfig.
@karan

Member

karan commented Mar 6, 2018

/retest

1 similar comment

@karan

Member

karan commented Mar 6, 2018

Tests seem to be failing with merge conflicts; my branch and this PR are up to date, so I'm not sure why.

$ co master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.

karangoel at karangoel-macbookpro in ~/.gvm/pkgsets/go1.9.2/global/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases on master (go1.9.2)
$ git sync
Already up to date.
Everything up-to-date

karangoel at karangoel-macbookpro in ~/.gvm/pkgsets/go1.9.2/global/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases on master (go1.9.2)
$ co -
Switched to branch 'kubeadm-cluster-name'

karangoel at karangoel-macbookpro in ~/.gvm/pkgsets/go1.9.2/global/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases on kubeadm-cluster-name (go1.9.2)
$ git rebase origin/master
Current branch kubeadm-cluster-name is up to date.
@karan

Member

karan commented Mar 6, 2018

/assign @krousey

@krousey

Member

krousey commented Mar 6, 2018

@karan
I assume you have a remote defined called upstream that points to github.com/kubernetes/kubernetes (check with git remote -v). Rebase like this:

$ git fetch upstream
$ git checkout kubeadm-cluster-name
$ git rebase upstream/master
@karan

Member

karan commented Mar 6, 2018

Correct, upstream is k8s/k8s.

$ co master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.

$ git fetch upstream

$ co -
Switched to branch 'kubeadm-cluster-name'

$ git rebase upstream/master
Current branch kubeadm-cluster-name is up to date.
@krousey

Member

krousey commented Mar 6, 2018

I see build errors:

# k8s.io/kubernetes/cmd/kubeadm/app/phases/certs
cmd/kubeadm/app/phases/certs/certs.go:353:63: not enough arguments in call to pkiutil.NewCertificateAuthority
	have ()
	want (string)
# k8s.io/kubernetes/cmd/kubeadm/test/certs
cmd/kubeadm/test/certs/util.go:31:55: not enough arguments in call to pkiutil.NewCertificateAuthority
	have ()

It seems you didn't update the logic that generates the etcd certs as well.

@karan

Member

karan commented Mar 6, 2018

With the latest changes, this builds locally now.

$ bazel build //cmd/kubeadm/...:all
INFO: Analysed 177 targets (1 packages loaded).
INFO: Found 177 targets...
INFO: Elapsed time: 65.023s, Critical Path: 41.15s
INFO: Build completed successfully, 22 total actions

@karan karan changed the title from Add cluster-name to kubeadm init to Add --cluster-name to kubeadm init Mar 6, 2018

@karan

Member

karan commented Mar 7, 2018

/assign luxas

@karan

Member

karan commented Mar 7, 2018

This PR is ready for review, the one failing test seems to be failing for other PRs as well. I believe #60589 captures that.

@krousey

Member

krousey commented Mar 7, 2018

I see that you mainly want to change the cluster context name in the kubeconfig file. This is also changing the names the certificates are signed with. I would leave the certificates alone, because the chosen cluster name could contain sensitive material that would become publicly discoverable.

For example, I could create a cluster named "super-secret-project", which is fine to have as a context in a private kubeconfig file. It is not fine to have that in the certificate that anyone who attempts a handshake with the API server will see.

@karan

Member

karan commented Mar 7, 2018

That's good feedback, Kris. I've updated the PR and excluded the cert changes.

@@ -184,6 +184,10 @@ func AddInitConfigFlags(flagSet *flag.FlagSet, cfg *kubeadmapiext.MasterConfigur
 		&cfg.NodeName, "node-name", cfg.NodeName,
 		`Specify the node name.`,
 	)
+	flagSet.StringVar(
+		&cfg.ClusterName, "cluster-name", cfg.ClusterName,

@dixudx

dixudx Mar 9, 2018

Member

Set ClusterName to "kubernetes" by default in L110.

cfg := &kubeadmapiext.MasterConfiguration{ClusterName: kubeadmapiext.DefaultClusterName}

@karan

karan Mar 9, 2018

Member

Good call.

@timothysc

timothysc Apr 11, 2018

Member

We have a moratorium on flags; I'd be OK with a config option with defaults.

Also, if it's part of the struct I sure hope we don't need to change so many signatures.

@dixudx

dixudx Apr 12, 2018

Member

We should stop adding new flags now. I'd prefer to set this via a configuration file.

@karan

Member

karan commented Mar 12, 2018

Gentle ping on the review.

@@ -152,6 +152,9 @@ func AddJoinConfigFlags(flagSet *flag.FlagSet, cfg *kubeadmapiext.NodeConfigurat
 	flagSet.StringVar(
 		&cfg.NodeName, "node-name", "",
 		"Specify the node name.")
+	flagSet.StringVar(
+		&cfg.NodeName, "cluster-name", "",

@dixudx

dixudx Mar 13, 2018

Member

We should have a default value here, right? Instead of the current empty string, which is misleading.

Change to flagSet.StringVar(&cfg.ClusterName, "cluster-name", cfg.ClusterName, "Specify the cluster name.").

@karan

karan Mar 20, 2018

Member

Sure. Done

@@ -152,6 +152,9 @@ func AddJoinConfigFlags(flagSet *flag.FlagSet, cfg *kubeadmapiext.NodeConfigurat
 	flagSet.StringVar(
 		&cfg.NodeName, "node-name", "",
 		"Specify the node name.")
+	flagSet.StringVar(

@dixudx

dixudx Mar 21, 2018

Member

Set ClusterName to "kubernetes" by default in L104.

cfg := &kubeadmapiext.NodeConfiguration{ClusterName: kubeadmapiext.DefaultClusterName}

@dixudx

dixudx Apr 12, 2018

Member

Ditto. No more new flags. Setting this in a configuration file is better.

@dixudx

Only 2 nits, otherwise LGTM. It's ready to go.

@@ -107,7 +107,7 @@ var (
 // NewCmdInit returns "kubeadm init" command.
 func NewCmdInit(out io.Writer) *cobra.Command {
-	cfg := &kubeadmapiext.MasterConfiguration{}
+	cfg := &kubeadmapiext.MasterConfiguration{ClusterName: "kubernetes"}

@dixudx

dixudx Mar 22, 2018

Member

use kubeadmapiext.DefaultClusterName instead?

@karan

karan Mar 22, 2018

Member

Done

@@ -101,7 +101,7 @@ var (
 // NewCmdJoin returns "kubeadm join" command.
 func NewCmdJoin(out io.Writer) *cobra.Command {
-	cfg := &kubeadmapiext.NodeConfiguration{}
+	cfg := &kubeadmapiext.NodeConfiguration{ClusterName: "kubernetes"}

@dixudx

dixudx Mar 22, 2018

Member

use kubeadmapiext.DefaultClusterName instead?

@karan

karan Mar 22, 2018

Member

Done

@luxas

What's the use case / motivation behind this?
I'm very hesitant about adding more flags to kubeadm init; in fact, I think we've declared a freeze there. I could maybe see it going into the config file, but I don't want to over-engineer that either. We have phases for a reason; can the use case here be handled via that API instead? In other words, is this use case advanced enough to justify the phases route vs the init one?

/assign @timothysc

@karan

Member

karan commented Mar 22, 2018

The use case here is to be able to generate a kubeconfig that references the cluster with a name other than "kubernetes" so we can have multiple clusters in the same config file. The motivation is to allow the cluster-api tooling to manage multiple clusters, by referencing names rather than IP addresses.
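For illustration, a kubeconfig along the lines described might look like this; the cluster names, server addresses, and user name below are all hypothetical:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: prod-cluster          # set via clusterName in the kubeadm config
  cluster:
    server: https://10.0.0.1:6443
- name: staging-cluster       # a second cluster in the same file
  cluster:
    server: https://10.0.0.2:6443
contexts:
- name: admin@prod-cluster
  context:
    cluster: prod-cluster
    user: admin
current-context: admin@prod-cluster
```

With distinct names, tooling can switch clusters by name (`kubectl config use-context`) instead of every entry being called "kubernetes".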

This PR is not urgent so we can wait until the freeze.

@xiangpengzhao

Just a couple of nits; otherwise LGTM. However, @luxas's concern sounds reasonable.

@@ -224,6 +227,8 @@ type NodeConfiguration struct {
 	Token string `json:"token"`
 	// CRISocket is used to retrieve container runtime info.
 	CRISocket string `json:"criSocket,omitempty"`
+	// ClusterName is the name for the cluster in kubeconfig.
+	ClusterName string `json:"clusterName"`

@xiangpengzhao

xiangpengzhao Apr 3, 2018

Member

It should be an optional field, right?

@karan

karan Apr 11, 2018

Member

Right, made it so.

@@ -107,7 +107,7 @@ var (
 // NewCmdInit returns "kubeadm init" command.
 func NewCmdInit(out io.Writer) *cobra.Command {
-	cfg := &kubeadmapiext.MasterConfiguration{}
+	cfg := &kubeadmapiext.MasterConfiguration{ClusterName: kubeadmapiext.DefaultClusterName}

@xiangpengzhao

xiangpengzhao Apr 3, 2018

Member

Does ClusterName: kubeadmapiext.DefaultClusterName need to be specified here? I think it will be defaulted to that value by legacyscheme.Scheme.Default.

@karan

karan Apr 11, 2018

Member

You're right. Did not know that's what legacyscheme did. Thanks

@@ -101,7 +101,7 @@ var (
 // NewCmdJoin returns "kubeadm join" command.
 func NewCmdJoin(out io.Writer) *cobra.Command {
-	cfg := &kubeadmapiext.NodeConfiguration{}
+	cfg := &kubeadmapiext.NodeConfiguration{ClusterName: kubeadmapiext.DefaultClusterName}

@xiangpengzhao

xiangpengzhao Apr 3, 2018

Member

same as above

@karan

karan Apr 11, 2018

Member

Done

@k8s-ci-robot k8s-ci-robot added size/M and removed size/L labels Apr 12, 2018

@karan karan changed the title from Add --cluster-name to kubeadm init to [kubeadm] Add support for clusterName in config file. Apr 12, 2018

@karan

Member

karan commented Apr 12, 2018

This now uses the config file instead of a flag.

@timothysc

Thank you for scoping this down.

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm label Apr 12, 2018

@k8s-ci-robot

Contributor

k8s-ci-robot commented Apr 12, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: karan, timothysc

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-merge-robot

Contributor

k8s-merge-robot commented Apr 12, 2018

Automatic merge from submit-queue (batch tested with PRs 58178, 62491, 60852). If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-merge-robot k8s-merge-robot merged commit 38da981 into kubernetes:master Apr 12, 2018

15 checks passed

Submit Queue: Queued to run github e2e tests a second time.
cla/linuxfoundation: karan authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gke: Skipped
pull-kubernetes-e2e-kops-aws: Job succeeded.
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce: Job succeeded.
pull-kubernetes-local-e2e: Skipped
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.