
the v1alpha3 usage is unclear: Unable to init kubeadm 1.12 with config file #1152

Closed
ulm0 opened this issue Oct 2, 2018 · 32 comments · Fixed by kubernetes/kubernetes#69332

ulm0 commented Oct 2, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug
/sig release
/sig cluster-lifecycle

reported in kubernetes/kubernetes#69242

What happened:

According to the changelog:

  • v1alpha3 has split apart MasterConfiguration into separate components; InitConfiguration, ClusterConfiguration, JoinConfiguration, KubeletConfiguration, and KubeProxyConfiguration
  • Different configuration types can be supplied all in the same file separated by ---.

I printed the new config format (kubeadm config print-default) to see what changes I need to make in order to spin up a new 1.12 cluster. To test, I dumped the default config from that command (kubeadm config print-default > master.yml) and only changed the criSocket from docker to containerd. The thing is, I'm unable to init the cluster due to the following output:

kubeadm init --config master.yml
invalid configuration: kinds [InitConfiguration MasterConfiguration JoinConfiguration NodeConfiguration] are mutually exclusive

What you expected to happen:

Cluster to init with the new config format

How to reproduce it (as minimally and precisely as possible):

Use the default config and try to init the cluster with it

Anything else we need to know?:

Running just kubeadm init works (or kubeadm init --cri-socket /var/run/containerd/containerd.sock in my case)

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T16:55:41Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
    • on-prem, esxi host v6.0 managed by vcenter server 6.7
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
  • Kernel (e.g. uname -a):
Linux master 4.15.18-041518-generic #201804190330 SMP Thu Apr 19 07:34:21 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: template built with packer, kubernetes installed using apt-get and containerd downloaded by wget
  • Others:
master.yml
apiEndpoint:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
apiVersion: kubeadm.k8s.io/v1alpha3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1alpha3
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
etcd:
  local:
    dataDir: /var/lib/etcd
    image: ""
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
unifiedControlPlaneImage: ""
---
apiEndpoint:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
apiVersion: kubeadm.k8s.io/v1alpha3
caCertPath: /etc/kubernetes/pki/ca.crt
clusterName: kubernetes
discoveryFile: ""
discoveryTimeout: 5m0s
discoveryToken: abcdef.0123456789abcdef
discoveryTokenAPIServers:
- kube-apiserver:6443
discoveryTokenUnsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
  name: master
tlsBootstrapToken: abcdef.0123456789abcdef
token: abcdef.0123456789abcdef
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 5
clusterCIDR: ""
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: ""
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms
---
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. sig/release Categorizes an issue or PR as relevant to SIG Release. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. labels Oct 2, 2018
@neolit123 (Member)

hi, you should not pass kind: JoinConfiguration to init.
just remove that whole section from the yaml and try again.
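
A minimal sketch of what the init-side file could look like after dropping the JoinConfiguration document (values taken from the master.yml above):

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.0

then run kubeadm init --config master.yml as before.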

@neolit123 neolit123 added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Oct 2, 2018
ReSearchITEng commented Oct 2, 2018

Hello,
While trying the new config YAMLs of kubeadm 1.12, I noticed:

  1. the generated yml:
    • keeps, in apiEndpoint.advertiseAddress, the dummy "1.2.3.4" address instead of defaulting to the machine's own address (e.g. the one returned by hostname -i)
    • has "cgroupDriver: cgroupfs" - probably a dummy value, not autodetected (which in my case is systemd)
  2. There are 5 kinds of configs, but it's not very clear which is for what:
    • kind: InitConfiguration
    • kind: JoinConfiguration
    • kind: ClusterConfiguration
    • kind: KubeProxyConfiguration
    • kind: KubeletConfiguration
      The doc (https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/) does not explain which sections are required on the master and which on the node.
      (While it sounds easy enough that Init is for the master and Join for the nodes, it's not perfectly clear for the other 3.)
      From my tests, I could put on master all but Join:
    • kind: InitConfiguration
    • kind: ClusterConfiguration
    • kind: KubeProxyConfiguration
    • kind: KubeletConfiguration
      and on the node:
    • kind: JoinConfiguration
    • kind: KubeProxyConfiguration
    • kind: KubeletConfiguration

When "kind: ClusterConfiguration" was added on the node, an interesting message came up:

converting (v1alpha3.ClusterConfiguration) to (kubeadm.JoinConfiguration): NodeRegistration not present in src

Can anyone help with details?

@neolit123 (Member)

the generated yml:
keeps, in the apiEndpoint.advertiseAddress, the dummy "1.2.3.4" address instead of defaulting to the default one of the machine (e.g. the one returned by hostname -i )
has "cgroupDriver: cgroupfs" - probably dummy, not autodetected (which in my case is systemd)

your cgroup driver and network address will be detected at runtime.
we don't want to write these as dynamic defaults for now.
but we might do that eventually.

There are 5 kinds of configs, but not very clear which is for what:

I agree.

best information can be found in our reference docs and source code:
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha3
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/apis/kubeadm/types.go

for now kubeadm init is our landing page for users looking for the config:
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file

From my tests, I could put on master all but Join:
kind: InitConfiguration
kind: ClusterConfiguration
kind: KubeProxyConfiguration
kind: KubeletConfiguration
and on the node:
kind: JoinConfiguration
kind: KubeProxyConfiguration
kind: KubeletConfiguration

true.
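
A minimal sketch of a join-side (node) file along those lines, reusing the placeholder values from the default config above (node.yml is just an example file name):

apiVersion: kubeadm.k8s.io/v1alpha3
kind: JoinConfiguration
discoveryTokenAPIServers:
- kube-apiserver:6443
discoveryTokenUnsafeSkipCAVerification: true
token: abcdef.0123456789abcdef
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock

passed to kubeadm join --config node.yml.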

@neolit123 neolit123 changed the title Unable to init kubeadm 1.12 with config file the v1alpha3 usage is unclear: Unable to init kubeadm 1.12 with config file Oct 2, 2018
@neolit123 neolit123 self-assigned this Oct 2, 2018
@neolit123 neolit123 added kind/documentation Categorizes issue or PR as related to documentation. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed kind/bug Categorizes issue or PR as related to a bug. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Oct 2, 2018
@ReSearchITEng

@neolit123
Thanks for the great link.
As per the 1st link (https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha3), at init we should put both the init and cluster configurations in the same file.

  1. What about kubeProxy and Kubelet? If we add them there, will it read those params?
  2. BTW, I could not find a similar documentation link like the above for kubeproxy and kubelet
    (there is no: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeproxy/v1alpha1
    or https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubelet/v1beta1 )

neolit123 (Member) commented Oct 2, 2018

  1. yes.
  2. they are not that easy to find in godoc sadly, we need to address that too:
    https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
    https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration
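
As a sketch, those two kinds can simply be appended as extra documents in the same file passed to --config; mode: ipvs and cgroupDriver: systemd below are example overrides, not defaults:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd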

@neolit123 (Member)

/lifecycle active

@k8s-ci-robot k8s-ci-robot added the lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. label Oct 2, 2018
@ReSearchITEng

@neolit123
thanks for the links.
Checking the definition of basic params like the pod subnet CIDR: does it have to be defined in 2 places now?
Once in KubeProxyConfiguration and once in ClusterConfiguration?

https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
// clusterCIDR is the CIDR range of the pods in the cluster. It is used to
// bridge traffic coming from outside of the cluster. If not provided,
// no off-cluster bridging will be performed.
ClusterCIDR string json:"clusterCIDR"

and

https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha3
// PodSubnet is the subnet used by pods.
PodSubnet string json:"podSubnet"

If so, must the 2 sources of truth be kept manually in sync, or does one take priority (and define the other)?

@timothysc (Member)

We should split the default print (init,join) so it's clear.

FWIW I don't think we should fail on the other data, though. We should just warn and ignore.
/cc @chuckha @liztio

@timothysc (Member)

Given that this is regressive behavior, we can cherry-pick certain changes.

@neolit123 (Member)

@ReSearchITEng

If so,the 2 sources of truth must be kept manually in sync,

yes.

@fabriziopandini (Member)

@ReSearchITEng @neolit123
A little clarification here:

If you leave the field empty, kubeadm will keep the values in sync among the different components; if you specify a value in the kube-proxy config, that value will be preserved (it's up to you to keep things in sync).

relevant code here:
https://github.com/kubernetes/kubernetes/blob/59957af12529c0cb84f43b2117f115387b49202d/cmd/kubeadm/app/componentconfigs/defaults.go#L44
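
A sketch of the first case, given the v1alpha3 types above: set podSubnet only in ClusterConfiguration and leave clusterCIDR empty in KubeProxyConfiguration, and kubeadm fills in the latter from the former (10.244.0.0/16 is just an example subnet):

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""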

neolit123 (Member) commented Oct 2, 2018

@fabriziopandini @timothysc
this PR merged:
kubernetes/kubernetes#69332
but if we want to make it easy for the 1.12 users to find the v1alpha3 docs from the CLI it has to be cherry-picked, or at least this hunk:
https://github.com/kubernetes/kubernetes/pull/69332/files#diff-67bf720df6e731a5f9313508a568856fR104

@timothysc timothysc reopened this Oct 2, 2018
@timothysc (Member)

We need more than a work-around imo. Let's discuss tomorrow on the call.

rosti commented Oct 3, 2018

I can tackle the long term solution, once we have a clear understanding what's needed.

/assign

@timothysc (Member)

Adding notes from sig-call today:

1.13

  • add deprecation flag for print-defaults
  • add commands kubeadm config print (init/join)-defaults

1.12

  • We will need to make it tolerant of init/join configs for different workflows to unblock the community.

@neolit123 (Member)

sadly I missed the start of the meeting:

add deprecation flag for print-defaults
add commands kubeadm config print (init/join)-defaults

this is fine as long as we don't expect configuration objects that are not bound to either init or join, but rather bound to another hypothetical workflow that we don't yet have.

I must admit I liked the old print-default --join / --init idea better.

but also #ETOOMANYSUBCOMMANDS
with kubeadm config print, can init / join be flags instead?

seanhig commented Oct 4, 2018

I'm not sure this is heading in the right direction, but the issue as stated was bang on to my frustration.

It's hard to believe this passes for acceptable validation output in 2018:

[root@k8s-m1 admin]# kubeadm init --config kube-adm.yml
error converting YAML to JSON: yaml: line 4: found character that cannot start any token

Completely ambiguous, and in fact, in this case, the kube-adm.yml file did not even exist. It is just a raw underlying lib parse error on a file that doesn't exist.

No matter what YAML error was in my config file, I would get similar error output... a line number, even though multiple YAML documents are combined, so the line often pointed into a particular YAML segment that was never identified in the output.

But the most frustrating was when I got to more qualitative validations... like this one:

[root@k8s-m1 admin]# kubeadm init --config /root/kube-adm.yml
host must be a valid IP address or a valid RFC-1123 DNS subdomain

That is a lovely error, but would you mind telling me which host field you are referring to!

It was only by removing fields one by one that I found the issue... an unnecessary "http://" prefix in my template triggered the error at line 110 of:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/util/endpoint.go

return "", "", fmt.Errorf("host must be a valid IP address or a valid RFC-1123 DNS subdomain")

Would it be so hard to include the offending host in the error output? Or the relevant field from a list of hundreds?
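
For illustration, a hypothetical sketch of the kind of value that trips that check versus one that passes (the field and host name here are made up for the example):

controlPlaneEndpoint: "http://k8s-api.example.local:6443"  # rejected: the scheme prefix is not a valid host
controlPlaneEndpoint: "k8s-api.example.local:6443"         # accepted: plain host:port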

If this was my first experience with kubeadm it would have been my last. Not sure how this represents progress. Earlier configuration mechanisms seemed sane.

@fabriziopandini (Member)

@seanhig
Thanks for your feedback
We are committed to moving kubeadm config from alpha to beta, and this includes improving UX as well as adding some features.

Along the way, any user feedback is really valuable and we take it really seriously, as you can see from this thread, which has already generated 3 PRs.
I'm moving your comments above to separate issues to ensure better traceability.

ulm0 (Author) commented Oct 6, 2018

Well, after some days I've been able to test the suggestions and it worked, though as stated by other folks here the new format is a bit confusing.

ulm0 (Author) commented Oct 9, 2018

As of v1alpha3, which config block should settings like these be part of? InitConfiguration, ClusterConfiguration or any other?

controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"

@neolit123 (Member)

^ ClusterConfiguration
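
For example, a sketch with those blocks copied into a ClusterConfiguration document:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"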

@ReSearchITEng

Hi,
could not find which config to use when running: "kubeadm token create --config XXXX".
I am interested in adding the usual: k8s version, ttl, description.
As for mixing an action (--print-join-command) with --config, I have opened #1166. Please close it if not valid.

@neolit123 (Member)

@ReSearchITEng
it's all in the link already posted earlier:
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha3
which is now improved, too.

the way you do it is: look for the fields that you need, place them in XXXXXConfig structs based on the specification, and separate the structs with --- if there is more than one.

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
bootstrapTokens:
- token: "9a08jv.c0izixklcxtmnze7"
  description: "kubeadm bootstrap token"
  ttl: "24h"
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: "v1.12.0"

hope that helps.

@ReSearchITEng

@neolit123
I only need to create a token.
So I need to build a file with both InitConfiguration (which holds bootstrapTokens) and ClusterConfiguration (which holds kubernetesVersion)?

@neolit123 (Member)

should also work if you pass InitConfiguration from a file and --kubernetes-version as flag for the version.

rosti commented Oct 10, 2018

should also work if you pass InitConfiguration from a file and --kubernetes-version as flag for the version.

No, that won't work currently, but it's on the road map for 1.13.
@ReSearchITEng you need both InitConfiguration and ClusterConfiguration in a single YAML file supplied via --config to kubeadm init.

@ReSearchITEng

@rosti
in 1.13, will it be possible to use --kubernetes-version in all cases (e.g. token create)?
Currently:

kubeadm token create --print-join-command --kubernetes-version v1.12.1
Error: unknown flag: --kubernetes-version

(tracked here: #1166 (comment))

rosti commented Oct 10, 2018

@ReSearchITEng no, at this point you should have a running cluster and its version stored in its config map. Therefore, you don't need to specify the K8s version in any way (neither from command line, nor from config file).
If there is an issue with this in 1.12, please file it here so we can have a look at it.

@ReSearchITEng

@rosti
The cluster is installed with kubeadm 1.12.1, and by default it adds the cm:

ks get cm/kubeadm-config -o yaml | grep -i kubernetesVersion
    kubernetesVersion: v1.12.1

Still, it's querying the internet. I guess @neolit123 's PR will remove the need for querying the version for activities like token creation.

ulm0 (Author) commented Oct 21, 2018

@neolit123 quick question: is it possible to pass kubeletExtraArgs as well? Should they also be part of ClusterConfiguration or KubeletConfiguration? I'm trying to pass

kubeletExtraArgs:
  feature-gates: "BlockVolume=true"

within the KubeletConfiguration but it doesn't have any effect on the kubelet.

@neolit123 (Member)

we have a full example here @ulm0
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha3

search for kubeletExtraArgs.
both the init and join configurations have a sub-object called NodeRegistrationOptions.

try passing at least --v=1 to kubeadm init and look for the string feature gates: .... showing the list of enabled gates.
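
A minimal sketch reusing the feature gate from the question above; note it goes under nodeRegistration in the InitConfiguration (or JoinConfiguration), not under KubeletConfiguration:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
  kubeletExtraArgs:
    feature-gates: "BlockVolume=true"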

@timothysc timothysc added this to the v1.13 milestone Oct 26, 2018
@timothysc (Member)

Closing this issue given the update in the docs.
