kubeconfig files cannot login the dashboard #2474
Works as intended. Read our Access Control guide on the wiki to find out how it works.
@txg1550759 were you able to find out what the problem was? I also have a kubeconfig file that cannot be used to log in.
https://github.com/kubernetes/dashboard/wiki/Access-control#kubeconfig
A few thoughts for those who might end up here from search. The kubeconfig file I was trying to authenticate with did not contain a token, so the dashboard could not use it. To see which accounts had admin rights in my cluster, I ran:

```shell
kubectl edit ClusterRoleBinding tiller-cluster-rule
```

↓

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2017-07-23T16:34:40Z
  name: tiller-cluster-rule
  resourceVersion: "2328"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller-cluster-rule
  uid: d3249b5d-6fc4-1227-8920-5250000643887
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:  # ← here
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

```shell
kubectl edit ClusterRoleBinding cluster-admin
```

↓

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2017-07-23T16:08:15Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "118"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: 2224ba97-6fc4-1227-8920-5250000643887
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:  # ← here
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
```

This suggests that my cluster probably does not have a dedicated 'root' service account per se, which is why there was no obvious admin token for me to pick up. Nevertheless, I could get into the dashboard by authenticating myself as other ServiceAccounts, and this worked well. Depending on the privileges of the service account I picked, the dashboard gave me access to different resources, which is great! Here's an example of getting a token for the service account called tiller:

```shell
kubectl describe serviceaccount tiller -n kube-system
```

↓

```
Name:                tiller
Namespace:           kube-system
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   tiller-token-854dx
Tokens:              tiller-token-854dx
```

```shell
kubectl describe secret tiller-token-854dx -n kube-system
```

↓

```
...
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      ×××   ← copy this token
```

The useful thing about the tiller token is that it already exists, so nothing new had to be created. Finally, my upgrade from dashboard 1.6 to 1.8 can be considered finished! 😄 All this RBAC stuff is way too advanced for me to be honest, so it may be that I've done something wrong. I guess that a proper solution would be to create a new service account and a new role binding from scratch and then use that token in the dashboard instead of the tiller one. However, I'll probably stay with my tiller token for some time until I get the energy to switch to a proper solution. Could anyone please confirm or correct my thoughts?
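One pitfall when copying the token programmatically rather than from `kubectl describe secret` output: fetching the secret with `kubectl get secret tiller-token-854dx -o jsonpath='{.data.token}'` returns the token base64-encoded, and the dashboard login screen expects the decoded value. A minimal illustration (the encoded string below is a stand-in value, not a real token):

```shell
# Secret .data fields are base64-encoded; decode before pasting into the
# dashboard login screen. "ZmFrZS10b2tlbg==" is a stand-in value.
echo "ZmFrZS10b2tlbg==" | base64 --decode
# → fake-token
```

`kubectl describe secret`, by contrast, already prints the decoded token.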
Correct. It is the recommended way of handling it.
If it works for you, it is fine. You should be aware that everyone with that token gets the same access to your cluster.
@kachkaev I'm really glad that you actually took the time to try and find a solution on your own :) I can help you fill in the gaps.
Usually cluster provisioners configure certificate-based access to the cluster, and Dashboard cannot use certificates to authenticate users. That is why we have to rely on the other methods of authenticating a user offered by Kubernetes, such as token-based authentication or basic auth (login and password). The second one only works when the ABAC authorization mode is enabled and some additional arguments are passed to the apiserver. None of these methods is deprecated. These are just different ways to authenticate a user, and all of them can be specified in a kubeconfig file. There are even more authentication options. That is why it is highly recommended to read our documentation first, before using Dashboard: https://github.com/kubernetes/dashboard/wiki/Access-control#introduction
RBAC is generally quite a big topic in Kubernetes. I'd recommend reading the Using RBAC Authorization guide to find out how to create and configure a "user" with the required permissions. We are really trying to keep our documentation clear for users and to provide all the necessary links, so they can find out how everything works, not only in Dashboard but also in Kubernetes. In case you have more doubts or questions, you can ask me. I'll try to help if I can.
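As an illustration of what that guide covers, a dedicated dashboard "user" can also be sketched declaratively. The names below (`dashboard-admin`) are illustrative, and binding to `cluster-admin` grants full access, so a narrower ClusterRole is preferable in practice:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
```

Applying this with `kubectl apply -f` and then fetching the service account's token gives a login credential whose permissions are exactly those of the bound role.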
Thank you for the replies @maciaszczykm and @floreks! RBAC's getting slightly clearer over time, thanks to the docs that are constantly improving. I really like the fact that, if installed correctly, the dashboard no longer has admin privileges, so it is possible to give different team members varying permissions. I totally agree that everyone holding such a token has to be treated as having that level of access to the cluster.

When I ran dashboard 1.6 at https://dashboard.example.com/, I was adding basic auth to the ingress rule to protect the dashboard from strangers; anyone could become an admin of my cluster otherwise. After upgrading to 1.8 with your official yaml, it seems that running https://dashboard.example.com/ is now safe even without any basic auth in the ingress. If a hacker gets to that domain, they'll only be able to learn about the existence of the k8s cluster, but not perform any read/write operations on it. Only authenticated token bearers will be able to get the details about the existing resources and change them (as long as a token represents a ServiceAccount with enough privileges). Am I right about this?

I understand that the best option is to keep https://dashboard.example.com/ available only behind a firewall, but I'm still curious whether exposing this resource publicly is OK for simple clusters with non-critical personal projects. A friend of mine has the opposite opinion, and we need to stop our dispute once and for all 😄
A token does not necessarily have to be tied to a Service Account. It can be generated by OIDC or any external identity provider. As long as the API server accepts it, it is ok. There are more ways of configuring and getting a "correct" token to log in.
By default, Dashboard now has very few permissions, which do not impose any security risks anymore (https://github.com/kubernetes/dashboard/wiki/Access-control#v18). I do not see any security threats in exposing Dashboard publicly if everything is configured properly. There is no way to escalate privileges right now, so exposing Dashboard should be no different than exposing the API server (secured, with RBAC enabled).
Settings should be restricted, like a few other views/actions. We are currently working on this.
@floreks it should support the kubeconfig created by kubeadm; the code of this struct and method would need to be updated.
It was already explained many times why certificate-based authentication is not supported. It will not be added, for security reasons: the private key should never leave the user's computer.
@floreks Thanks... this is a much better way of authenticating, but I think the reason people have been ending up here has to do with the wiki docs on dashboard authentication. They go through the different methods of authentication, but should probably at least provide a basic "Usage" section: if you are new to Kubernetes and you just installed the dashboard, do the following steps in order to use it: 1. create a service account, 2. get its token, 3. put the token into the users section of your ~/.kube/config.
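Step 3 above might look like the following fragment of `~/.kube/config`; the user and context names here are illustrative, and the token is the one printed for the service account:

```yaml
users:
- name: dashboard-user
  user:
    token: <paste-the-service-account-token-here>
contexts:
- context:
    cluster: my-cluster
    user: dashboard-user
  name: dashboard-context
current-context: dashboard-context
```

With `current-context` pointing at the token-bearing entry, the dashboard's kubeconfig login option can pick the token up.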
Maybe it makes sense to also provide a sample dashboard proxy setup.
@cdennen Creating a sample user and getting its token is also described in the docs: https://github.com/kubernetes/dashboard/wiki/Creating-sample-user. The wiki describes a few ways of accessing Dashboard.
FYI, I have both admin certs and OIDC auth (ID and Refresh Token) configured in my kubeconfig.
The current context of the kubeconfig file has to be set to the context that uses the token.
@floreks Thanks for the pointers. We've used https://github.com/kubernetes/dashboard/wiki/Access-control#getting-token-with-kubectl. I would like to kindly point out that the workflow is not quite obvious. Repurposing a token randomly picked from a list of service accounts seems rather arbitrary. Would there be a chance to package this as a built-in feature?
Not really sure why no one is posting this in the docs... use the 'clusterrole-aggregation-controller' token to access your dashboard as 'root'.
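The exact command was not preserved in this thread; a sketch of how one might print that service account's token on older clusters (pre-1.24, where token secrets were still auto-created) could look like this:

```
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | awk '/clusterrole-aggregation-controller/ {print $1}') \
  | awk '/token:/ {print $2}'
```

As the maintainers note below, relying on an arbitrarily chosen built-in service account is discouraged; a dedicated service account with an explicit role binding is the documented approach.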
Just kind of silly not to include a root account as part of the dashboard deployment.
Bottom line: mentioning an arbitrarily chosen SA that might, but does not necessarily, exist in the cluster is never a good idea. That is why we have only given an example of how to log in using ANY existing SA.
I get that there are a lot of different ways to connect; it's just unintuitive. I think the quickstart docs should call out at least one example of how to connect with a 'root' account. Needing hours of searching to find one login method makes people want to quit.
Hi everyone, thanks for the discussion. I spent four days on this, after which I found a solution. 👍
If you want, you can contact me in person: linux.kartik@gmail.com
@abrahamrhoffman You mean being able to access Dashboard as root like described in here? https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges |
Do you consider it safe to use the Dashboard UI with certificate-based authentication when it runs on the user's machine? I used to run it like that and just assumed the certificate stayed on my machine. EDIT: No, it didn't. :)
It would be cool if, when using kubectl proxy, the API server would pass on the information about the logged-in user, and the Kubernetes dashboard could use this :-) All the best,
Yes, it would be great, and we are still waiting for such an API. Unfortunately I don't have any information from the core team about when this will be added.
I'm not sure I understand the use case. Dashboard does not support certificate-based authentication in any way. The only supported ways are token-based auth or basic auth (username & password).
@floreks - that's great documentation! A lot of individuals' workflows look like this:
Could you please include that information and a link on this page? https://github.com/kubernetes/dashboard/wiki/Installation
Yes, I am able to access it, but the issue is that I need to provide the token value every time the token expires.
It has resolved the issue temporarily; I'll work on a permanent solution.
On Tue, 20 Feb 2018, Sebastian Florek wrote:
> Accessing Dashboard is a different topic and is available in the right-side menu, under the Accessing Dashboard section. A direct link is also available right next to the Installation link under the Documentation section in our README.md.
I am trying to access the dashboard using a kubeconfig file.
Please correct me if I am wrong:
I figured out what a sample kubeconfig file looks like. The choice between token and username/password depends on how the apiserver is configured:
https://github.com/kubernetes/dashboard/wiki/Access-control#basic
https://github.com/kubernetes/dashboard/wiki/Access-control#kubeconfig
The structure of the kubeconfig file can be seen in the official documentation: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
I think it is not recommended to configure multiple auth options for a single user...
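For completeness, a basic-auth user entry in a kubeconfig might look like the sketch below. This is an assumption for illustration only: it works solely when the apiserver was started with basic-auth support (a `--basic-auth-file`), and, as noted above, mixing several auth options under one user is not recommended:

```yaml
users:
- name: admin-basic
  user:
    username: admin
    password: <password-from-the-apiserver-basic-auth-file>
```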
From reading the above-mentioned documentation,
I am the one who has the power over the cluster now )) edit: no, not really; it turned out I needed more than that.
Can we print a clearer error message about certificate auth not being supported?
Additionally, it might be useful to state upfront that certificate auth is not supported, before the user selects a kubeconfig.
Yeah this all makes sense, could we do this?
It seems to be a small change. Can you create a PR for it? I will merge it.
I don't see the correct answer on the internet or in the official documentation, so here is my research; maybe it can help you. First, you need to have an account (ideally you could use a built-in service account). Then, get the token that will be used in the kubeconfig with kubectl. Finally, put the token into the kubeconfig structure.
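The exact snippets from this comment were not preserved here; assuming a service-account token, the kubeconfig structure could be sketched like this (cluster, user, and context names are illustrative):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded-ca-cert>
    server: https://<apiserver-host>:6443
  name: my-cluster
users:
- name: dashboard-sa
  user:
    token: <service-account-token>
contexts:
- context:
    cluster: my-cluster
    user: dashboard-sa
  name: dashboard
current-context: dashboard
```

The key point is that the user entry carries a `token` field rather than client certificates, since the dashboard only supports token and basic auth.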
Those steps work for me.
@VampireDaniel This is probably the answer everyone is looking for. The documentation doesn't explain how to add the token to the kubeconfig.
@VampireDaniel That command prints tons of tokens, but which is the correct one? |
I used https://github.com/kubernetes/dashboard/wiki/Creating-sample-user, but edited the manifests a little.
This gave me a token, and I used it. It looked like it was stuck on the login; after 20 seconds I pressed "skip" and then I had the permissions. To serve the dashboard I used a setup that is really not recommended, because everyone can access it.
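The commenter's exact command was lost in this thread. Purely as an assumption, and as a warning rather than a recommendation, one common way such an everyone-can-access-it setup is produced is an unrestricted kubectl proxy:

```
kubectl proxy --address='0.0.0.0' --accept-hosts='.*'
```

This binds the proxy to all interfaces and accepts any host header, so anyone who can reach the machine can reach the cluster API through it.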
So, wait, how exactly are you running it? Thanks for providing this workaround btw... just clarifying!
What is the namespace of your dashboard? Mine is kube-system. I did as follows, but with no luck.
The cluster is deployed by kubespray, and when using the token to log in, I get an error.
@abrahamrhoffman's solution no longer worked for me in k8s 1.19.2, probably because the set of default service accounts and their token secrets changed between versions. What helped was the creation of a special service account with admin privileges and then printing its token:

```shell
kubectl --namespace=kube-system create serviceaccount cluster-admin-dashboard-sa
kubectl --namespace=kube-system create clusterrolebinding cluster-admin-dashboard-sa \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:cluster-admin-dashboard-sa
kubectl --namespace=kube-system describe secrets \
  $(kubectl --namespace=kube-system get secrets | awk '/cluster-admin-dashboard-sa-token/ {print $1}') \
  | awk '/token:/ {print $2}'
```

UPD: It turns out that the Dashboard docs also include a recipe for creating the service account.
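A note for readers on newer clusters: since Kubernetes 1.24, long-lived token secrets are no longer auto-created for service accounts, so a `describe secrets` step like the one above finds nothing. A short-lived token can be requested instead:

```
kubectl --namespace=kube-system create token cluster-admin-dashboard-sa
```

The printed token expires after a while (about an hour by default), which is also safer than a long-lived secret.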
That sounds like a good idea to me; the issued tokens could have a relatively small time limit. Otherwise, I think the "right" way of doing this would be with OpenID. It would be nice if the dashboard itself would redirect to your openid provider, where you could authenticate, and redirect back; but AFAIK you currently need something like kubelogin. |
I agree, that will make things much easier. |
Environment
Steps to reproduce
The dashboard responds with: Not enough data to create auth info structure.
cat kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR3akNDQXFxZ0F3SUJBZ0lVZDVOb3JqbTRST05jVEk4eDBGMUZKQjgvdDlnd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1p6RUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0ZOb1pXNTZhR1Z1TVJFd0R3WURWUVFIRXdoVAphR1Z1ZW1obGJqRU1NQW9HQTFVRUNoTURhemh6TVE4d0RRWURWUVFMRXdaVGVYTjBaVzB4RXpBUkJnTlZCQU1UCkNtdDFZbVZ5Ym1WMFpYTXdIaGNOTVRjd056SXhNRGN6TURBd1doY05Nakl3TnpJd01EY3pNREF3V2pCbk1Rc3cKQ1FZRFZRUUdFd0pEVGpFUk1BOEdBMVVFQ0JNSVUyaGxibnBvWlc0eEVUQVBCZ05WQkFjVENGTm9aVzU2YUdWdQpNUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJsTjVjM1JsYlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1ClpYUmxjekNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFMbHVlMlZPTnp5Y3Zxak4KempmYktBSFV4TUx5M3dGUnNhY0FOeEh2R1JNZXFNM3MxcDFJek1kUkc0c2ZSMG5DT0xxOFBHS2g1UzlCQlh2aApNb0RKc0tQQWZic3QyaHpkYThNYUNKMkVYLzdoTFhicUFLMXZZR1E0bEY0NUF5YWEwcVBsc0xlVEM0Wm1lYnZ4CklkajV3MDRGdnl0cVZoUGw3TmIzcEtVWjJ3a2FGREpIVEszZUlhWkc5QTZGMkNpNTYyOTN0MFpLZDJmZWdWMjEKUEtING5xRllXREk4MU5QWFk3UmNuT29ST0NFeDBQLzh4eHRnT1VIdVVUQ29Lc2tyWUhOWjhzc04vVjM3YVY5bQo5TjQ4UHE3RjBsVFN2a1gxaGIzM0RMK0thT0VTa05UYzRJWVJkbHlBaTNHbmJZSXgwU1gvY2swa0NHWWgxc2ZOCmhpUStPcDBDQXdFQUFhTm1NR1F3RGdZRFZSMFBBUUgvQkFRREFnRUdNQklHQTFVZEV3RUIvd1FJTUFZQkFmOEMKQVFJd0hRWURWUjBPQkJZRUZMcmJid2lTQ0xHNENWa2NRL1VaOXBXZVlsMGZNQjhHQTFVZEl3UVlNQmFBRkxyYgpid2lTQ0xHNENWa2NRL1VaOXBXZVlsMGZNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUJhWmxaMVRZTE1mTGdPClZCeFhiaWE2TE9FaWQwQ3dZTVJVN2tnMmRTYVVGQjY3THNna3ZNT1NxVTlzR1ZienBwOFlscHJYVk52VXV0ajkKOW9EVExCeDA5aG40SnZXSUIwSXNxNTlQc0NxSEtaZlR3UXNXWHFJUFNkL2w5R0tJRVJxM2ZYZFl1QVpMZ1NldwpIUzdLdkc4Mm9oMG5GRGV1UVowSEpRQ09tWG1BdklwTnZLd3p3TTBtdWl6MklDVkhrVEhIZFZidUJMcTJsYnRMCjdld29sS3VSRmZTQk1oRG4wdUxpTmJYZGY1dGFNVUxvOUdjS3Fnc0hFMEVnbU0xMjJPOHEyVmxLeGZBOTRTSXIKNjFvdndoTHhubVZOYkRXdTE4dDhPYTVUQkFxSDgyczJrQ0dZRXpZNjlKS0Vaam1LZzVsUjBCcEd1U1JVZXJhaApOT3VRSG5ETgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.0.5:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
namespace: tuxiaogang
user: tuxiaogang
name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: tuxiaogang
  user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQyekNDQXNPZ0F3SUJBZ0lVRForNW9kRWlkWVRvM1FnUFlIQWZoR0xWZGpBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1p6RUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0ZOb1pXNTZhR1Z1TVJFd0R3WURWUVFIRXdoVAphR1Z1ZW1obGJqRU1NQW9HQTFVRUNoTURhemh6TVE4d0RRWURWUVFMRXdaVGVYTjBaVzB4RXpBUkJnTlZCQU1UCkNtdDFZbVZ5Ym1WMFpYTXdIaGNOTVRjeE1ERXpNVE0xTVRBd1doY05NamN4TURFeE1UTTFNVEF3V2pCbk1Rc3cKQ1FZRFZRUUdFd0pEVGpFUk1BOEdBMVVFQ0JNSVUyaGxibnBvWlc0eEVUQVBCZ05WQkFjVENGTm9aVzU2YUdWdQpNUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJsTjVjM1JsYlRFVE1CRUdBMVVFQXhNS2RIVjRhV0Z2CloyRnVaekNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFMeU9sWk5VL2RNb01ZT1MKWWZ6a1oxYUF2T2Rwd1BxRTgrempNdW9DSUVRQnJiMDVzbGF4ekYvYW9hQzcveE5rQW4ycVZVbEwyQlB2bEJ5MApBVEtwMFh4TlRpdW1sZGJZaFYxZmlMbysxY2VpajU2d3NITGNkNEZUeU56NG11SHFYUTA3NStXRDFRRWpZeFEwCktPRzJyQlg1YmtJMDJMUVIvc2U4SWZIdEdUQ3VFWTJwcndyRUl4UWk4b0FRazNRLzI5SDdpcjB5ZWxPWkxxdjIKcXhlRjc1N1hXZFNMWmN1WmNBV2RNWks0VlA5alJBeG9yVmpubFZkU2drUnBpeTA2Z2dZTUk2OHp0TkppNEw3TQpVQXp3WUZRUDhKZ1BQb3RmdEY1MzEwalNHRnhYejNoZHhLQjNWNGhJZlFxbHpkaGY1SEgydEVXTlVwQ0Y2YUwzCnBFU2V4ZGNDQXdFQUFhTi9NSDB3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUYKQndNQkJnZ3JCZ0VGQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTUIwR0ExVWREZ1FXQkJTcXdsOEo3VGlMVXVEawpjaVhwQ0d6YmIyaUJjREFmQmdOVkhTTUVHREFXZ0JTNjIyOElrZ2l4dUFsWkhFUDFHZmFWbm1KZEh6QU5CZ2txCmhraUc5dzBCQVFzRkFBT0NBUUVBU0NlUStnL0xhV1JEeGZSVzU1T0ZHK3M3SUY3RVZYSlI2RG5ObWVaN2NncjUKaWJWUzZzSElKL1ZOOXBnMlZFWWJoZ3B1ZmdKelM4Mm5ibVhUMUMwYlJ5d1ZRekhxaVNKWkhRdStoL2YvQVMyUApjTUtiY3YzS1dzL3dtekhCZmR3eFdBdTVQektEekJJUDhFNTg3U2ZJU2FZbGtKWm1iN1FYVWN5TEU5bUF3blVCCmVoL2Erd0tFa3ZBNXNhS3Y5NUNyMnRNbmM5MjJJcDB0SUh2RlBhclk4OFdWSXFhdnJWdnY5cjJ4Nm1nOHV0Y0wKZzczc0dzclZYRVFOTXIwcWsxMVd2SzV3dW0rRkdubW4zQUZuaG55ODI1SUN5VGJrV0dOdENlWEt2V0wyZUovdApSNXhuRW5wMjRZTm5iZzBybVB0YVUyeE1wa3pYVCtWcHJBOEE3bVpvV0E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdkk2VmsxVDkweWd4ZzVKaC9PUm5Wb0M4NTJuQStvVHo3T015NmdJZ1JBR3R2VG15ClZySE1YOXFob0x2L0UyUUNmYXBWU1V2WUUrK1VITFFCTXFuUmZFMU9LNmFWMXRpRlhWK0l1ajdWeDZLUG5yQ3cKY3R4M2dWUEkzUGlhNGVwZERUdm41WVBWQVNOakZEUW80YmFzRmZsdVFqVFl0QkgreDd3aDhlMFpNSzRSamFtdgpDc1FqRkNMeWdCQ1RkRC9iMGZ1S3ZUSjZVNWt1cS9hckY0WHZudGRaMUl0bHk1bHdCWjB4a3JoVS8yTkVER2l0CldPZVZWMUtDUkdtTExUcUNCZ3dqcnpPMDBtTGd2c3hRRFBCZ1ZBL3dtQTgraTErMFhuZlhTTklZWEZmUGVGM0UKb0hkWGlFaDlDcVhOMkYva2NmYTBSWTFTa0lYcG92ZWtSSjdGMXdJREFRQUJBb0lCQVFDeWx6WXl3c2hhekhJQgpWWTk3MFBYVHA4SEVTWlVmY3ZmNlFjTkNnMXIrTHJ6WlFpR1pIWFFld2R4ZWVsR0Jrek1NeFYxY08vcmYvd1pCCkhYa1kvR0ZQSTRWTHNNK3hHNGxOeENPamk4bzkrTW1oRzJjMGszNlpQcnM4R0RmU2pJRXYvTEtLMzQvTE1USXgKdTZtUkI4ejhUekRRZ205U05zMGpieHlUb09kQUE2cyt4QXBWS215LysySjB1bjFUbnM3YmZMVEFXS21HNjR1dwp1cW45a3pBRkRXL1MrNms0clJrcGUvOGhzYklQRW1lRWE4dkJZSG54ZkRqVXI2NjVBQ1ZDNHorQy9RTUF5Um5vCkdFcXJGempURW45Unk0YnNpMWFoSHRvbDRqNW5IZERHVnIvMWJteC85d0dpaVdTc3JYS3FPMnhxMm1sV25sV1MKL1Z5ays1QXhBb0dCQVBtM2dEd2VGR2xlSERDUWlwdXhvMkFEQkVUMUc0YUd6QlozKzlaVVpmdnArNUFoWXJNcAp3dklKUVc0N2VIclVvQUpaOUxUcnpmWXJ5L3lqOWREN3JxeWZXaEI4QThsbVluVVd3eUZvdDN5aG1MOFJ2K2ZQCnB5TzFnbXpWd29FSXZ1QU5VZTQrNmlWTXRac29VVjJwZzFDeHNicEJodWRYb3grNzZjcnRLRGIvQW9HQkFNRk4KSW8yS2lETWQ2d0FESkdvUXYxS2NZRjQwYXUzcGswSTZMbDBrdzVKMUdNNDdCeWk5Syt6VU1EV09tY2ZwQjVtYwo1UWVreDVZaFJyWWZrdFNHQ2V6U2ZMTEJUdXErQXExL2dLMVRqTGF6ZnMwS3dSMFYweXRmV1M3QzUwRm45akVKCmkxOUtaSWxIdEhyRDNEejhPR216eC9Rak84R0toV3kraS9TWHdRa3BBb0dCQUtSbGF2V284OVVlVUw2a0dheEEKT1FjM1ZUTTBqZ2QxYkp5S0p2QkdKZEcvaTQ2cWUvanBVRjdaT3dzZitjUWJnSytybXc4VWdrWkROUXJBd2s3dgpzbUlRa2xGeDQyaE9rQmozZ0VUWlZKcW5KQkQ5MVhIOTRkSC9aN3JReXpqNWtmZWNyVWlFZ005SGZmT0VpblIzCjZXeFJYMmo0UktDK3NEUnZHSTR3clIzdkFvR0FCWVkvMDQyL0FMNzlKVjN4bjNwbERXWmN0clNHemMvY0hvdHQKSWNwWU1JcGFNQ0t0dkxOVFd3eGhhRlp2L0sralFQZWo4QWo4ajBUYU1ZQkxnUGxudFRYNnpGMEw5VmVDMmhTSAp4K3hZWEN4YkZsOFZUOUI4M1lOM0dBZ0g5ZTJUc3FrVUs2QURxWXk4RXJvZ1JEbnRIdEE5aWJPc0ZJYng4ejZxCjMwMnEvYWtDZ1lBNWRkb1p4Y2ljb1RvSmJRMWg0aDhmSSt1UlhUdk1hM3VDNmcxNlQ2cUcvQXgxRk5iSll3SkUKS1UxcmgwUkRkMXYzdWJPRnJ4Uk1SVDNLK3F1YjZqbEdpMkcvR1Q0Wm02b3o1TUVMajFGb2g4cGplSjNqdkFkTQpnRm5kdTMxaFdlbkJOWFBEQ1NZTDFTVjl1NlFqVlJtSlRqeHEzSGdEVlVyYUhjNWpjbkFqVkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
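Worth noting for anyone hitting the same "Not enough data to create auth info structure" message: the kubeconfig above only carries `client-certificate-data`/`client-key-data`, and the dashboard cannot log users in with client certificates. The user entry needs a token (or basic-auth credentials) instead, for example (the token value being whatever the chosen service account's secret holds):

```yaml
users:
- name: tuxiaogang
  user:
    token: <service-account-token>
```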
cat /lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --advertise-address=192.168.0.5 \
  --bind-address=192.168.0.5 \
  --insecure-bind-address=192.168.0.5 \
  --kubelet-https=true \
  --runtime-config=rbac.authorization.k8s.io/v1beta1 \
  --authorization-mode=RBAC \
  --experimental-bootstrap-token-auth \
  --token-auth-file=/etc/kubernetes/ssl/token.csv \
  --service-cluster-ip-range=172.17.0.0/16 \
  --service-node-port-range=300-9000 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.0.8:2379,https://192.168.0.9:2379,https://192.168.0.10:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --event-ttl=1h \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target