Helm init compatible with Kubernetes 1.16 #6462
Conversation
This fixes #6374.
/assign @bacongobbler I think this is good to go.
- Convert Tiller Deployment from extensions/v1betax to apps/v1
- Update installation unit tests
- Add support for helm init --upgrade

Signed-off-by: Jerome Brette <jbrette@gmail.com>
Tested with versions:
- kubernetes v1.16.0
- kubernetes v1.15.4
- kubernetes v1.14.7
- kubernetes v1.13.11
- kubernetes v1.12.10

Signed-off-by: Jerome Brette <jbrette@gmail.com>
Thanks @jbrette! I just got back from a vacation today. Taking a look at the PR now 👀
@thomastaylor312 FYI, while testing the upgrade procedure I stumbled over a bug, which I fixed at the same time: "User can downgrade tiller by mistake" #6497. Fixes #6497.
@jbrette Thanks for jumping on this and pushing the PR.
I am doing some manual testing at the moment. FYI, I am just using Helm out of the box in its simple open form (GOD mode). Some feedback so far:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"archive", BuildDate:"2019-09-23T20:09:22Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
When I try to initialize with the latest code from the Helm 2 master branch, it errors as expected:
$ helm init
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Error: error installing: the server could not find the requested resource
Then I rebuilt Helm with your PR, and re-initializing works:
$ helm init --upgrade
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Warning: You appear to be using an unreleased version of Helm. Please either use the
--canary-image flag, or specify your desired tiller version with --tiller-image.
Ex:
$ helm init --tiller-image gcr.io/kubernetes-helm/tiller:v2.8.2
There is, however, an error when trying to install a scaffold chart (`helm create chrt-tst2`):
$ helm install --name chrt-tst2 chrt-tst2/
Error: release chrt-tst2 failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "default"
Tiller log output:
[....]
[tiller] 2019/09/26 14:22:32 preparing install for chrt-tst2
[storage] 2019/09/26 14:22:32 getting release history for "chrt-tst2"
[storage/driver] 2019/09/26 14:22:32 query: failed to query with labels: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
[tiller] 2019/09/26 14:22:32 rendering chrt-tst2 chart using values
2019/09/26 14:22:32 info: manifest "chrt-tst2/templates/ingress.yaml" is empty. Skipping.
[tiller] 2019/09/26 14:22:32 performing install for chrt-tst2
[tiller] 2019/09/26 14:22:32 executing 1 crd-install hooks for chrt-tst2
[tiller] 2019/09/26 14:22:32 hooks complete for crd-install chrt-tst2
[tiller] 2019/09/26 14:22:32 executing 1 pre-install hooks for chrt-tst2
[tiller] 2019/09/26 14:22:32 hooks complete for pre-install chrt-tst2
[storage] 2019/09/26 14:22:32 getting release history for "chrt-tst2"
[storage/driver] 2019/09/26 14:22:32 query: failed to query with labels: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
[storage] 2019/09/26 14:22:32 creating release "chrt-tst2.v1"
[storage/driver] 2019/09/26 14:22:32 create: failed to create: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot create resource "configmaps" in API group "" in the namespace "kube-system"
[tiller] 2019/09/26 14:22:32 warning: Failed to record release chrt-tst2: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot create resource "configmaps" in API group "" in the namespace "kube-system"
[tiller] 2019/09/26 14:22:32 warning: Release "chrt-tst2" failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "default"
[storage] 2019/09/26 14:22:32 updating release "chrt-tst2.v1"
[storage/driver] 2019/09/26 14:22:32 update: failed to update: configmaps "chrt-tst2.v1" is forbidden: User "system:serviceaccount:kube-system:default" cannot update resource "configmaps" in API group "" in the namespace "kube-system"
[tiller] 2019/09/26 14:22:32 warning: Failed to update release chrt-tst2: configmaps "chrt-tst2.v1" is forbidden: User "system:serviceaccount:kube-system:default" cannot update resource "configmaps" in API group "" in the namespace "kube-system"
[tiller] 2019/09/26 14:22:32 failed install perform step: release chrt-tst2 failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "default"
@hickeyma I don't think the issue you have is linked to this PR. When I do that kind of testing, I usually create a very permissive role, e.g. `kubectl apply -f tiller-serviceaccount.yaml` with tiller-serviceaccount.yaml being:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
@jbrette I understand, but it should work out of the box as Tiller is open by default. I will investigate further and use a role.
I have tried the PR with a K8s 1.14.1 cluster:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Init works as expected:
$ helm init
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Warning: You appear to be using an unreleased version of Helm. Please either use the
--canary-image flag, or specify your desired tiller version with --tiller-image.
Ex:
$ helm init --tiller-image gcr.io/kubernetes-helm/tiller:v2.8.2
Install of a chart works as expected:
$ helm install --name chrt-tst2 chrt-tst2/
NAME: chrt-tst2
LAST DEPLOYED: Thu Sep 26 15:31:04 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
chrt-tst2 0/1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
chrt-tst2-7597465f6f-bbns8 0/1 ContainerCreating 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
chrt-tst2 ClusterIP 10.108.222.81 <none> 80/TCP 0s
==> v1/ServiceAccount
NAME SECRETS AGE
chrt-tst2 1 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=chrt-tst2,app.kubernetes.io/instance=chrt-tst2" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
I believe the issue you are seeing is orthogonal to the one @jbrette is trying to address in this PR. That particular issue is caused because Tiller somehow requires read access to the
Regardless, I think this PR should go forward as-is. That particular issue is a separate bug :)
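For completeness, a less-permissive alternative to binding cluster-admin would be a namespace-scoped Role covering only what the Tiller log above complains about (list/create/update of configmaps in kube-system). This is only a sketch of mine, not part of the PR; the names `tiller-configmaps` and the exact verb list are illustrative assumptions.

```yaml
# Sketch only (not from this PR): namespace-scoped RBAC for Tiller's
# release storage. The permissions mirror the "cannot list/create/update
# configmaps" errors in the Tiller log; names are hypothetical.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-configmaps
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-configmaps
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-configmaps
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

Note that this only fixes release storage; Tiller would still need permissions in each target namespace to actually install charts (hence the "cannot get resource namespaces" error), which is why the permissive ClusterRoleBinding is commonly used for testing.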
LGTM, thanks for this @jbrette.
I tested this manually as follows:

- In a Kubernetes 1.16 cluster:
  - Initialization fails as shown above when using the current Helm 2 master
  - When using the PR build:
    - Initialization succeeds (using a service account)
    - Install/delete of charts works
- In a Kubernetes 1.14.1 cluster:
  - Initialize using the Helm 2 master branch (with a service account)
  - Upgrade successfully using the PR branch
  - Install/delete of charts works

I raised issue #6517 for the issue when installing a chart if Tiller is installed without a service account.
I am going to hold off on merging. I would like to get feedback from @mattfarina, @adamreese and @thomastaylor312 as well.
@hickeyma Do I need to do anything to this PR?
@jbrette No, waiting on other reviews.
Can we time-box that wait? There's a bunch of people waiting on this to use 1.16.
@joejulian This is going into 2.15, so we aren't going to release without it 🙂
When can we expect this to get merged?
I tested this as well against 1.15 and 1.16, and the fix works as intended.
Is there a release date for 2.15 yet?
Not at this time. Because it's the last release where we are accepting feature requests, we are combing through the backlog to ensure contributors have a chance to update their PRs and get them merged (or close them) before we cut the release. |
Tentatively, @bridgetkromhout, @thomastaylor312 and I talked about releasing 2.15 in two weeks' time, on Wednesday, October 16th. We will discuss with the rest of the maintainers in the dev call tomorrow whether that timeline sounds feasible.
Thanks for the info!
I fixed it by ensuring the kubectl version is the same on the client and the server; if not, download the right version from https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows and match it with the server's kubectl version when you do a
Then run
and then run
Is this available for download via a release yet?
This was released in 2.15.0. |
Thank you Matt |
Can this be cherry-picked into the 2.14.x branch?
@cofyc Sorry but changes aren't cherry-picked into previous minor branches. |
helm init currently creates a Deployment for Tiller using the deprecated extensions/v1beta1 API. This PR migrates it to apps/v1.
With this PR, helm init produces the following output:
./helm init --service-account=tiller --tiller-image=gcr.io/kubernetes-helm/tiller:v2.14.3 -o yaml > apps-v1.yaml
Without this PR, helm init produces the following output:
helm init --service-account=tiller --tiller-image=gcr.io/kubernetes-helm/tiller:v2.14.3 -o yaml > extensions-v1beta1.yaml
The deployment of Tiller also seems to work.
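To illustrate the migration, here is an abridged sketch of what the generated Tiller Deployment looks like under the new API (my own reconstruction, not the exact apps-v1.yaml output from the commands above; only the image tag and service account come from those commands). The substantive difference is that apps/v1 makes spec.selector required and immutable, whereas extensions/v1beta1 defaulted it from the pod template labels.

```yaml
# Abridged sketch, not the literal PR output: a Tiller Deployment
# migrated from extensions/v1beta1 to apps/v1. Label values here are
# illustrative; the point is the now-mandatory spec.selector.
apiVersion: apps/v1          # was: extensions/v1beta1
kind: Deployment
metadata:
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  selector:                  # required in apps/v1, defaulted in v1beta1
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      labels:
        app: helm
        name: tiller
    spec:
      serviceAccountName: tiller
      containers:
      - name: tiller
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
```

Because spec.selector is immutable in apps/v1, an upgrade path like `helm init --upgrade` has to keep the selector labels consistent with the existing Deployment rather than changing them in place.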