
Helm init compatible with Kubernetes 1.16 #6462

Merged 2 commits into helm:master on Oct 1, 2019

Conversation

@jbrette commented Sep 19, 2019

Helm init currently creates a Deployment for Tiller that uses the deprecated extensions/v1beta1 API. This PR migrates it to apps/v1.

With this PR, helm init produces the following output:

./helm init --service-account=tiller --tiller-image=gcr.io/kubernetes-helm/tiller:v2.14.3 -o yaml > apps-v1.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm
      name: tiller
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}

...

Without this PR, helm init produces the following output:

helm init --service-account=tiller --tiller-image=gcr.io/kubernetes-helm/tiller:v2.14.3 -o yaml > extensions-v1beta1.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}

...
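
The key difference between the two outputs is the explicit spec.selector, which apps/v1 makes required (extensions/v1beta1 defaulted it from the pod template labels):

spec:
  selector:
    matchLabels:
      app: helm
      name: tiller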

Deploying Tiller also seems to work:

kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
kubectl apply -f /home/xxxxxx/kube-deployment/tiller/tiller-serviceaccount.yaml
serviceaccount/tiller unchanged
clusterrolebinding.rbac.authorization.k8s.io/tiller unchanged
~/src/k8s.io/helm/bin$ ./helm init --service-account=tiller --tiller-image=gcr.io/kubernetes-helm/tiller:v2.14.3
Creating /home/xxxxxx/.helm
Creating /home/xxxxxx/.helm/repository
Creating /home/xxxxxx/.helm/repository/cache
Creating /home/xxxxxx/.helm/repository/local
Creating /home/xxxxxx/.helm/plugins
Creating /home/xxxxxx/.helm/starters
Creating /home/xxxxxx/.helm/cache/archive
Creating /home/xxxxxx/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/xxxxxx/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
kubectl get all -n kube-system

NAME                                           READY   STATUS    RESTARTS   AGE
pod/calico-etcd-qfq9k                          1/1     Running   0          85m
pod/calico-kube-controllers-6944fb5984-85ph6   1/1     Running   0          85m
pod/calico-node-8vfc5                          1/1     Running   0          85m
pod/coredns-5644d7b6d9-khp2w                   1/1     Running   0          85m
pod/coredns-5644d7b6d9-rkrqs                   1/1     Running   0          85m
pod/etcd-kubedgesdk                            1/1     Running   0          84m
pod/kube-apiserver-kubedgesdk                  1/1     Running   0          84m
pod/kube-controller-manager-kubedgesdk         1/1     Running   0          84m
pod/kube-proxy-5m66t                           1/1     Running   0          85m
pod/kube-scheduler-kubedgesdk                  1/1     Running   0          84m
pod/tiller-deploy-77855d9dcf-6rr5r             1/1     Running   0          75s

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
service/calico-etcd     ClusterIP   10.96.232.136    <none>        6666/TCP                 85m
service/kube-dns        ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   85m
service/tiller-deploy   ClusterIP   10.100.221.194   <none>        44134/TCP                75s

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
daemonset.apps/calico-etcd   1         1         1       1            1           node-role.kubernetes.io/master=   85m
daemonset.apps/calico-node   1         1         1       1            1           beta.kubernetes.io/os=linux       85m
daemonset.apps/kube-proxy    1         1         1       1            1           beta.kubernetes.io/os=linux       85m

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/calico-kube-controllers   1/1     1            1           85m
deployment.apps/coredns                   2/2     2            2           85m
deployment.apps/tiller-deploy             1/1     1            1           75s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/calico-kube-controllers-6944fb5984   1         1         1       85m
replicaset.apps/coredns-5644d7b6d9                   2         2         2       85m
replicaset.apps/tiller-deploy-77855d9dcf             1         1         1       75s

@helm-bot added the size/S label Sep 19, 2019
@jbrette (Author) commented Sep 19, 2019

This fixes #6374

@helm-bot added the size/M label and removed the size/S label Sep 19, 2019
@jbrette force-pushed the kube16 branch 3 times, most recently from f6bbeda to fe1c648 (September 19, 2019)
@jbrette (Author) commented Sep 19, 2019

/assign @bacongobbler
/cc @ian-howell @jckasper @bacongobbler

I think this is good to go.

The helm init --upgrade code is now supposed to handle the case where Tiller has been installed on an older version of Kubernetes and has not yet been converted from extensions/v1beta1 to apps/v1.
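
For example (a sketch using the same flags as above), upgrading an older install and watching the rollout:

./helm init --upgrade --service-account=tiller --tiller-image=gcr.io/kubernetes-helm/tiller:v2.14.3
kubectl -n kube-system rollout status deployment/tiller-deploy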

- Convert Tiller Deployment from extensions/v1betax to apps/v1
- Update installation unit tests
- Add support for helm init --upgrade

Signed-off-by: Jerome Brette <jbrette@gmail.com>
@jbrette force-pushed the kube16 branch 2 times, most recently from 924e4bb to a8e29fe (September 20, 2019)
@helm-bot added the size/L label and removed the size/M label Sep 20, 2019
@jbrette force-pushed the kube16 branch 2 times, most recently from 8543697 to 996d6f7 (September 20, 2019)
Tested with versions:
- kubernetes v1.16.0
- kubernetes v1.15.4
- kubernetes v1.14.7
- kubernetes v1.13.11
- kubernetes v1.12.10

Signed-off-by: Jerome Brette <jbrette@gmail.com>
@bacongobbler (Member) commented Sep 23, 2019

thanks @jbrette! I just got back from a vacation today. Taking a look at the PR now 👀

@thomastaylor312 added this to the 2.15.0 milestone Sep 23, 2019
@thomastaylor312 added the bug label Sep 23, 2019
@jbrette (Author) commented Sep 25, 2019

@thomastaylor312 For info: when testing the upgrade procedure, I stumbled over a bug that I fixed at the same time: User can downgrade tiller by mistake (#6497)

Fixes #6497

@hickeyma (Contributor) left a comment

@jbrette Thanks for jumping on this and pushing the PR.

I am doing some manual testing at the moment. FYI, I am just using Helm OOTB in simple open form (GOD mode, i.e. no service account or RBAC configured). I wanted to give you some feedback so far.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"archive", BuildDate:"2019-09-23T20:09:22Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

When I try to initialize with the latest code from Helm 2 master, it errors as expected:

$ helm init
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.
Error: error installing: the server could not find the requested resource

Then I rebuilt Helm with your PR, and when I re-initialize it works:

$ helm init --upgrade
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation

Warning: You appear to be using an unreleased version of Helm. Please either use the
--canary-image flag, or specify your desired tiller version with --tiller-image.

Ex:
$ helm init --tiller-image gcr.io/kubernetes-helm/tiller:v2.8.2

There is, however, an error when trying to install a scaffold chart (helm create chrt-tst2):

helm install --name chrt-tst2 chrt-tst2/
Error: release chrt-tst2 failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "default"

Tiller log output:

[....]

[tiller] 2019/09/26 14:22:32 preparing install for chrt-tst2
[storage] 2019/09/26 14:22:32 getting release history for "chrt-tst2"
[storage/driver] 2019/09/26 14:22:32 query: failed to query with labels: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
[tiller] 2019/09/26 14:22:32 rendering chrt-tst2 chart using values
2019/09/26 14:22:32 info: manifest "chrt-tst2/templates/ingress.yaml" is empty. Skipping.
[tiller] 2019/09/26 14:22:32 performing install for chrt-tst2
[tiller] 2019/09/26 14:22:32 executing 1 crd-install hooks for chrt-tst2
[tiller] 2019/09/26 14:22:32 hooks complete for crd-install chrt-tst2
[tiller] 2019/09/26 14:22:32 executing 1 pre-install hooks for chrt-tst2
[tiller] 2019/09/26 14:22:32 hooks complete for pre-install chrt-tst2
[storage] 2019/09/26 14:22:32 getting release history for "chrt-tst2"
[storage/driver] 2019/09/26 14:22:32 query: failed to query with labels: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
[storage] 2019/09/26 14:22:32 creating release "chrt-tst2.v1"
[storage/driver] 2019/09/26 14:22:32 create: failed to create: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot create resource "configmaps" in API group "" in the namespace "kube-system"
[tiller] 2019/09/26 14:22:32 warning: Failed to record release chrt-tst2: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot create resource "configmaps" in API group "" in the namespace "kube-system"
[tiller] 2019/09/26 14:22:32 warning: Release "chrt-tst2" failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "default"
[storage] 2019/09/26 14:22:32 updating release "chrt-tst2.v1"
[storage/driver] 2019/09/26 14:22:32 update: failed to update: configmaps "chrt-tst2.v1" is forbidden: User "system:serviceaccount:kube-system:default" cannot update resource "configmaps" in API group "" in the namespace "kube-system"
[tiller] 2019/09/26 14:22:32 warning: Failed to update release chrt-tst2: configmaps "chrt-tst2.v1" is forbidden: User "system:serviceaccount:kube-system:default" cannot update resource "configmaps" in API group "" in the namespace "kube-system"
[tiller] 2019/09/26 14:22:32 failed install perform step: release chrt-tst2 failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "default"

@jbrette (Author) commented Sep 26, 2019

@hickeyma I don't think the issue you're seeing is linked to this PR. What I usually do for that kind of testing is create a very permissive role like this one:

kubectl apply -f tiller-serviceaccount.yaml

with tiller-serviceaccount.yaml being:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
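
Then initialize with that service account, as earlier in the thread:

helm init --service-account=tiller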

@hickeyma (Contributor) commented Sep 26, 2019

@jbrette I understand, but it should work out of the box, as Tiller is open by default. I will investigate further and use a role.

@hickeyma (Contributor):

I have tried the PR with a K8s 1.14.1 cluster:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

Init works as expected:

$ helm init
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation

Warning: You appear to be using an unreleased version of Helm. Please either use the
--canary-image flag, or specify your desired tiller version with --tiller-image.

Ex:
$ helm init --tiller-image gcr.io/kubernetes-helm/tiller:v2.8.2

Install of chart works as expected:

$ helm install --name chrt-tst2 chrt-tst2/
NAME:   chrt-tst2
LAST DEPLOYED: Thu Sep 26 15:31:04 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME       READY  UP-TO-DATE  AVAILABLE  AGE
chrt-tst2  0/1    1           0          0s

==> v1/Pod(related)
NAME                        READY  STATUS             RESTARTS  AGE
chrt-tst2-7597465f6f-bbns8  0/1    ContainerCreating  0         0s

==> v1/Service
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)  AGE
chrt-tst2  ClusterIP  10.108.222.81  <none>       80/TCP   0s

==> v1/ServiceAccount
NAME       SECRETS  AGE
chrt-tst2  1        0s


NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=chrt-tst2,app.kubernetes.io/instance=chrt-tst2" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

@bacongobbler (Member):

I believe the issue you are seeing is orthogonal to the one @jbrette is trying to address in this PR. That particular issue is caused by Tiller somehow requiring read access to the default namespace, regardless of which namespace it's deployed in. I can't find the issue that referred to that particular error, however.
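
As a sketch (hypothetical names, not from this PR), the specific grants the Tiller log above shows as missing would look something like this, as a narrower alternative to the cluster-admin binding earlier in the thread:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tiller-minimal  # hypothetical name
rules:
# "get namespaces" failed in the log above
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get"]
# configmap list/create/update failed in kube-system (Tiller's release storage);
# a RoleBinding scoped to kube-system could narrow this further
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-minimal  # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tiller-minimal
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system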

Regardless, I think this PR should go forward as-is. That particular issue is a separate bug :)

@hickeyma (Contributor) left a comment

LGTM, thanks for this @jbrette.

I tested this manually as follows:

  • In a Kubernetes 1.16 cluster:

    • Initialization fails as shown above when using the current Helm 2 master
    • When using the PR build:
      • Initialization succeeds (using a service account)
      • Installing/deleting charts works
  • In a Kubernetes 1.14.1 cluster:

    • Initialization using the Helm 2 master branch works (service account)
    • Upgrading using the PR branch succeeds
    • Installing/deleting charts works

I raised issue #6517 for an issue when installing a chart if Tiller is installed without a service account.

@hickeyma (Contributor):

I am going to hold off on merging. I would like to get feedback from @mattfarina, @adamreese, and @thomastaylor312 as well.

@jbrette (Author) commented Oct 1, 2019

@hickeyma Do I need to do anything to this PR?

@hickeyma (Contributor) commented Oct 1, 2019

@jbrette No, waiting on other reviews.

@joejulian (Contributor):

Can we time-box that wait? There are a bunch of people waiting on this to use 1.16.

@thomastaylor312 (Contributor):

@joejulian This is going into 2.15, so we aren't going to release without it 🙂

@cnighojkar:

When can we expect this to get merged?

@thomastaylor312 (Contributor) left a comment

I tested this as well against 1.15 and 1.16, and the fix works as intended.

@bacongobbler merged commit 77a7bbb into helm:master Oct 1, 2019
@mr-flannery:

> @joejulian This is going into 2.15, so we aren't going to release without it 🙂

Is there a release date for 2.15 yet?

@bacongobbler (Member):

Not at this time. Because it's the last release where we are accepting feature requests, we are combing through the backlog to ensure contributors have a chance to update their PRs and get them merged (or close them) before we cut the release.

@bacongobbler (Member):

Tentatively, @bridgetkromhout, @thomastaylor312 and I talked about releasing 2.15 in two weeks' time, on Wednesday, October 16th. We will discuss whether that timeline sounds feasible with the rest of the maintainers in the dev call tomorrow.

@mr-flannery:

Thanks for the info!

@jbrette deleted the kube16 branch October 2, 2019
@alokhom commented Oct 6, 2019

I fixed it by ensuring the kubectl version is the same on client and server. If they differ, download the right client version from https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows and match it to the server version reported by kubectl version:

PS C:\WINDOWS\system32> helm version                                                                                                                                                                                                                                                                        
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
PS C:\WINDOWS\system32> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Then run kubectl apply -f dep.yaml, where dep.yaml is:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}

and then run

helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
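
The --override flags inject the matchLabels selector that apps/v1 requires, and the sed call rewrites the generated apiVersion client-side before piping the manifest to kubectl apply.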

@alexellis:

Is this available for download via a release yet?

@bacongobbler (Member):

This was released in 2.15.0.

@alexellis:

Thank you, Matt

@cofyc (Contributor) commented Dec 2, 2019

Can this be cherry-picked into the 2.14.x branch?

@hickeyma (Contributor) commented Dec 2, 2019

@cofyc Sorry, but changes aren't cherry-picked into previous minor branches.
