
[stable/nginx-ingress] k8s 1.6 - perms #927

Closed
ReSearchITEng opened this issue Apr 14, 2017 · 9 comments

Labels: lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)
Comments

@ReSearchITEng (Contributor) commented Apr 14, 2017

On a brand-new 1.6 cluster:

helm install stable/nginx-ingress --name my-release
Error: release my-release failed: the server does not allow access to the requested resource (get namespaces default)

versions:
k8s: 1.6.1
helm: v2.3.1
stable/nginx-ingress: 0.3.2

Maybe this is due to RBAC in k8s 1.6?
This looks related: https://www.bountysource.com/issues/43945543-kubeadm-using-1-6-ingess-controller-can-t-access-api
This looks like a good start: jetstack/kube-lego#99

@prydonius (Member)

@ReSearchITEng yes, looks like RBAC is the issue here. Should be easy to add the Role resources to the chart.
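
For reference, here is a minimal sketch of the Role resources such a chart change might add, assuming a ServiceAccount named nginx-ingress in the default namespace; the names and the exact rule set are assumptions, not the chart's actual templates (the upstream RBAC example linked later in this thread is authoritative):

```yaml
# Hypothetical sketch only; names and rules are assumptions.
# The controller mostly needs read access to the objects it watches,
# plus event creation and ingress status updates.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress              # assumed name
---
apiVersion: rbac.authorization.k8s.io/v1beta1   # v1beta1 matches the k8s 1.6 era
kind: ClusterRole
metadata:
  name: nginx-ingress
rules:
  - apiGroups: [""]
    resources: ["configmaps", "endpoints", "nodes", "pods", "secrets"]
    verbs: ["list", "watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses/status"]
    verbs: ["update"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress
subjects:
  - kind: ServiceAccount
    name: nginx-ingress
    namespace: default             # assumed release namespace
```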

@iantanwx commented May 5, 2017

I can confirm this 'bug' on kubeadm 1.6.2. I was able to reproduce and fix it by going through the usual steps to install nginx-ingress, and then manually editing the deployment afterward.
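
Concretely, that manual edit boils down to pointing the controller Deployment's pod template at a ServiceAccount bound to the needed roles; a sketch, assuming the nginx-ingress ServiceAccount from the earlier example:

```yaml
# Sketch only: the one field to add to the controller Deployment's pod spec.
spec:
  template:
    spec:
      serviceAccountName: nginx-ingress   # assumed ServiceAccount name
```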

@iantanwx commented May 5, 2017

Is there a good way to solve this problem using the helm CLI? It does not seem that there is an easy way to automate the process.
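
One stopgap (an assumption, not a documented chart feature at the time) is to script the same edit after helm install with kubectl, e.g.:

kubectl patch deployment my-release-nginx-ingress-controller -p '{"spec":{"template":{"spec":{"serviceAccountName":"nginx-ingress"}}}}'

(The deployment name and ServiceAccount name here are assumed; check kubectl get deployments for the actual release name.)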

@noonien commented May 22, 2017

Any updates on this?

@mgoodness self-assigned this May 30, 2017
@ReSearchITEng (Contributor, Author)

For nginx-ingress, in case you don't want to give all permissions to everything (https://github.com/ReSearchITEng/kubeadm-playbook/raw/master/allow-all-all-rbac.yml), one has to tune https://raw.githubusercontent.com/kubernetes/ingress/master/examples/rbac/nginx/nginx-ingress-controller-rbac.yml or, even better, add it to the helm chart.
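
Besides the cluster-wide rules, that upstream manifest also carries a namespaced Role for leader election; an abbreviated sketch, with names assumed and the linked nginx-ingress-controller-rbac.yml as the authoritative source:

```yaml
# Abbreviated, assumption-laden sketch of the namespaced part of the upstream example.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role        # assumed name
  namespace: nginx-ingress        # assumed controller namespace
rules:
  - apiGroups: [""]
    resources: ["configmaps", "pods", "secrets", "namespaces"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    # the election lock ConfigMap is typically named ingress-controller-leader-<class>
    resourceNames: ["ingress-controller-leader-nginx"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]             # create cannot be restricted by resourceNames
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "create", "update"]
```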

@ReSearchITEng (Contributor, Author)

To be fixed by PR #1235.
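
Once that PR lands, the expected usage would presumably be a values toggle rather than manual edits; something along these lines (the value name is an assumption, check the merged chart's values.yaml):

helm install stable/nginx-ingress --name my-release --set rbac.create=true   # value name assumed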

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 31, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 30, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
