authorization upgrade to rbac broken? #9432

Closed
Deshke opened this issue Jun 23, 2020 · 5 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@Deshke

Deshke commented Jun 23, 2020

1. What kops version are you running? The command kops version will display this information.

Version 1.17.0 (git-a17511e6dd)

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:41:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.6", GitCommit:"d32e40e20d167e103faf894261614c5b45c44198", GitTreeState:"clean", BuildDate:"2020-05-20T13:08:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?

aws ec2

4. What commands did you run? What is the simplest way to reproduce this issue?

kops edit cluster

  authorization:
-    alwaysAllow: {}
+    rbac: {}
  kubeAPIServer:
+    authorizationRbacSuperUser: admin

kops update cluster --yes
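
A quick, hedged way to check whether the switch actually took effect, using only plain kubectl (the --as value assumes the superuser name set in the edit above):

kubectl get clusterroles | grep '^system:'    # bootstrap roles the apiserver auto-creates once RBAC is active
kubectl -n kube-system get rolebindings       # the bindings referenced in the steps to reproduce below
kubectl auth can-i '*' '*' --as=admin         # should print "yes" if the superuser mapping works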

5. What happened after the commands executed?

  • no RBAC (cluster)roles or rolebindings were created
  • the cluster crashed on the update

6. What did you expect to happen?

a working cluster with RBAC enabled

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2020-06-23T09:58:48Z"
  name: :duck:
spec:
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://:duck:
  dnsZone: :duck: 
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-eu-central-1a
      name: a
    - instanceGroup: master-eu-central-1b
      name: b
    - instanceGroup: master-eu-central-1c
      name: c
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-eu-central-1a
      name: a
    - instanceGroup: master-eu-central-1b
      name: b
    - instanceGroup: master-eu-central-1c
      name: c
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubeAPIServer:
    authorizationRbacSuperUser: admin
  kubernetesApiAccess:
  - :duck: 
  kubernetesVersion: 1.17.6
  masterPublicName: :duck: 
  networkCIDR: 172.20.0.0/16
  networking:
    amazonvpc: {}
  nonMasqueradeCIDR: 172.20.0.0/16
  sshAccess:
  - :duck:
  subnets:
  - cidr: 172.20.32.0/19
    name: eu-central-1a
    type: Public
    zone: eu-central-1a
  - cidr: 172.20.64.0/19
    name: eu-central-1b
    type: Public
    zone: eu-central-1b
  - cidr: 172.20.96.0/19
    name: eu-central-1c
    type: Public
    zone: eu-central-1c
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

8. Please run the commands with the most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or put them in a gist and provide the link here.

none
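
No logs were attached. A hedged diagnostic sketch for gathering them on a master node follows; it assumes Docker as the container runtime (the kops 1.17 default) and uses hypothetical placeholders for the host and container id. One thing worth checking there: kops renders authorizationRbacSuperUser as the kube-apiserver flag --authorization-rbac-super-user, which was deprecated and later removed upstream, and an apiserver handed an unrecognized flag exits on startup, which would match the crash described above.

# <master-ip> is a placeholder; use a real master's address
ssh admin@<master-ip>
# find the (possibly crash-looping) apiserver container and read its last log lines
sudo docker ps -a | grep kube-apiserver
sudo docker logs --tail 20 <container-id>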

9. Anything else we need to know?

steps to reproduce

  1. create a cluster with --authorization AlwaysAllow
  2. check the existing clusterroles/bindings with kubectl -n kube-system get rolebindings
  3. update the cluster and switch authorization to rbac
  4. run kops rolling-update cluster -> cluster gone (see the command sketch below)
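
A minimal end-to-end sketch of those four steps, assuming hypothetical placeholder names for the cluster and state store and the eu-central-1 zones from the manifest above:

kops create cluster --name=<cluster-name> --state=s3://<state-store> \
  --zones=eu-central-1a,eu-central-1b,eu-central-1c \
  --authorization=AlwaysAllow --yes               # step 1
kubectl -n kube-system get rolebindings           # step 2: baseline check
kops edit cluster                                 # step 3: replace alwaysAllow: {} with rbac: {}
kops update cluster --yes
kops rolling-update cluster --yes                 # step 4: masters get replaced; crash reported here
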
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 21, 2020
@Deshke
Author

Deshke commented Sep 21, 2020

not stale and still broken

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 21, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
