Find a better way to generate tokens #78
Comments
/assign @chrigl

@chaosaffe: GitHub didn't allow me to assign the following users: chrigl. Note that only kubernetes-sigs members and repo collaborators can be assigned. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
* Implemented bootstrap tokens via cluster-bootstrap. This adds a new dependency on k8s.io/cluster-bootstrap. Shelling out to kubeadm for token generation is now removed from machineactuator.go. The cluster-bootstrap tooling is used to generate a kubeadm-compliant token name and create a secret from it. This implementation does not follow the way kubeadm currently does it, because that would pull in a whole bunch of other dependencies, which are IMHO not needed at this point. Fixes #78
* Renamed TokenExpiration to TokenTTL
* Increased TokenTTL to 60 minutes
* Added a link to the inspiration for bootstrap/token.go
* Changed error handling to panic and removed needless lines
* Added a Role in kube-system to allow CAP-OS to create secrets. When CAP-OS is deployed to the resulting cluster, it runs in the namespace openstack-provider-system and gets the default ServiceAccount mounted in. Because of #78, CAP-OS needs to create secrets (bootstrap tokens are essentially secrets) in the kube-system namespace. Created a Role and RoleBinding to allow this. Using yaml files in config/rbac instead of kubebuilder auto-generation, because that only works for ClusterRoles so far. See kubernetes-sigs/kubebuilder#401

How to test:

```
generate-yaml.sh -c ... -p ubuntu
```

Configure machines.yaml and provider-components.yaml, then:

```
clusterctl create cluster \
  --minikube kubernetes-version=v1.12.2 \
  --vm-driver hyperkit \
  --provider openstack \
  -c examples/openstack/ubuntu/out/cluster.yaml \
  -m examples/openstack/ubuntu/out/machines.yaml \
  -p examples/openstack/ubuntu/out/provider-components.yaml
```

Wait until the master and the one node appear. Then create a new machine manually; it should be created, come up, and join the cluster. E.g.:

```
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  generateName: openstack-node-
  labels:
    set: node
  name: openstack-node-manual
  namespace: default
spec:
  providerConfig:
    value:
      apiVersion: openstackproviderconfig/v1alpha1
      availabilityZone: es1
      flavor: m1.medium
      image: Ubuntu 16.04 Xenial Xerus - Latest
      kind: OpenstackProviderConfig
      networks:
      - uuid: e21aeb04-f98a-4c05-bc84-69441dbb304c
      securityGroups:
      - default
      - secgrp_docs
      sshUserName: ubuntu
  versions:
    kubelet: 1.12.1
```

Fixes #93
* Removed redundant role and rolebinding
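A Role and RoleBinding of the kind described above might look like the following sketch. The resource names and the verb list are assumptions for illustration; the actual manifests live in config/rbac:

```yaml
# Hypothetical sketch: grant the provider's ServiceAccount secret access in kube-system
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: openstack-provider-secrets
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: openstack-provider-secrets
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: openstack-provider-secrets
subjects:
- kind: ServiceAccount
  name: default
  namespace: openstack-provider-system
```

A namespaced Role (rather than a ClusterRole) is the right scope here, since the controller only needs to manage bootstrap-token secrets in kube-system.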
Bug 1769879: allow CA Cert bundles to be trusted
We're currently depending on kubeadm to generate tokens. This is far from ideal, as it requires shelling out, and it doesn't work in environments where kubeadm is not present.