BootstrapSigner and TokenCleaner controllers #36101
Conversation
Force-pushed from 75ca384 to 0b6ea76
Just adding some thoughts from an early pass through.
Is it possible to play with this right now? I've made two attempts to get this deployed and running (built my own hyperkube image and deployed it with kubeadm, and tried the new dind stuff that hit our ML this morning), but in both cases I can't trigger anything. I've created bootstrap tokens per my WIP PR, but no config maps show up in kube-public.
```go
maxRetries := options.MaxRetries
if maxRetries == 0 {
	maxRetries = 10
}
```
Would DefaultBootstrapSignerOptions be a better place to define that default?
This is the pattern that other modules follow. Folks don't have to use DefaultBootstrapSignerOptions so this deals with that case.
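The defaulting pattern being discussed can be sketched with plain stand-in types (the names mirror the PR but are illustrative, not the real Kubernetes structs): the constructor applies the fallback itself, so callers who build options by hand instead of using `DefaultBootstrapSignerOptions` still get a sane retry count.

```go
package main

import "fmt"

// Stand-in for the controller options discussed above.
type BootstrapSignerOptions struct {
	MaxRetries int
}

// DefaultBootstrapSignerOptions returns the recommended defaults.
func DefaultBootstrapSignerOptions() BootstrapSignerOptions {
	return BootstrapSignerOptions{MaxRetries: 10}
}

// applyDefaults is a hypothetical helper showing why the constructor
// re-checks the zero value: callers are not required to go through
// DefaultBootstrapSignerOptions.
func applyDefaults(options BootstrapSignerOptions) BootstrapSignerOptions {
	if options.MaxRetries == 0 {
		options.MaxRetries = 10
	}
	return options
}

func main() {
	fmt.Println(applyDefaults(BootstrapSignerOptions{}).MaxRetries)
	fmt.Println(applyDefaults(BootstrapSignerOptions{MaxRetries: 3}).MaxRetries)
}
```

With this split, the default value only lives in two well-known places, and hand-built zero-value options never yield an infinite-retry misconfiguration.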
```go
}
configMapSelector := fields.SelectorFromSet(map[string]string{api.ObjectNameField: options.ConfigMapName})
e.configMaps, e.configMapsController = cache.NewInformer(
	&cache.ListWatch{
```
Why do we monitor the config map itself? I would have expected monitoring secrets (and eventually API servers), and assumed that when a change was detected the config map would be considered "ours" and completely re-synced.
There is no way to monitor "API servers". At some point we may have a way to update this directly, but we aren't there yet. To start, this config map will be written as part of setup and rarely changed. As we move to HA we can update it as the HA set changes, and this controller will automatically react and re-sign.
```go
if _, ok := ret[tokenID]; ok {
	glog.V(3).Infof("Duplicate bootstrap tokens found for id %s, ignoring one in %s/%s", tokenID, secret.Namespace, secret.Name)
	continue
}
```
I'm creating these as prefix-tokenID so hopefully it won't be possible for a duplicate ID to appear here.
The name doesn't matter here as we are keying off of the secret type.
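The point about keying off the secret type rather than the name can be sketched as follows. This is a simplified model with stand-in types (the real controller uses `api.Secret` and a type constant from the bootstrap API package); the shape of the filter is what matters.

```go
package main

import "fmt"

// SecretTypeBootstrapToken mirrors the bootstrap secret type string.
const SecretTypeBootstrapToken = "bootstrap.kubernetes.io/token"

// Secret is a stand-in for api.Secret.
type Secret struct {
	Name string
	Type string
	Data map[string]string
}

// parseTokens selects secrets by type, so "token-abc123" and
// "bootstrap-token-abc123" are treated identically; duplicate
// token IDs are skipped.
func parseTokens(secrets []Secret) map[string]Secret {
	ret := map[string]Secret{}
	for _, s := range secrets {
		if s.Type != SecretTypeBootstrapToken {
			continue
		}
		id := s.Data["token-id"]
		if _, ok := ret[id]; ok {
			continue // duplicate id, keep the first one seen
		}
		ret[id] = s
	}
	return ret
}

func main() {
	secrets := []Secret{
		{Name: "bootstrap-token-abc123", Type: SecretTypeBootstrapToken, Data: map[string]string{"token-id": "abc123"}},
		{Name: "unrelated", Type: "Opaque", Data: map[string]string{"token-id": "zzz"}},
	}
	fmt.Println(len(parseTokens(secrets)))
}
```

Because the type field is the discriminator, naming conventions on either side (kubeadm's prefix vs. the controller's) can differ without breaking the sync.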
```go
	MaxRetries int
}

// DefaultBootstrapSignerOptions returns a set of default options for creating a
```
Just a stale reference to the wrong struct name in this comment.
Fixed
```go
// signConfigMap computes the signatures on our latest cached objects and writes
// back if necessary.
func (e *BootstrapSigner) signConfigMap() {
	configMaps := e.listConfigMaps()
```
Trying to get this to run so I could poke around, but not having much luck yet. I'm curious why multiple config maps are referenced here? (I would have expected just the one.)
Fixed to just grab the one. Was easy enough either way.
```go
ret := &api.Secret{
	ObjectMeta: api.ObjectMeta{
		Namespace: api.NamespaceSystem,
		Name:      "token-" + tokenID,
```
Not sure it matters, but these are named "bootstrap-token-" on my end. If the name does matter, which convention should we sync on?
Again, we key off of the secret type so this doesn't matter.
```go
core.NewUpdateAction(unversioned.GroupVersionResource{Resource: "configmaps"},
	api.NamespacePublic,
	newConfigMap("tokenID", "eyJhbGciOiJIUzI1NiIsImtpZCI6InRva2VuSUQifQ..QAvK9DAjF0hSyASEkH1MOTB5rJMmbWEY9j-z1NSYILE")),
}
```
This testing stuff looks interesting, I need to see if I can make use of this for unit testing the kubeadm token commands.
```go
bootstrap.NewBootstrapSigner(
	client("bootstrap-signer"),
	bootstrap.DefaultBootstrapSignerOptions(),
).Run()
```
Run should be started with the stop chan that is in scope here like other controllers. It should not have an explicit stop method, but should stop when the channel is closed.
Cool -- the pattern changed from when I first wrote this. Also note that stopping controllers is currently pretty busted. It would be better to use ctx. But that is too big a change. See: https://github.com/kubernetes/kubernetes/blob/master/pkg/client/cache/controller.go#L125
```go
}

// listConfigMaps lists all of the cached config maps
func (e *BootstrapSigner) listConfigMaps() []*api.ConfigMap {
```
You are reimplementing listers: https://github.com/kubernetes/kubernetes/blob/master/pkg/client/listers/core/internalversion/zz_generated.configmap.go#L42
Unfortunately, Listers take an indexer. I don't have one of those as I don't need to watch across namespaces. It seems silly to create an indexer just to avoid writing a 7 line function.
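The "7 line function" trade-off can be illustrated with a minimal stand-in for `cache.Store` (the real one returns `[]interface{}` from `List()`; everything else here is illustrative): the helper just type-asserts each cached item instead of pulling in a generated, indexer-backed lister.

```go
package main

import "fmt"

// ConfigMap is a stand-in for api.ConfigMap.
type ConfigMap struct {
	Namespace, Name string
}

// store mimics the relevant slice of cache.Store's interface.
type store struct {
	items []interface{}
}

func (s *store) List() []interface{} { return s.items }

// listConfigMaps is the short, store-backed list helper under
// discussion: type-assert each item, skip anything unexpected.
func listConfigMaps(s *store) []*ConfigMap {
	var ret []*ConfigMap
	for _, obj := range s.List() {
		if cm, ok := obj.(*ConfigMap); ok {
			ret = append(ret, cm)
		}
	}
	return ret
}

func main() {
	s := &store{items: []interface{}{
		&ConfigMap{Namespace: "kube-public", Name: "cluster-info"},
	}}
	fmt.Println(len(listConfigMaps(s)))
}
```

Since the informer here only watches a single namespace, a plain store plus this loop avoids constructing an indexer whose namespace index would never be used.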
```go
	return items
}

func (e *BootstrapSigner) listSecrets() []*api.Secret {
```
Same as above: use a lister.
Same -- not indexing this so I can't easily use a lister.
```go
options.SecretResync,
cache.ResourceEventHandlerFuncs{
	AddFunc:    func(_ interface{}) { e.signConfigMap() },
	UpdateFunc: func(_, _ interface{}) { e.signConfigMap() },
```
Why not have signConfigMap() accept a config map? Then you would only check the config maps that actually changed.
This simplifies the flow. We want to re-sign whenever we get a new/changed/deleted token or whenever the config map changes. In the token-change case we don't have the config map in hand anyway, so everything just funnels down to a single "reconcile the world" function.
If we were signing a lot of config maps it would make sense to be more surgical.
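The "reconcile the world" funnel can be sketched like this (stand-in types; the real handlers are the `cache.ResourceEventHandlerFuncs` shown in the diff above): every handler ignores its event payload and calls the same full resync.

```go
package main

import "fmt"

// signer is a stand-in for the BootstrapSigner controller.
type signer struct {
	signed int
}

// signConfigMap re-reads all cached state, recomputes signatures, and
// writes back if anything differs; callers never pass it an object.
func (e *signer) signConfigMap() {
	e.signed++
}

func main() {
	e := &signer{}
	// Handlers deliberately discard the event payload and funnel into
	// the single reconcile function.
	add := func(_ interface{}) { e.signConfigMap() }
	update := func(_, _ interface{}) { e.signConfigMap() }
	del := func(_ interface{}) { e.signConfigMap() }

	add("new-token-secret")
	update("old-config-map", "new-config-map")
	del("removed-token-secret")
	fmt.Println(e.signed)
}
```

This is the classic level-triggered style: correctness depends only on the cached state at sync time, not on which event arrived, which is cheap when there is effectively one config map to sign.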
@jbeda Are you planning to get back to this soon so we can have it lgtm'd when master is opening for merges again?
@luxas -- yeah, I'll be getting back to this soon. Just coming up for air after the Heptio launch.
Yeah, really cool. Totally understandable; I guess it has been a hectic week.
Force-pushed from 0b6ea76 to 612312c
Force-pushed from ac1bba7 to 7235e14
Force-pushed from 18cb472 to 6ac4bc8
@deads2k @mikedanese PTAL. I talked to @smarterclayton quickly and I think I got to the root of what he was looking for.
Force-pushed from 7228e56 to fed564a
Per conversation in the sig-auth channel with @luxas and @liggitt, there were issues with using token.csv. @liggitt suggested we add a "BootstrapTokenAuthenticator" to handle these cases. That authenticator would look for secrets of type `bootstrap.kubernetes.io/token`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: simple-bootstrapper
  namespace: kube-system  # Only watch tokens in the "kube-system" namespace?
data:
  token-secret: ( bearer token )
  token-id: ( token id )
type: bootstrap.kubernetes.io/token
```

A request that authenticates using the "( bearer token )" would get the corresponding username. In the short term, admins will have to manually create a cluster role binding for the group or name against the existing roles[0].

cc @liggitt @luxas @jbeda -- does that accurately summarize the conversation? I'm happy to go add this if it does.

[0] kubernetes/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/testdata/cluster-roles.yaml, lines 603 to 619 in 03bde62
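The proposed authenticator could look roughly like this hypothetical sketch: a bearer token of the form `<token-id>.<token-secret>` is split, the id is looked up among the cached bootstrap-token secrets, and the secret halves are compared. The username format, the map-backed store, and all names here are assumptions for illustration, not the final design.

```go
package main

import (
	"fmt"
	"strings"
)

// tokenSecret is a stand-in for the data of a
// bootstrap.kubernetes.io/token secret.
type tokenSecret struct {
	ID, Secret string
}

// authenticate splits "<id>.<secret>", looks up the id, and compares
// the secret portion. The "system:bootstrap:" username prefix is an
// assumption for this sketch.
func authenticate(bearer string, secrets map[string]tokenSecret) (string, bool) {
	parts := strings.SplitN(bearer, ".", 2)
	if len(parts) != 2 {
		return "", false
	}
	s, found := secrets[parts[0]]
	if !found || s.Secret != parts[1] {
		return "", false
	}
	return "system:bootstrap:" + s.ID, true
}

func main() {
	secrets := map[string]tokenSecret{
		"abc123": {ID: "abc123", Secret: "verysecret"},
	}
	u, ok := authenticate("abc123.verysecret", secrets)
	fmt.Println(u, ok)
	_, ok = authenticate("abc123.wrong", secrets)
	fmt.Println(ok)
}
```

A production version would also need constant-time comparison of the secret halves and expiry checks, but the lookup-by-type-and-id shape is the core of the suggestion.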
Move these from the core API to a separate package (pkg/bootstrap/api). This also creates the constant for the new kube-public namespace.
@ericchiang Yes -- that is the idea. That would be another PR, obviously. I'd propose we document this as part of https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bootstrap-discovery.md (or perhaps a companion proposal).
This adds these to the list of controllers the Controller Manager can start. But as these are alpha, they are also currently disabled by default.
fed564a
to
415e208
Compare
/lgtm
Automatic merge from submit-queue (batch tested with PRs 38252, 41122, 36101, 41017, 41264)
Automatic merge from submit-queue (batch tested with PRs 41378, 41413, 40743, 41155, 41385)

**Expose the constants in pkg/controller/bootstrap and add a validate token function**

**What this PR does / why we need it**: In order to hook up #36101 against kubeadm, we have to expose the constants and add a function to validate the token.

**Release note**:
```release-note
NONE
```

cc @jbeda @mikedanese @pires @dmmcquay
Automatic merge from submit-queue (batch tested with PRs 41812, 41665, 40007, 41281, 41771)

**kube-apiserver: add a bootstrap token authenticator for TLS bootstrapping**

Follows up on #36101. Still needs:
* More tests.
* To be hooked up to the API server. (Do I have to do that in a separate PR after k8s.io/apiserver is synced?)
* Docs (kubernetes.io PR).
* Figure out caching strategy.
* Release notes.

cc @kubernetes/sig-auth-api-reviews @liggitt @luxas @jbeda

```release-note
Added a new secret type "bootstrap.kubernetes.io/token" for dynamically creating TLS bootstrapping bearer tokens.
```
Automatic merge from submit-queue

**Remove the kube-discovery binary from the tree**

**What this PR does / why we need it**: kube-discovery was a temporary solution to implementing this proposal: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bootstrap-discovery.md

However, this functionality is now going to be implemented in the core for v1.6 and will fully replace kube-discovery:
- #36101
- #41281
- #41417

Since `kube-discovery` isn't used in any v1.6 code, it should be removed. The image `gcr.io/google_containers/kube-discovery-${ARCH}:1.0` should and will continue to exist so kubeadm <= v1.5 continues to work.

**Release note**:
```release-note
Remove cmd/kube-discovery from the tree since it's not necessary anymore
```

@jbeda @dgoodwin @mikedanese @dmmcquay @lukemarsden @errordeveloper @pires
This is part of kubernetes/enhancements#130 and is an implementation of kubernetes/community#189.
Work that needs to be done yet in this PR:
- e2e tests (will come in a new PR)

@kubernetes/sig-cluster-lifecycle @dgoodwin @roberthbailey @mikedanese