jsonnet,pkg: watch kubelet CA #250
Conversation
20ba7a5 to af5723a
/retest

2 similar comments

/retest

/retest

/test e2e-aws-operator
One thing that I think we can skip, otherwise lgtm
@@ -89,6 +90,18 @@ spec:
        - --label=namespace
          image: quay.io/coreos/prom-label-proxy:v0.1.0
          name: prom-label-proxy
        - args:
I don't think this is necessary. Prometheus reloads the scrape manager every 5 seconds, which reloads all scrape pools, and reloading a scrape pool re-creates the HTTP client, which in turn re-reads the TLS configuration. A 5-second delay is acceptable, I think, since kubelets reloading their certificates and the secret being re-mounted is racy anyway.
TIL!
fwiw, let's persist this fact in a comment inside the jsonnet; otherwise it is not obvious why a configmap reloader is not needed.
done and done
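For reference, the persisted note could read something like the sketch below; the exact wording and file location are assumptions on my part, not the actual diff:

```jsonnet
// NOTE: no configmap-reloader sidecar is needed for this CA bundle.
// Prometheus reloads its scrape manager every 5 seconds, which
// recreates each scrape pool's HTTP client and thereby re-reads the
// TLS configuration (including this CA) from disk, so a re-mounted
// configmap is picked up within roughly 5 seconds.
```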
This commit adds a new configmap-reloader container to the Prometheus K8s pods that watches the configmap holding the CAs used to sign the kubelet CSRs. These CSRs are signed by the CSR controller. The CA is rolled frequently and so must be watched by the cluster-monitoring-operator and mirrored into the openshift-monitoring namespace whenever it changes.
@@ -612,6 +613,18 @@ func (f *Factory) PrometheusK8sServingCertsCABundle() (*v1.ConfigMap, error) {
	return c, nil
}

func (f *Factory) PrometheusK8sCSRControllerCABundle(data map[string]string) (*v1.ConfigMap, error) {
mini-nit: i think we are unit-testing all those factory methods. let's add a test for this one too.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: brancz, squat. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
kubelet metrics targets are working once again 👍 thanks!
This commit adds a new configmap-reloader container to the Prometheus
K8s pods that watches the configmap holding the CAs used to sign the
kubelet CSRs. These CSRs are signed by the CSR controller. The CA is
rolled frequently and so must be watched by the
cluster-monitoring-operator and mirrored into the openshift-monitoring
namespace whenever it changes.
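Such a configmap-reloader sidecar could be sketched in jsonnet roughly as follows; the image, flag names, and mount paths here are illustrative guesses, not the exact values from this PR:

```jsonnet
// Hypothetical sketch of a configmap-reloader sidecar watching the
// CSR controller CA bundle; all names and paths are assumptions.
{
  name: 'csr-controller-ca-reloader',
  image: 'quay.io/coreos/configmap-reload:v0.0.1',  // assumed image
  args: [
    '--volume-dir=/etc/prometheus/configmaps/csr-controller-ca-bundle',
    '--webhook-url=http://localhost:9090/-/reload',
  ],
  volumeMounts: [{
    name: 'configmap-csr-controller-ca-bundle',
    mountPath: '/etc/prometheus/configmaps/csr-controller-ca-bundle',
    readOnly: true,
  }],
}
```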
These changes fix scraping of all kubelets on worker nodes; however, scraping
master kubelets will remain broken until
openshift/cluster-kube-apiserver-operator#247 lands and
makes it into the installer. Once that is in, we can change the CA configmap to
kubelet-serving-ca.

cc @s-urbaniak @deads2k @brancz