
Kubelet can't start API server if it needs a fresh certificate from API server #167

Closed
jlebon opened this issue Aug 24, 2018 · 6 comments

@jlebon
Member

jlebon commented Aug 24, 2018

In the local dev case, one may have provisioned only a single master. If one restarts that master and the certificate has expired in the meantime, the kubelet will fail on startup like so:

Aug 24 15:41:58 test1-master-0 systemd[1]: Started Kubernetes Kubelet.
Aug 24 15:41:59 test1-master-0 docker[19162]: Flag --rotate-certificates has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluste
Aug 24 15:41:59 test1-master-0 docker[19162]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/
Aug 24 15:41:59 test1-master-0 docker[19162]: Flag --allow-privileged has been deprecated, will be removed in a future version
Aug 24 15:41:59 test1-master-0 docker[19162]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Aug 24 15:41:59 test1-master-0 docker[19162]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubele
Aug 24 15:41:59 test1-master-0 docker[19162]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kub
Aug 24 15:41:59 test1-master-0 docker[19162]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kub
Aug 24 15:41:59 test1-master-0 docker[19162]: Flag --anonymous-auth has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kub
Aug 24 15:41:59 test1-master-0 docker[19162]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kube
Aug 24 15:41:59 test1-master-0 docker[19162]: I0824 15:41:59.069056   19185 server.go:418] Version: v1.11.0+d4cacc0
Aug 24 15:41:59 test1-master-0 docker[19162]: I0824 15:41:59.069191   19185 server.go:496] acquiring file lock on "/var/run/lock/kubelet.lock"
Aug 24 15:41:59 test1-master-0 docker[19162]: I0824 15:41:59.069220   19185 server.go:501] watching for inotify events for: /var/run/lock/kubelet.lock
Aug 24 15:41:59 test1-master-0 docker[19162]: I0824 15:41:59.069373   19185 plugins.go:97] No cloud provider specified.
Aug 24 15:41:59 test1-master-0 docker[19162]: E0824 15:41:59.071501   19185 bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2018-08-23 17:08:07 +0000 UTC
Aug 24 15:41:59 test1-master-0 docker[19162]: I0824 15:41:59.072551   19185 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 24 15:41:59 test1-master-0 docker[19162]: F0824 15:41:59.093262   19185 server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://test1-api.mco.testing:6443/apis/certificates.k8s.io/v1beta1/certifica
Aug 24 15:41:59 test1-master-0 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Aug 24 15:41:59 test1-master-0 systemd[1]: Unit kubelet.service entered failed state.
Aug 24 15:41:59 test1-master-0 systemd[1]: kubelet.service failed.

@aaronlevy says:

So what I'm thinking happened: we give master nodes a short-lived certificate (30 min, iirc) during the initial bootstrap. The intention is that this gets rotated out within that 30 minutes. However, if there was a single master and a reboot was timed such that the rotation didn't happen (and now it's expired)… that puts us in a bit of a pickle.
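A quick way to confirm this failure mode is to check whether the kubelet's current client cert has already expired. The sketch below generates a throwaway cert so it runs standalone; on a real node you would point `CERT` at `/var/lib/kubelet/pki/kubelet-client-current.pem` (the path from the log above):

```shell
# Self-contained demo: make a short-lived throwaway cert, then test expiry.
# On a real master, set CERT=/var/lib/kubelet/pki/kubelet-client-current.pem
CERT=/tmp/demo-client.pem
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-client-key.pem \
  -out "$CERT" -days 1 -subj "/CN=system:node:demo" 2>/dev/null

# -checkend 0 exits non-zero if the cert is already expired.
if openssl x509 -in "$CERT" -noout -checkend 0; then
  echo "certificate still valid"
else
  echo "certificate expired; kubelet cannot request a new one without a reachable API server"
fi
```

An expired client cert plus no reachable API server is exactly the deadlock described in this issue: the kubelet needs the API server to sign a new CSR, but on a single master the API server only runs once the kubelet starts it.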

@praveenkumar
Contributor

@aaronlevy Can you point to where that short-lived certificate is located on the master or bootstrap node? I am in a situation where I created a cluster using libvirt (which was up and running), then shut it down, and two days later it is not coming up; etcd has the logs below.

I still have that setup handy if you need any more info. I just want to understand for how long I can keep my libvirt VM shut down and then start it when required without running into any issues.

[root@test1-master-0 core]# crictl ps
CONTAINER ID        IMAGE                                                              CREATED             STATE               NAME                ATTEMPT
ab1c998534064       94bc3af972c98ce73f99d70bd72144caa8b63e541ccc9d844960b7f0ca77d7c4   4 minutes ago       Running             etcd-member         1
[root@test1-master-0 core]# crictl logs ab1c998534064
2018-12-05 09:41:31.214799 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/var/lib/etcd
2018-12-05 09:41:31.215415 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-member-test1-master-0
2018-12-05 09:41:31.215476 I | etcdmain: etcd Version: 3.2.14
2018-12-05 09:41:31.215489 I | etcdmain: Git SHA: fb5cd6f1c
2018-12-05 09:41:31.215494 I | etcdmain: Go Version: go1.8.5
2018-12-05 09:41:31.215499 I | etcdmain: Go OS/Arch: linux/amd64
2018-12-05 09:41:31.215505 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2018-12-05 09:41:31.215686 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2018-12-05 09:41:31.215720 I | embed: peerTLS: cert = /etc/ssl/etcd/system:etcd-peer:test1-etcd-0.tt.testing.crt, key = /etc/ssl/etcd/system:etcd-peer:test1-etcd-0.tt.testing.key, ca = , trusted-ca = /etc/ssl/etcd/ca.crt, client-cert-auth = true
2018-12-05 09:41:31.219274 I | embed: listening for peers on https://0.0.0.0:2380
2018-12-05 09:41:31.219572 I | embed: listening for client requests on 0.0.0.0:2379
2018-12-05 09:41:31.310536 I | etcdserver: name = etcd-member-test1-master-0
2018-12-05 09:41:31.311205 I | etcdserver: data dir = /var/lib/etcd
2018-12-05 09:41:31.311265 I | etcdserver: member dir = /var/lib/etcd/member
2018-12-05 09:41:31.311302 I | etcdserver: heartbeat = 100ms
2018-12-05 09:41:31.311393 I | etcdserver: election = 1000ms
2018-12-05 09:41:31.311473 I | etcdserver: snapshot count = 100000
2018-12-05 09:41:31.311657 I | etcdserver: advertise client URLs = https://192.168.126.11:2379
2018-12-05 09:41:31.554976 I | etcdserver: restarting member 7d3fdaaceb134d3d in cluster d98ef57fc5131193 at commit index 15764
2018-12-05 09:41:31.556475 I | raft: 7d3fdaaceb134d3d became follower at term 2
2018-12-05 09:41:31.556576 I | raft: newRaft 7d3fdaaceb134d3d [peers: [], term: 2, commit: 15764, applied: 0, lastindex: 15764, lastterm: 2]
2018-12-05 09:41:31.710712 W | auth: simple token is not cryptographically signed
2018-12-05 09:41:31.739007 I | etcdserver: starting server... [version: 3.2.14, cluster version: to_be_decided]
2018-12-05 09:41:31.744323 I | embed: ClientTLS: cert = /etc/ssl/etcd/system:etcd-server:test1-etcd-0.tt.testing.crt, key = /etc/ssl/etcd/system:etcd-server:test1-etcd-0.tt.testing.key, ca = , trusted-ca = /etc/ssl/etcd/ca.crt, client-cert-auth = true
2018-12-05 09:41:31.749681 I | etcdserver/membership: added member 7d3fdaaceb134d3d [https://test1-etcd-0.tt.testing:2380] to cluster d98ef57fc5131193
2018-12-05 09:41:31.750073 N | etcdserver/membership: set the initial cluster version to 3.2
2018-12-05 09:41:31.750222 I | etcdserver/api: enabled capabilities for version 3.2
2018-12-05 09:41:32.458097 I | raft: 7d3fdaaceb134d3d is starting a new election at term 2
2018-12-05 09:41:32.458417 I | raft: 7d3fdaaceb134d3d became candidate at term 3
2018-12-05 09:41:32.458500 I | raft: 7d3fdaaceb134d3d received MsgVoteResp from 7d3fdaaceb134d3d at term 3
2018-12-05 09:41:32.458606 I | raft: 7d3fdaaceb134d3d became leader at term 3
2018-12-05 09:41:32.458666 I | raft: raft.node: 7d3fdaaceb134d3d elected leader 7d3fdaaceb134d3d at term 3
2018-12-05 09:41:32.466818 I | embed: ready to serve client requests
2018-12-05 09:41:32.467766 I | etcdserver: published {Name:etcd-member-test1-master-0 ClientURLs:[https://192.168.126.11:2379]} to cluster d98ef57fc5131193
2018-12-05 09:41:32.468564 I | embed: serving client requests on [::]:2379
WARNING: 2018/12/05 09:41:32 Failed to dial 0.0.0.0:2379: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"; please retry.

@aaronlevy
Contributor

I believe the locations are:

  • On bootstrap node: opt/openshift/auth/kubeconfig-kubelet
  • On master nodes: /etc/kubernetes/kubeconfig (this should be defined by the --bootstrap-kubeconfig flag).

The kubelet picks a random(?) time before expiration at which to request a new cert, so the cert might be rotated anywhere in the 30-minute window after starting.

Ideally we would rotate immediately after it had posted a CSR and gotten a full client cert. This is something that @abhinavdahiya was going to look into this sprint (see https://jira.coreos.com/browse/CORS-810). But there may be some kubelet behaviors that block this.
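The randomized rotation timing described above can be sketched as follows. The exact jitter range (70-90% of the cert's lifetime) is an assumption for illustration, not the kubelet's actual constant:

```shell
# Illustrative only: a kubelet-style jittered rotation deadline for the
# 30-minute bootstrap cert. The 70-90% window is an assumed range, chosen
# to show why a reboot can land before rotation has happened.
LIFETIME_SECONDS=$((30 * 60))     # 30-minute bootstrap cert
JITTER_PCT=$((RANDOM % 20 + 70))  # random percentage in [70, 89]
ROTATE_AFTER=$((LIFETIME_SECONDS * JITTER_PCT / 100))
echo "rotate after ${ROTATE_AFTER}s of the ${LIFETIME_SECONDS}s lifetime"
```

If the node is shut down before the randomly chosen deadline and stays down past expiry, rotation never happens, which is the window @aaronlevy describes.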

@praveenkumar
Contributor

@aaronlevy So below are the cert details from the master node where I am getting that error, and I am not able to see that any of them are expired.

[root@test1-master-0 kubernetes]# pwd
/etc/kubernetes
[root@test1-master-0 kubernetes]# openssl x509 -in ca.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: OU=openshift, CN=root-ca
        Validity
            Not Before: Dec  5 07:20:17 2018 GMT
            Not After : Dec  2 07:20:17 2028 GMT
        Subject: OU=openshift, CN=root-ca
[root@test1-master-0 kubernetes]# ls -al /etc/ssl/certs/
total 12
drwxr-xr-x. 2 root root  117 Dec  5 05:55 .
drwxr-xr-x. 5 root root   81 Dec  5 05:55 ..
lrwxrwxrwx. 1 root root   49 Dec  5 05:55 ca-bundle.crt -> /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
lrwxrwxrwx. 1 root root   55 Dec  5 05:55 ca-bundle.trust.crt -> /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt
-rwxr-xr-x. 1 root root  610 Dec  5 05:55 make-dummy-cert
-rw-r--r--. 1 root root 2516 Dec  5 05:55 Makefile
-rwxr-xr-x. 1 root root  829 Dec  5 05:55 renew-dummy-cert
[root@test1-master-0 kubernetes]# openssl x509 -in /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 6828503384748696800 (0x5ec3b7a6437fa4e0)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=ACCVRAIZ1, OU=PKIACCV, O=ACCV, C=ES
        Validity
            Not Before: May  5 09:37:37 2011 GMT
            Not After : Dec 31 09:37:37 2030 GMT
        Subject: CN=ACCVRAIZ1, OU=PKIACCV, O=ACCV, C=ES
[root@test1-master-0 kubernetes]# openssl x509 -in /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 6828503384748696800 (0x5ec3b7a6437fa4e0)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=ACCVRAIZ1, OU=PKIACCV, O=ACCV, C=ES
        Validity
            Not Before: May  5 09:37:37 2011 GMT
            Not After : Dec 31 09:37:37 2030 GMT
 
[root@test1-master-0 kubernetes]# ls -al /var/lib/kubelet/pki/
total 8
drwxr-xr-x. 2 root root  166 Dec  5 07:35 .
drwxr-xr-x. 7 root root  153 Dec  5 07:28 ..
-rw-------. 1 root root 1187 Dec  5 07:28 kubelet-client-2018-12-05-07-28-01.pem
lrwxrwxrwx. 1 root root   59 Dec  5 07:28 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2018-12-05-07-28-01.pem
-rw-------. 1 root root 1240 Dec  5 07:35 kubelet-server-2018-12-05-07-35-14.pem
lrwxrwxrwx. 1 root root   59 Dec  5 07:35 kubelet-server-current.pem -> /var/lib/kubelet/pki/kubelet-server-2018-12-05-07-35-14.pem
[root@test1-master-0 kubernetes]# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-2018-12-05-07-28-01.pem -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            09:3f:c5:f3:f8:6d:24:e6:7d:18:3e:de:a8:66:5c:bc:90:4e:a8:04
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: OU=bootkube, CN=kube-ca
        Validity
            Not Before: Dec  5 07:23:00 2018 GMT
            Not After : Jan  4 07:23:00 2019 GMT
        Subject: O=system:nodes, CN=system:node:test1-master-0
        Subject Public Key Info:
[root@test1-master-0 kubernetes]# openssl x509 -in /var/lib/kubelet/pki/kubelet-server-2018-12-05-07-35-14.pem -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            3e:3d:e3:cc:c8:02:ca:22:d6:1f:1f:e3:70:b0:35:45:8d:04:3c:3c
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: OU=bootkube, CN=kube-ca
        Validity
            Not Before: Dec  5 07:30:00 2018 GMT
            Not After : Jan  4 07:30:00 2019 GMT
        Subject: O=system:nodes, CN=system:node:test1-master-0
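When auditing a directory of certs like the above, dumping the full text of each one is noisy; `-subject -enddate` gives a compact expiry overview. The sketch below creates a demo cert in a temp directory so it runs standalone; on a node you would set `PKI_DIR=/var/lib/kubelet/pki`:

```shell
# Compact expiry audit. Standalone demo: generate one cert in a temp dir;
# on a real node, point PKI_DIR at /var/lib/kubelet/pki instead.
PKI_DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$PKI_DIR/key.pem" \
  -out "$PKI_DIR/demo.pem" -days 30 -subj "/CN=demo" 2>/dev/null

for c in "$PKI_DIR"/*.pem; do
  # x509 parsing fails (silently) on the private key; only certs print.
  openssl x509 -in "$c" -noout -subject -enddate 2>/dev/null && echo "  ($c)"
done
```

Comparing each `notAfter` date against the current time is a faster check than reading the full `-text` dumps.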

@aaronlevy
Contributor

From what you posted in #167 (comment)

WARNING: 2018/12/05 09:41:32 Failed to dial 0.0.0.0:2379:

etcd is what is listening on :2379, so I don't believe this is the same issue as the original. It might be better to open a new issue to discuss the separate problem you're having. On a side note, I'm unsure why it would be dialing 0.0.0.0 -- it's fine to listen on all interfaces, but dialing that address seems wrong / maybe etcd DNS is configured improperly?
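One plausible reason dialing 0.0.0.0 trips "bad certificate" is that TLS verification matches the dialed name or IP against the serving cert's subjectAltName entries, and 0.0.0.0 will not be among them. A minimal sketch, using a throwaway cert with SANs borrowed from the names in the logs above, shows what to inspect:

```shell
# Demo: issue a cert whose SANs cover the etcd peer DNS name and node IP
# seen in the logs, then print the SAN list. A client dialing an address
# absent from this list (e.g. 0.0.0.0) fails the handshake even though
# the cert itself is unexpired. Requires OpenSSL 1.1.1+ for -addext.
SAN_CERT=/tmp/demo-etcd.pem
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-etcd-key.pem \
  -out "$SAN_CERT" -days 1 -subj "/CN=etcd-demo" \
  -addext "subjectAltName=DNS:test1-etcd-0.tt.testing,IP:192.168.126.11" 2>/dev/null

openssl x509 -in "$SAN_CERT" -noout -text | grep -A1 "Subject Alternative Name"
```

Running the same `grep` against the real `/etc/ssl/etcd/system:etcd-server:*.crt` file would show which addresses the server cert actually covers.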

@wking
Member

wking commented Dec 6, 2018

etcd is what is listening on :2379, so I don't believe this is the same issue as the original. Might be better to open a new issue to discuss the separate problem you're having.

Already moved to coreos/kubecsr#22 ;).

@eparis
Member

eparis commented Feb 19, 2019

Cert rotation and lifetimes are not something the installer will be addressing. Please work with the master team (preferably in BZ) for further discussion if you are having problems.

@eparis eparis closed this as completed Feb 19, 2019
wking added a commit to wking/openshift-installer that referenced this issue Feb 28, 2019
…-release:4.0.0-0.6

Clayton pushed 4.0.0-0.nightly-2019-02-27-213933 to
quay.io/openshift-release-dev/ocp-release:4.0.0-0.6.  Extracting the
associated RHCOS build:

  $ oc adm release info --pullspecs quay.io/openshift-release-dev/ocp-release:4.0.0-0.6 | grep machine-os-content
    machine-os-content                            registry.svc.ci.openshift.org/ocp/4.0-art-latest-2019-02-27-213933@sha256:1262533e31a427917f94babeef2774c98373409897863ae742ff04120f32f79b
  $ oc image info registry.svc.ci.openshift.org/ocp/4.0-art-latest-2019-02-26-125216@sha256:1262533e31a427917f94babeef2774c98373409897863ae742ff04120f32f79b | grep version
              version=47.330

that's the same machine-os-content image referenced from 4.0.0-0.5,
which we used for installer v0.13.0.

Renaming OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE gets us CI testing
of the pinned release despite openshift/release@60007df2 (Use
RELEASE_IMAGE_LATEST for CVO payload, 2018-10-03,
openshift/release#1793).

Also comment out regions which this particular RHCOS build wasn't
pushed to, leaving only:

  $ curl -s https://releases-rhcos.svc.ci.openshift.org/storage/releases/maipo/47.330/meta.json | jq -r '.amis[] | .name'
  ap-northeast-1
  ap-northeast-2
  ap-south-1
  ap-southeast-1
  ap-southeast-2
  ca-central-1
  eu-central-1
  eu-west-1
  eu-west-2
  eu-west-3
  sa-east-1
  us-east-1
  us-east-2
  us-west-1
  us-west-2

I'd initially expected to export the pinning environment variables in
release.sh, but I've put them in build.sh here because our continuous
integration tests use build.sh directly and don't go through
release.sh.

Using the slick, new change-log generator from [1], here's everything
that changed in the update payload:

  $ oc adm release info --changelog ~/.local/lib/go/src --changes-from quay.io/openshift-release-dev/ocp-release:4.0.0-0.5 quay.io/openshift-release-dev/ocp-release:4.0.0-0.6
  # 4.0.0-0.6

  Created: 2019-02-28 20:40:11 +0000 UTC
  Image Digest: `sha256:5ce3d05da3bfa3d0310684f5ac53d98d66a904d25f2e55c2442705b628560962`
  Promoted from registry.svc.ci.openshift.org/ocp/release:4.0.0-0.nightly-2019-02-27-213933

  ## Changes from 4.0.0-0.5

  ### Components

  * Kubernetes 1.12.4

  ### New images

  * [pod](https://github.com/openshift/images) git [2f60da39](openshift/images@2f60da3) `sha256:c0d602467dfe0299ce577ba568a9ef5fb9b0864bac6455604258e7f5986d3509`

  ### Rebuilt images without code change

  * [cloud-credential-operator](https://github.com/openshift/cloud-credential-operator) git [01bbf372](openshift/cloud-credential-operator@01bbf37) `sha256:f87be09923a5cb081722634d2e0c3d0a5633ea2c23da651398d4e915ad9f73b0`
  * [cluster-autoscaler](https://github.com/openshift/kubernetes-autoscaler) git [d8a4a304](openshift/kubernetes-autoscaler@d8a4a30) `sha256:955413b82cf8054ce149bc05c18297a8abe9c59f9d0034989f08086ae6c71fa6`
  * [cluster-autoscaler-operator](https://github.com/openshift/cluster-autoscaler-operator) git [73c46659](openshift/cluster-autoscaler-operator@73c4665) `sha256:756e813fce04841993c8060d08a5684c173cbfb61a090ae67cb1558d76a0336e`
  * [cluster-bootstrap](https://github.com/openshift/cluster-bootstrap) git [05a5c8e6](openshift/cluster-bootstrap@05a5c8e) `sha256:dbdd90da7d256e8d49e4e21cb0bdef618c79d83f539049f89f3e3af5dbc77e0f`
  * [cluster-config-operator](https://github.com/openshift/cluster-config-operator) git [aa1805e7](openshift/cluster-config-operator@aa1805e) `sha256:773d3355e6365237501d4eb70d58cd0633feb541d4b6f23d6a5f7b41fd6ad2f5`
  * [cluster-dns-operator](https://github.com/openshift/cluster-dns-operator) git [ffb04ae9](openshift/cluster-dns-operator@ffb04ae) `sha256:ca15f98cc1f61440f87950773329e1fdf58e73e591638f18c43384ad4f8f84da`
  * [cluster-machine-approver](https://github.com/openshift/cluster-machine-approver) git [2fbc6a6b](openshift/cluster-machine-approver@2fbc6a6) `sha256:a66af3b1f4ae98257ab600d54f8c94f3a4136f85863bbe0fa7c5dba65c5aea46`
  * [cluster-node-tuned](https://github.com/openshift/openshift-tuned) git [278ee72d](openshift/openshift-tuned@278ee72) `sha256:ad71743cc50a6f07eba013b496beab9ec817603b07fd3f5c022fffbf400e4f4b`
  * [cluster-node-tuning-operator](https://github.com/openshift/cluster-node-tuning-operator) git [b5c14deb](openshift/cluster-node-tuning-operator@b5c14de) `sha256:e61d1fdb7ad9f5fed870e917a1bc8fac9ccede6e4426d31678876bcb5896b000`
  * [cluster-openshift-controller-manager-operator](https://github.com/openshift/cluster-openshift-controller-manager-operator) git [3f79b51b](openshift/cluster-openshift-controller-manager-operator@3f79b51) `sha256:8f3b40b4dd29186975c900e41b1a94ce511478eeea653b89a065257a62bf3ae9`
  * [cluster-svcat-apiserver-operator](https://github.com/openshift/cluster-svcat-apiserver-operator) git [547648cb](openshift/cluster-svcat-apiserver-operator@547648c) `sha256:e7c9323b91dbb11e044d5a1277d1e29d106d92627a6c32bd0368616e0bcf631a`
  * [cluster-svcat-controller-manager-operator](https://github.com/openshift/cluster-svcat-controller-manager-operator) git [9261f420](openshift/cluster-svcat-controller-manager-operator@9261f42) `sha256:097a429eda2306fcd49e14e4f5db8ec3a09a90fa29ebdbc98cc519511ab6fb5b`
  * [cluster-version-operator](https://github.com/openshift/cluster-version-operator) git [70c0232e](openshift/cluster-version-operator@70c0232) `sha256:7d59edff68300e13f0b9e56d2f2bc1af7f0051a9fbc76cc208239137ac10f782`
  * [configmap-reloader](https://github.com/openshift/configmap-reload) git [3c2f8572](openshift/configmap-reload@3c2f857) `sha256:32360c79d8d8d54cea03675c24f9d0a69877a2f2e16b949ca1d97440b8f45220`
  * [console-operator](https://github.com/openshift/console-operator) git [32ed7c03](openshift/console-operator@32ed7c0) `sha256:f8c07cb72dc8aa931bbfabca9b4133f3b93bc96da59e95110ceb8c64f3efc755`
  * [container-networking-plugins-supported](https://github.com/openshift/ose-containernetworking-plugins) git [f6a58dce](openshift/ose-containernetworking-plugins@f6a58dc) `sha256:c6434441fa9cc96428385574578c41e9bc833b6db9557df1dd627411d9372bf4`
  * [container-networking-plugins-unsupported](https://github.com/openshift/ose-containernetworking-plugins) git [f6a58dce](openshift/ose-containernetworking-plugins@f6a58dc) `sha256:bb589cf71d4f41977ec329cf808cdb956d5eedfc604e36b98cfd0bacce513ffc`
  * [coredns](https://github.com/openshift/coredns) git [fbcb8252](openshift/coredns@fbcb825) `sha256:2f1812a95e153a40ce607de9b3ace7cae5bee67467a44a64672dac54e47f2a66`
  * [docker-builder](https://github.com/openshift/builder) git [1a77d837](openshift/builder@1a77d83) `sha256:27062ab2c62869e5ffeca234e97863334633241089a5d822a19350f16945fbcb`
  * [etcd](https://github.com/openshift/etcd) git [a0e62b48](openshift/etcd@a0e62b4) `sha256:e4e9677d004f8f93d4f084739b4502c2957c6620d633e1fdb379c33243c684fa`
  * [grafana](https://github.com/openshift/grafana) git [58efe0eb](openshift/grafana@58efe0e) `sha256:548abcc50ccb8bb17e6be2baf050062a60fc5ea0ca5d6c59ebcb8286fc9eb043`
  * [haproxy-router](https://github.com/openshift/router) git [2c33f47f](openshift/router@2c33f47) `sha256:c899b557e4ee2ea7fdbe5c37b5f4f6e9f9748a39119130fa930d9497464bd957`
  * [k8s-prometheus-adapter](https://github.com/openshift/k8s-prometheus-adapter) git [815fa76b](openshift/k8s-prometheus-adapter@815fa76) `sha256:772c1b40b21ccaa9ffcb5556a1228578526a141b230e8ac0afe19f14404fdffc`
  * [kube-rbac-proxy](https://github.com/openshift/kube-rbac-proxy) git [3f271e09](openshift/kube-rbac-proxy@3f271e0) `sha256:b6de05167ecab0472279cdc430105fac4b97fb2c43d854e1c1aa470d20a36572`
  * [kube-state-metrics](https://github.com/openshift/kube-state-metrics) git [2ab51c9f](openshift/kube-state-metrics@2ab51c9) `sha256:611c800c052de692c84d89da504d9f386d3dcab59cbbcaf6a26023756bc863a0`
  * [libvirt-machine-controllers](https://github.com/openshift/cluster-api-provider-libvirt) git [7ff8b08f](openshift/cluster-api-provider-libvirt@7ff8b08) `sha256:6ab8749886ec26d45853c0e7ade3c1faaf6b36e09ba2b8a55f66c6cc25052832`
  * [multus-cni](https://github.com/openshift/ose-multus-cni) git [61f9e088](https://github.com/openshift/ose-multus-cni/commit/61f9e0886370ea5f6093ed61d4cfefc6dadef582) `sha256:e3f87811d22751e7f06863e7a1407652af781e32e614c8535f63d744e923ea5c`
  * [oauth-proxy](https://github.com/openshift/oauth-proxy) git [b771960b](openshift/oauth-proxy@b771960) `sha256:093a2ac687849e91671ce906054685a4c193dfbed27ebb977302f2e09ad856dc`
  * [openstack-machine-controllers](https://github.com/openshift/cluster-api-provider-openstack) git [c2d845b](openshift/cluster-api-provider-openstack@c2d845b) `sha256:f9c321de068d977d5b4adf8f697c5b15f870ccf24ad3e19989b129e744a352a7`
  * [operator-registry](https://github.com/operator-framework/operator-registry) git [0531400c](operator-framework/operator-registry@0531400) `sha256:730f3b504cccf07e72282caf60dc12f4e7655d7aacf0374d710c3f27125f7008`
  * [prom-label-proxy](https://github.com/openshift/prom-label-proxy) git [46423f9d](openshift/prom-label-proxy@46423f9) `sha256:3235ad5e22b6f560d447266e0ecb2e5655fda7c0ab5c1021d8d3a4202f04d2ca`
  * [prometheus](https://github.com/openshift/prometheus) git [6e5fb5dc](openshift/prometheus@6e5fb5d) `sha256:013455905e4a6313f8c471ba5f99962ec097a9cecee3e22bdff3e87061efad57`
  * [prometheus-alertmanager](https://github.com/openshift/prometheus-alertmanager) git [4617d550](openshift/prometheus-alertmanager@4617d55) `sha256:54512a6cf25cf3baf7fed0b01a1d4786d952d93f662578398cad0d06c9e4e951`
  * [prometheus-config-reloader](https://github.com/openshift/prometheus-operator) git [f8a0aa17](openshift/prometheus-operator@f8a0aa1) `sha256:244fc5f1a4a0aa983067331c762a04a6939407b4396ae0e86a1dd1519e42bb5d`
  * [prometheus-node-exporter](https://github.com/openshift/node_exporter) git [f248b582](openshift/node_exporter@f248b58) `sha256:390e5e1b3f3c401a0fea307d6f9295c7ff7d23b4b27fa0eb8f4017bd86d7252c`
  * [prometheus-operator](https://github.com/openshift/prometheus-operator) git [f8a0aa17](openshift/prometheus-operator@f8a0aa1) `sha256:6e697dcaa19e03bded1edf5770fb19c0d2cd8739885e79723e898824ce3cd8f5`
  * [service-catalog](https://github.com/openshift/service-catalog) git [b24ffd6f](openshift/service-catalog@b24ffd6) `sha256:85ea2924810ced0a66d414adb63445a90d61ab5318808859790b1d4b7decfea6`
  * [service-serving-cert-signer](https://github.com/openshift/service-serving-cert-signer) git [30924216](openshift/service-serving-cert-signer@3092421) `sha256:7f89db559ffbd3bf609489e228f959a032d68dd78ae083be72c9048ef0c35064`
  * [telemeter](https://github.com/openshift/telemeter) git [e12aabe4](openshift/telemeter@e12aabe) `sha256:fd518d2c056d4ab8a89d80888e0a96445be41f747bfc5f93aa51c7177cf92b92`

  ### [aws-machine-controllers](https://github.com/openshift/cluster-api-provider-aws)

  * client: add cluster-api-provider-aws to UserAgent for AWS API calls [openshift#167](openshift/cluster-api-provider-aws#167)
  * Drop the yaml unmarshalling [openshift#155](openshift/cluster-api-provider-aws#155)
  * [Full changelog](openshift/cluster-api-provider-aws@46f4852...c0c3b9e)

  ### [cli, deployer, hyperkube, hypershift, node, tests](https://github.com/openshift/ose)

  * Build OSTree using baked SELinux policy [#22081](https://github.com/openshift/ose/pull/22081)
  * NodeName was being cleared for `oc debug node/X` instead of set [#22086](https://github.com/openshift/ose/pull/22086)
  * UPSTREAM: 73894: Print the involved object in the event table [#22039](https://github.com/openshift/ose/pull/22039)
  * Publish CRD openapi [#22045](https://github.com/openshift/ose/pull/22045)
  * UPSTREAM: 00000: wait for CRD discovery to be successful once before [#22149](https://github.com/openshift/ose/pull/22149)
  * `oc adm release info --changelog` should clone if necessary [#22148](https://github.com/openshift/ose/pull/22148)
  * [Full changelog](openshift/ose@c547bc3...0cbcfc5)

  ### [cluster-authentication-operator](https://github.com/openshift/cluster-authentication-operator)

  * Add redeploy on serving cert and operator pod template change [openshift#75](openshift/cluster-authentication-operator#75)
  * Create the service before waiting for serving certs [openshift#84](openshift/cluster-authentication-operator#84)
  * [Full changelog](openshift/cluster-authentication-operator@78dd53b...35879ec)

  ### [cluster-image-registry-operator](https://github.com/openshift/cluster-image-registry-operator)

  * Enable subresource status [openshift#209](openshift/cluster-image-registry-operator#209)
  * Add ReadOnly flag [openshift#210](openshift/cluster-image-registry-operator#210)
  * do not setup ownerrefs for clusterscoped/cross-namespace objects [openshift#215](openshift/cluster-image-registry-operator#215)
  * s3: include operator version in UserAgent for AWS API calls [openshift#212](openshift/cluster-image-registry-operator#212)
  * [Full changelog](openshift/cluster-image-registry-operator@0780074...8060048)

  ### [cluster-ingress-operator](https://github.com/openshift/cluster-ingress-operator)

  * Adds info log msg indicating ns/secret used by DNSManager [openshift#134](openshift/cluster-ingress-operator#134)
  * Introduce certificate controller [openshift#140](openshift/cluster-ingress-operator#140)
  * [Full changelog](openshift/cluster-ingress-operator@1b4fa5a...09d14db)

  ### [cluster-kube-apiserver-operator](https://github.com/openshift/cluster-kube-apiserver-operator)

  * bump(*): fix installer pod shutdown and rolebinding [openshift#307](openshift/cluster-kube-apiserver-operator#307)
  * bump to fix early status [openshift#309](openshift/cluster-kube-apiserver-operator#309)
  * [Full changelog](openshift/cluster-kube-apiserver-operator@4016927...fa75c05)

  ### [cluster-kube-controller-manager-operator](https://github.com/openshift/cluster-kube-controller-manager-operator)

  * bump(*): fix installer pod shutdown and rolebinding [openshift#183](openshift/cluster-kube-controller-manager-operator#183)
  * bump to fix empty status [openshift#184](openshift/cluster-kube-controller-manager-operator#184)
  * [Full changelog](openshift/cluster-kube-controller-manager-operator@95f5f32...53ff6d8)

  ### [cluster-kube-scheduler-operator](https://github.com/openshift/cluster-kube-scheduler-operator)

  * Rotate kubeconfig [openshift#62](openshift/cluster-kube-scheduler-operator#62)
  * Don't pass nil function pointer to NewConfigObserver [openshift#65](openshift/cluster-kube-scheduler-operator#65)
  * [Full changelog](openshift/cluster-kube-scheduler-operator@50848b4...7066c96)

  ### [cluster-monitoring-operator](https://github.com/openshift/cluster-monitoring-operator)

  * *: Clean test invocation and documenation [openshift#267](openshift/cluster-monitoring-operator#267)
  * pkg/operator: fix progressing state of cluster operator [openshift#268](openshift/cluster-monitoring-operator#268)
  * jsonnet/main.jsonnet: Bump Prometheus to v2.7.1 [openshift#246](openshift/cluster-monitoring-operator#246)
  * OWNERS: Remove ironcladlou [openshift#204](openshift/cluster-monitoring-operator#204)
  * test/e2e: Refactor framework setup & wait for query logic [openshift#265](openshift/cluster-monitoring-operator#265)
  * jsonnet: Update dependencies [openshift#269](openshift/cluster-monitoring-operator#269)
  * [Full changelog](openshift/cluster-monitoring-operator@94b701f...3609aea)

  ### [cluster-network-operator](https://github.com/openshift/cluster-network-operator)

  * Update to be able to track both DaemonSets and Deployments [openshift#102](openshift/cluster-network-operator#102)
  * openshift-sdn: more service-catalog netnamespace fixes [openshift#108](openshift/cluster-network-operator#108)
  * [Full changelog](openshift/cluster-network-operator@9db4d03...15204e6)

  ### [cluster-openshift-apiserver-operator](https://github.com/openshift/cluster-openshift-apiserver-operator)

  * bump to fix status reporting [openshift#157](openshift/cluster-openshift-apiserver-operator#157)
  * [Full changelog](openshift/cluster-openshift-apiserver-operator@1ce6ac7...0a65fe4)

  ### [cluster-samples-operator](https://github.com/openshift/cluster-samples-operator)

  * use pumped up rate limiter, shave 30 seconds from startup creates [openshift#113](openshift/cluster-samples-operator#113)
  * [Full changelog](openshift/cluster-samples-operator@4726068...f001324)

  ### [cluster-storage-operator](https://github.com/openshift/cluster-storage-operator)

  * WaitForFirstConsumer in AWS StorageClass [openshift#12](openshift/cluster-storage-operator#12)
  * [Full changelog](openshift/cluster-storage-operator@dc42489...b850242)

  ### [console](https://github.com/openshift/console)

  * Add back OAuth configuration link in kubeadmin notifier [openshift#1202](openshift/console#1202)
  * Normalize display of <ResourceIcon> across browsers, platforms [openshift#1210](openshift/console#1210)
  * Add margin spacing so event info doesn't run together before truncating [openshift#1170](openshift/console#1170)
  * [Full changelog](openshift/console@a0b75bc...d10fb8b)

  ### [docker-registry](https://github.com/openshift/image-registry)

  * Bump k8s and OpenShift, use new docker-distribution branch [openshift#165](openshift/image-registry#165)
  * [Full changelog](openshift/image-registry@75a1fbe...afcc7da)

  ### [installer](https://github.com/openshift/installer)

  * data: route53 A records with SimplePolicy should not use health check [openshift#1308](openshift#1308)
  * bootkube.sh: do not hide problems with render [openshift#1274](openshift#1274)
  * data/bootstrap/files/usr/local/bin/bootkube: etcdctl from release image [openshift#1315](openshift#1315)
  * pkg/types/validation: Drop v1beta1 backwards compat hack [openshift#1251](openshift#1251)
  * pkg/asset/tls: self-sign etcd-client-ca [openshift#1267](openshift#1267)
  * pkg/asset/tls: self-sign aggregator-ca [openshift#1275](openshift#1275)
  * pkg/types/validation/installconfig: Drop nominal v1beta2 support [openshift#1319](openshift#1319)
  * Removing unused/deprecated security groups and ports. Updated AWS doc [openshift#1306](openshift#1306)
  * [Full changelog](openshift/installer@0208204...563f71f)

  ### [jenkins, jenkins-agent-maven, jenkins-agent-nodejs](https://github.com/openshift/jenkins)

  * recover from jenkins deps backleveling workflow-durable-task-step fro… [openshift#806](openshift/jenkins#806)
  * [Full changelog](openshift/jenkins@2485f9a...e4583ca)

  ### [machine-api-operator](https://github.com/openshift/machine-api-operator)

  * Rename labels from sigs.k8s.io to machine.openshift.io [openshift#213](openshift/machine-api-operator#213)
  * Remove clusters.cluster.k8s.io CRD [openshift#225](openshift/machine-api-operator#225)
  * MAO: Stop setting statusProgressing=true when resyincing same version [openshift#217](openshift/machine-api-operator#217)
  * Generate clientset for machine health check API [openshift#223](openshift/machine-api-operator#223)
  * [Full changelog](openshift/machine-api-operator@bf95d7d...34c3424)

  ### [machine-config-controller, machine-config-daemon, machine-config-operator, machine-config-server, setup-etcd-environment](https://github.com/openshift/machine-config-operator)

  * daemon: Only print status if os == RHCOS [openshift#495](openshift/machine-config-operator#495)
  * Add pod image to image-references [openshift#500](openshift/machine-config-operator#500)
  * pkg/daemon: stash the node object [openshift#464](openshift/machine-config-operator#464)
  * Eliminate use of cpu limits [openshift#503](openshift/machine-config-operator#503)
  * MCD: add ign validation check for mc.ignconfig [openshift#481](openshift/machine-config-operator#481)
  * [Full changelog](openshift/machine-config-operator@875f25e...f0b87fc)

  ### [operator-lifecycle-manager](https://github.com/operator-framework/operator-lifecycle-manager)

  * fix(owners): remove cross-namespace and cluster->namespace ownerrefs [openshift#729](operator-framework/operator-lifecycle-manager#729)
  * [Full changelog](operator-framework/operator-lifecycle-manager@1ac9ace...9186781)

  ### [operator-marketplace](https://github.com/operator-framework/operator-marketplace)

  * [opsrc] Do not delete csc during purge [openshift#117](operator-framework/operator-marketplace#117)
  * Remove Dependency on Owner References [openshift#118](operator-framework/operator-marketplace#118)
  * [Full changelog](operator-framework/operator-marketplace@7b53305...fedd694)

[1]: openshift/origin#22030