
Bump k8s to.1.14 #320

Merged



@ingvagabund ingvagabund commented Jun 2, 2019

  • update k8s to 1.14.0
  • sync github.com/openshift/cluster-api
  • sync github.com/openshift/cluster-api-actuator-pkg
  • sync sigs.k8s.io/controller-runtime
  • sync github.com/openshift/cluster-autoscaler-operator

@openshift-ci-robot openshift-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Jun 2, 2019
@ingvagabund (Member Author)

/test unit

1 similar comment
@ingvagabund (Member Author)

/test unit

@ingvagabund (Member Author)

/retest

@ingvagabund ingvagabund changed the title Dump k8s to.1.14 Bump k8s to.1.14 Jun 3, 2019
@ingvagabund (Member Author)

/retest

@ingvagabund (Member Author)

# kubectl get events --all-namespaces | grep ScaleDownEmpty
kube-system         7m          7m           1         cluster-autoscaler-status.15a4a354453deea1                                   ConfigMap                                                              Normal    ScaleDownEmpty                    cluster-autoscaler                                                                  Scale-down: removing empty node 20383dfc-f848-4a7c-afc7-fc2d5658b0fc
kube-system         7m          7m           1         cluster-autoscaler-status.15a4a35446db3d04                                   ConfigMap                                                              Normal    ScaleDownEmpty                    cluster-autoscaler                                                                  Scale-down: empty node 20383dfc-f848-4a7c-afc7-fc2d5658b0fc removed
kube-system         6m          6m           1         cluster-autoscaler-status.15a4a36254e5dff2                                   ConfigMap                                                              Normal    ScaleDownEmpty                    cluster-autoscaler                                                                  Scale-down: removing empty node 72c385e9-98d7-4d45-800a-57b4a6818588
kube-system         6m          6m           1         cluster-autoscaler-status.15a4a36255ddb2c2                                   ConfigMap                                                              Normal    ScaleDownEmpty                    cluster-autoscaler                                                                  Scale-down: empty node 72c385e9-98d7-4d45-800a-57b4a6818588 removed
kube-system         6m          6m           1         cluster-autoscaler-status.15a4a364b50a152b                                   ConfigMap                                                              Normal    ScaleDownEmpty                    cluster-autoscaler                                                                  Scale-down: removing empty node a98cbc14-2fea-47b0-bb02-1a28d44c2715
kube-system         6m          6m           1         cluster-autoscaler-status.15a4a364b5f17562                                   ConfigMap                                                              Normal    ScaleDownEmpty                    cluster-autoscaler                                                                  Scale-down: empty node a98cbc14-2fea-47b0-bb02-1a28d44c2715 removed

Yet the CI log says:

I0603 08:36:34.452395    4701 autoscaler.go:304] [10m0s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 08:36:37.452499    4701 autoscaler.go:304] [9m57s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 08:36:40.452828    4701 autoscaler.go:304] [9m54s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 08:36:43.453159    4701 autoscaler.go:304] [9m51s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 08:36:46.454097    4701 autoscaler.go:304] [9m48s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 08:36:49.454346    4701 autoscaler.go:304] [9m45s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 08:36:52.454657    4701 autoscaler.go:304] [9m42s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 08:36:53.706379    4701 autoscaler.go:250] cluster-autoscaler-status: Scale-down: removing empty node 72c385e9-98d7-4d45-800a-57b4a6818588
I0603 08:36:53.722351    4701 autoscaler.go:250] cluster-autoscaler-status: Scale-down: empty node 72c385e9-98d7-4d45-800a-57b4a6818588 removed
I0603 08:36:53.727509    4701 autoscaler.go:250] 72c385e9-98d7-4d45-800a-57b4a6818588: node removed by cluster autoscaler
I0603 08:36:55.454871    4701 autoscaler.go:304] [9m39s remaining] Waiting for cluster-autoscaler to generate 2 more "ScaleDownEmpty" events
I0603 08:36:58.455116    4701 autoscaler.go:304] [9m36s remaining] Waiting for cluster-autoscaler to generate 2 more "ScaleDownEmpty" events
I0603 08:37:01.455391    4701 autoscaler.go:304] [9m33s remaining] Waiting for cluster-autoscaler to generate 2 more "ScaleDownEmpty" events
I0603 08:37:03.910672    4701 autoscaler.go:250] cluster-autoscaler-status: Scale-down: removing empty node a98cbc14-2fea-47b0-bb02-1a28d44c2715
I0603 08:37:03.924774    4701 autoscaler.go:250] cluster-autoscaler-status: Scale-down: empty node a98cbc14-2fea-47b0-bb02-1a28d44c2715 removed
I0603 08:37:03.929334    4701 autoscaler.go:250] a98cbc14-2fea-47b0-bb02-1a28d44c2715: node removed by cluster autoscaler
I0603 08:37:04.455626    4701 autoscaler.go:304] [9m30s remaining] Waiting for cluster-autoscaler to generate 1 more "ScaleDownEmpty" events

The first event, Scale-down: removing empty node 20383dfc-f848-4a7c-afc7-fc2d5658b0fc, is not registered.
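One plausible explanation (an assumption, not confirmed by the logs above) is a race: the first ScaleDownEmpty event fires before the test's event watch starts, so the watch never sees it even though kubectl get events does. A common pattern to avoid that race is to list the events that already exist, then watch for new ones, deduplicating by event name. A minimal self-contained sketch of that counting logic, with hypothetical types standing in for the real client-go objects:

```go
package main

import "fmt"

// event is a hypothetical stand-in for the fields the test cares about.
type event struct {
	Name   string // unique per event, e.g. "cluster-autoscaler-status.15a4a354..."
	Reason string // e.g. "ScaleDownEmpty"
}

// countReason first counts matching events from an initial list (those
// emitted before the watch began), then from a stream of watched events,
// deduplicating by Name so an event delivered twice is counted once.
func countReason(initial []event, watched <-chan event, reason string) int {
	seen := map[string]bool{}
	count := 0
	for _, e := range initial {
		if e.Reason == reason && !seen[e.Name] {
			seen[e.Name] = true
			count++
		}
	}
	for e := range watched {
		if e.Reason == reason && !seen[e.Name] {
			seen[e.Name] = true
			count++
		}
	}
	return count
}

func main() {
	// An event that fired before the watch started is still counted,
	// because it comes in via the initial list rather than the stream.
	initial := []event{{Name: "a", Reason: "ScaleDownEmpty"}}
	ch := make(chan event, 2)
	ch <- event{Name: "b", Reason: "ScaleDownEmpty"}
	ch <- event{Name: "a", Reason: "ScaleDownEmpty"} // duplicate delivery, ignored
	close(ch)
	fmt.Println(countReason(initial, ch, "ScaleDownEmpty")) // prints 2
}
```

If the test only watches (or only polls events created after it starts), any event emitted in the gap is silently lost, which would match the off-by-one seen here.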

@ingvagabund (Member Author)

/test unit

@ingvagabund (Member Author)

I0603 12:22:42.647444    4835 autoscaler.go:250] cluster-autoscaler-status: Max total nodes in cluster reached
I0603 12:22:42.699360    4835 autoscaler.go:290] [51s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:22:45.699652    4835 autoscaler.go:290] [48s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:22:48.700200    4835 autoscaler.go:290] [45s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:22:51.700488    4835 autoscaler.go:290] [42s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:22:52.691123    4835 autoscaler.go:250] cluster-autoscaler-status: Scale-down: removing empty node c16691c7-9ac0-48a4-aa6a-2a339648d14b
I0603 12:22:52.706938    4835 autoscaler.go:250] cluster-autoscaler-status: Scale-down: empty node c16691c7-9ac0-48a4-aa6a-2a339648d14b removed
I0603 12:22:52.854363    4835 autoscaler.go:250] c16691c7-9ac0-48a4-aa6a-2a339648d14b: node removed by cluster autoscaler
I0603 12:22:54.701093    4835 autoscaler.go:290] [39s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:22:57.701376    4835 autoscaler.go:290] [36s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:23:00.701682    4835 autoscaler.go:290] [33s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:23:03.701922    4835 autoscaler.go:290] [30s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:23:06.702664    4835 autoscaler.go:290] [27s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:23:09.703643    4835 autoscaler.go:290] [24s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:23:12.703853    4835 autoscaler.go:290] [21s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:23:15.704117    4835 autoscaler.go:290] [18s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:23:18.704398    4835 autoscaler.go:290] [15s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:23:21.704672    4835 autoscaler.go:290] [12s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:23:24.704923    4835 autoscaler.go:290] [9s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:23:27.705141    4835 autoscaler.go:290] [6s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
I0603 12:23:30.705365    4835 autoscaler.go:290] [3s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 3, max=3
STEP: Deleting workload
I0603 12:23:33.715072    4835 autoscaler.go:304] [10m0s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 12:23:36.715269    4835 autoscaler.go:304] [9m57s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 12:23:39.716209    4835 autoscaler.go:304] [9m54s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 12:23:42.716454    4835 autoscaler.go:304] [9m51s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 12:23:45.716664    4835 autoscaler.go:304] [9m48s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 12:23:48.716869    4835 autoscaler.go:304] [9m45s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 12:23:51.717091    4835 autoscaler.go:304] [9m42s remaining] Waiting for cluster-autoscaler to generate 3 more "ScaleDownEmpty" events
I0603 12:23:53.251524    4835 autoscaler.go:250] cluster-autoscaler-status: Scale-down: removing empty node 51ce4d2f-2c44-4e76-b79a-5a5002937a37
I0603 12:23:53.257619    4835 autoscaler.go:250] cluster-autoscaler-status: Scale-down: removing empty node 1252e8f0-18ca-43e9-9c95-53f1f2df6f78
I0603 12:23:53.434784    4835 autoscaler.go:250] cluster-autoscaler-status: Scale-down: empty node 51ce4d2f-2c44-4e76-b79a-5a5002937a37 removed
I0603 12:23:53.833914    4835 autoscaler.go:250] cluster-autoscaler-status: Scale-down: empty node 1252e8f0-18ca-43e9-9c95-53f1f2df6f78 removed
I0603 12:23:54.234510    4835 autoscaler.go:250] 1252e8f0-18ca-43e9-9c95-53f1f2df6f78: node removed by cluster autoscaler
I0603 12:23:54.433530    4835 autoscaler.go:250] 51ce4d2f-2c44-4e76-b79a-5a5002937a37: node removed by cluster autoscaler
I0603 12:23:54.717338    4835 autoscaler.go:304] [9m39s remaining] Waiting for cluster-autoscaler to generate 1 more "ScaleDownEmpty" events
I0603 12:23:57.717601    4835 autoscaler.go:304] [9m36s remaining] Waiting for cluster-autoscaler to generate 1 more "ScaleDownEmpty" events

@ingvagabund (Member Author)

/test integration

@ingvagabund (Member Author)

/test unit

@ingvagabund (Member Author)

/retest

@enxebre (Member) commented Jun 5, 2019

/approve

@openshift-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: enxebre

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 5, 2019
@frobware (Contributor) commented Jun 5, 2019

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Jun 5, 2019
@openshift-merge-robot openshift-merge-robot merged commit dab4270 into openshift:master Jun 5, 2019
@ingvagabund ingvagabund deleted the dump-k8s-to.1.14 branch June 5, 2019 11:44