
Print additional columns for RuntimeClass CRD #72446

Merged
merged 1 commit into kubernetes:master from Huang-Wei:runtimclass-crd-print on Jan 5, 2019

Conversation

@Huang-Wei (Member) commented Dec 30, 2018

What type of PR is this?

/kind feature

What this PR does / why we need it:

Without this PR, a RuntimeClass object is printed with only the NAME and AGE columns:

$ k get runtimeclass
NAME         AGE
containerd   16h

After leveraging the additionalPrinterColumns feature of CRDs (available since 1.11), more meaningful columns can be printed:

$ k apply -f runtimeclass-crd.yaml
customresourcedefinition.apiextensions.k8s.io/runtimeclasses.node.k8s.io configured
$ k get runtimeclass
NAME         RUNTIME-HANDLER   AGE
containerd   runc              16h
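
For reference, here is a minimal sketch of what the relevant stanza in runtimeclass-crd.yaml could look like. It assumes the apiextensions.k8s.io/v1beta1 CRD schema in use at the time, and that the alpha RuntimeClass spec exposes the handler at .spec.runtimeHandler; neither detail is confirmed in this thread.

# Hypothetical excerpt from runtimeclass-crd.yaml (apiextensions.k8s.io/v1beta1).
# Setting additionalPrinterColumns replaces the default columns (except NAME),
# so an Age column is re-added explicitly to match the output above.
additionalPrinterColumns:
- name: Runtime-Handler                  # rendered as the RUNTIME-HANDLER header
  type: string
  description: The container runtime handler used by this RuntimeClass.
  JSONPath: .spec.runtimeHandler         # v1beta1 spells this field JSONPath
- name: Age
  type: date
  JSONPath: .metadata.creationTimestamp

(In the later apiextensions.k8s.io/v1 CRD API, the same field is spelled jsonPath and is declared per served version.)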

Does this PR introduce a user-facing change?:

RuntimeClass is now printed with an extra `RUNTIME-HANDLER` column.

/sig node
/assign @tallclair

@Huang-Wei (Member) commented Dec 30, 2018

/priority important-soon

@yastij (Member) approved these changes Dec 31, 2018

/lgtm

@k8s-ci-robot added the lgtm label Dec 31, 2018

@feiskyer (Member) left a comment

lgtm

@tallclair (Member) commented Jan 2, 2019

/lgtm

@tallclair (Member) commented Jan 2, 2019

/approve

@k8s-ci-robot (Contributor) commented Jan 2, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Huang-Wei, tallclair

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Huang-Wei (Member) commented Jan 3, 2019

/retest

@krmayankk (Contributor) left a comment

Nice! Since what version of Kubernetes has this feature been available?

@Huang-Wei (Member) commented Jan 3, 2019

/retest

@fejta-bot commented Jan 3, 2019

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel comment for consistent failures.

1 similar comment

@yastij (Member) commented Jan 3, 2019

/retest

@fejta-bot commented Jan 3, 2019

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel comment for consistent failures.

@fejta-bot commented Jan 3, 2019

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel comment for consistent failures.

1 similar comment

@yastij (Member) commented Jan 3, 2019

/lgtm cancel

Until the gke e2e job is fixed.

@k8s-ci-robot removed the lgtm label Jan 3, 2019

@Huang-Wei (Member) commented Jan 5, 2019

/test pull-kubernetes-e2e-gke

@Huang-Wei (Member) commented Jan 5, 2019

It's weird that the change results in a GKE e2e failure. I looked through the log and it says:

W0105 02:57:43.126]  detail: u'All cluster resources were brought up, but the cluster API is reporting that: only 0 nodes out of 4 have registered; cluster may be unhealthy.'
...
W0105 02:57:43.128]  zone: u'us-central1-f'>] finished with error: All cluster resources were brought up, but the cluster API is reporting that: only 0 nodes out of 4 have registered; cluster may be unhealthy.

And the specific error says:

W0105 03:15:01.281] 2019/01/05 03:15:01 main.go:313: Something went wrong: starting e2e cluster: error creating cluster: error during gcloud container clusters create --quiet --addons=HttpLoadBalancing,HorizontalPodAutoscaling,KubernetesDashboard --project=gke-up-g1-3-c1-5-up-clu-n --zone=us-central1-f --machine-type=n1-standard-2 --image-type=gci --num-nodes=4 --network=e2e-3828-218b6 --cluster-version=1.14.0-alpha.0.1437+43fb55c486e080-pull-kubernetes-e2e-gke e2e-3828-218b6: exit status 1
W0105 03:15:01.285] Traceback (most recent call last):
W0105 03:15:01.285]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 758, in <module>
W0105 03:15:01.372]     main(parse_args())
W0105 03:15:01.373]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 609, in main
W0105 03:15:01.373]     mode.start(runner_args)
W0105 03:15:01.373]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0105 03:15:01.373]     check_env(env, self.command, *args)
W0105 03:15:01.373]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0105 03:15:01.373]     subprocess.check_call(cmd, env=env)
W0105 03:15:01.373]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0105 03:15:01.403]     raise CalledProcessError(retcode, cmd)
W0105 03:15:01.404] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-dev/ci', '--up', '--down', '--test', '--deployment=gke', '--provider=gke', '--cluster=e2e-3828-218b6', '--gcp-network=e2e-3828-218b6', '--extract=local', '--gcp-cloud-sdk=gs://cloud-sdk-testing/ci/staging', '--gcp-node-image=gci', '--gcp-zone=us-central1-f', '--ginkgo-parallel=30', '--gke-create-command=container clusters create --quiet --addons=HttpLoadBalancing,HorizontalPodAutoscaling,KubernetesDashboard', '--gke-environment=test', '--gke-shape={"default":{"Nodes":4,"MachineType":"n1-standard-2"}}', '--stage-suffix=pull-kubernetes-e2e-gke', '--test_args=--ginkgo.flakeAttempts=2 --ginkgo.skip=\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\] --minStartupPods=8', '--timeout=65m')' returned non-zero exit status 1
E0105 03:15:01.413] Command failed
I0105 03:15:01.413] process 520 exited with code 1 after 57.5m

@kubernetes/sig-testing Could you lend a hand in diagnosing this? Thanks in advance.

/sig testing

@yastij (Member) commented Jan 5, 2019

@Huang-Wei - the gke e2e job itself was broken; see its runs across other PRs.

@yastij (Member) commented Jan 5, 2019

/retest

@Huang-Wei (Member) commented Jan 5, 2019

@yastij Thanks! Could you re-apply the /lgtm label?

@yastij (Member) commented Jan 5, 2019

/lgtm

@k8s-ci-robot added the lgtm label Jan 5, 2019

@k8s-ci-robot merged commit 815acf7 into kubernetes:master Jan 5, 2019

19 checks passed

cla/linuxfoundation: Huang-Wei authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gke: Job succeeded.
pull-kubernetes-e2e-kops-aws: Job succeeded.
pull-kubernetes-e2e-kubeadm-gce: Skipped
pull-kubernetes-godeps: Skipped
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped
pull-kubernetes-local-e2e-containerized: Skipped
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
tide: In merge pool.

@Huang-Wei deleted the Huang-Wei:runtimclass-crd-print branch Jan 6, 2019
