[e2e failure] [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive #56877

Closed
spiffxp opened this Issue Dec 6, 2017 · 3 comments

spiffxp commented Dec 6, 2017

/priority critical-urgent
/priority failing-test
/kind bug
/status approved-for-milestone
@kubernetes/sig-ui-test-failures

This test has been failing since 2017-12-01 in the following jobs:

These jobs are on the sig-release-master-upgrade dashboard, and prevent us from cutting v1.9.0-beta.2 (kubernetes/sig-release#39). Is there work ongoing to bring this test back to green?

I don't have a triage link handy right now because triage hasn't been updating for a while.

spiffxp commented Dec 6, 2017

Also guessing that #56793 might resolve this, since we're running 1.8.x tests.

k8s-merge-robot commented Dec 7, 2017
[MILESTONENOTIFIER] Milestone Issue Needs Attention

@spiffxp @kubernetes/sig-ui-misc

Action required: During code freeze, issues in the milestone should be in progress.
If this issue is not being actively worked on, please remove it from the milestone.
If it is being worked on, please add the status/in-progress label so it can be tracked with other in-flight issues.

Action Required: This issue has not been updated since Dec 6. Please provide an update.

Note: This issue is marked as priority/critical-urgent, and must be updated every 1 day during code freeze.

Example update:

ACK. In progress
ETA: DD/MM/YYYY
Risks: Complicated fix required

Issue Labels
  • sig/ui: Issue will be escalated to these SIGs if needed.
  • priority/critical-urgent: Never automatically move issue out of a release milestone; continually escalate to contributor and SIG through all available channels.
  • kind/bug: Fixes a bug discovered during the current release.
spiffxp commented Dec 8, 2017

/close
Yeah, that did the trick; all jobs that had this failure no longer have it.
