[request] backport #56820 from master to 1.9 #58220

Closed
leifmadsen opened this Issue Jan 12, 2018 · 10 comments

leifmadsen commented Jan 12, 2018

/kind bug

What happened: Artifacts created by Bazel on k8s 1.9 result in a Go coredump when attempting to install via kubeadm.

What you expected to happen: To be able to use the artifacts created with Bazel on k8s 1.9.

How to reproduce it (as minimally and precisely as possible): Use planter to build k8s 1.9. Load the resulting .tar container image files into Docker via docker load -i, retag with the -amd64 suffix, and run kubeadm init. On 1.9, Go coredumps; with master, things are fine. A rough sketch of these steps is shown below.
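A minimal shell sketch of the reproduction steps, assuming illustrative Bazel targets, image tarball paths, image names, and tags (the exact values depend on the planter checkout and which components are being loaded):

    # Build k8s 1.9 with Bazel inside planter (target is illustrative)
    ./planter.sh bazel build //build/...

    # Load a generated container image tarball into the local Docker daemon
    # (repeat for each component; the path shown is illustrative)
    docker load -i bazel-bin/build/kube-apiserver.tar

    # Retag with the -amd64 suffix that kubeadm looks for (image name and tag illustrative)
    docker tag gcr.io/google-containers/kube-apiserver:v1.9.1 \
               gcr.io/google-containers/kube-apiserver-amd64:v1.9.1

    # Bring up the cluster; with 1.9-built artifacts this is where Go coredumps
    kubeadm init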

Anything else we need to know?: I've tested against master, and the changes in PR #56820 and related PRs result in usable artifacts.

Environment: CentOS 7.3 x86_64

  • Kubernetes version (use kubectl version): 1.10-alpha
  • Cloud provider or hardware configuration: x86_64 VM on KVM
  • OS (e.g. from /etc/os-release): CentOS 7.3
  • Kernel (e.g. uname -a): 3.10.0-514.el7.x86_64
  • Install tools: kubeadm
  • Others: Planter, bazel
leifmadsen commented Jan 12, 2018

/cc @BenTheElder
/cc @ixdy

liggitt (Member) commented Jan 12, 2018

that seems like a significant change for a maintenance branch... have those changes been given enough soak time?

ixdy (Member) commented Jan 24, 2018

@liggitt two points:

  1. Those changes have been running in the master branch for close to a month now
  2. AFAIK nothing official is using the bazel artifacts currently

So it seems like it would probably be safe to do?
Also the build stuff in 1.9 hasn't drifted too far from master yet.

liggitt (Member) commented Jan 24, 2018

> AFAIK nothing official is using the bazel artifacts currently

ah, I thought that's how the build scripts were building

BenTheElder (Member) commented Jan 24, 2018

fejta (Contributor) commented Jan 28, 2018

@kubernetes/sig-release-bugs

We really ought to fix this if we can.

k8s-ci-robot added sig/release and removed needs-sig labels Jan 28, 2018

fejta-bot commented Apr 28, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

leifmadsen commented Apr 30, 2018

I don't know... should we bother with this since 1.10 is released?

fejta-bot commented May 31, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

leifmadsen commented May 31, 2018

Closing, as a backport to 1.9 is no longer necessary (a release containing the fixes is available).

leifmadsen closed this May 31, 2018
