Snap packaging support. #293
Conversation
Force-pushed from b349631 to c24039e
Thank you for this patch! Adding snap packages to Kubernetes is just great! I'll take a deeper look into this after KubeCon, but I think we should only build packages for kubelet, kubeadm, and kubectl. We also need a package for the CNI binaries. There should also be snap packages for all available architectures (see the deb process for more info on how to do that easily).
cc @marcoceppi
This is necessary for kube-controller-manager to provision volumes with `rbd`.
Took a super-quick pass, and I'm asking myself why there are snaps for apiserver/controller-manager/scheduler/kube-proxy. I don't think those should be debs; the preferred way to run those is in containers.
Re multiarch: you can just download all the binaries for all arches from the CI builds -- please look at the deb creation for the specific URLs to use.
No need to compile anything by hand 👍
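The download-instead-of-compile approach suggested above could be sketched as follows. Note the assumptions: the dl.k8s.io URL pattern shown is the public one for tagged releases, not the CI-build URLs the comment refers to (those live in the deb build scripts), and the function name and architecture list are illustrative.

```shell
#!/bin/sh
# Sketch: enumerate download URLs for prebuilt kubectl binaries per
# architecture, rather than cross-compiling by hand. The URL pattern is
# the public release one; CI builds use different URLs (see deb scripts).
list_binary_urls() {
    # $1: kubernetes version tag, e.g. v1.11.3
    for arch in amd64 arm64 ppc64le s390x; do
        echo "https://dl.k8s.io/release/$1/bin/linux/$arch/kubectl"
    done
}

# Print the URLs; each could be piped to curl/wget to fetch the binary.
list_binary_urls v1.11.3
```

Each URL would then be fetched into a per-arch staging directory before the snap build runs.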
Not everyone runs those as containers; many on-premise users and cloud users in the field run those on the "metal" as services. There's no harm in including them - it's an option.
Assigning to @marcoceppi at the request of @castrojo.
This is a known issue going forward: we only have AMD64 binaries in the snap channels today, and we have additional incoming feedback requesting PPC64EL. Linking these two issues together so we can continue to track them as the status evolves: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/268
@marcoceppi It is not; it's here as a PR against this PR branch: juju-solutions#5 It's currently blocked by the issues listed there. I think you indicated you were going to think about how to fix those; I could be wrong.
* support multiple platforms
* allow multiple architectures with a single invocation of make
What's blocking this from being merged, anything?
@tvansteenburgh @marcoceppi Last I touched this, we hadn't quite finished the cross-platform snap support, which is what I think was blocking this.
@wwwtyro Can you summarize what's left to do?
@tvansteenburgh I think we're mostly blocked by this and need it fixed or a workaround found: https://bugs.launchpad.net/snapcraft/+bug/1686481
In order to pass env vars to the daemons in a generic fashion, this sources the /root/cdk/kube.env file if it exists.
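A minimal sketch of that sourcing pattern follows. The /root/cdk/kube.env path is the one named in the commit; the wrapper function, the `KUBELET_ARGS` variable, and the daemon invocation are illustrative assumptions, not the actual wrapper code.

```shell
#!/bin/sh
# Sketch: source a shared env file (if it exists) before launching a
# daemon, so operators can inject flags generically. A missing file is
# a silent no-op, so the wrapper works on hosts without the file.
source_kube_env() {
    # $1: optional override of the env file path (default from the commit)
    env_file="${1:-/root/cdk/kube.env}"
    if [ -f "$env_file" ]; then
        # shellcheck disable=SC1090
        . "$env_file"
    fi
}

# A real snap daemon wrapper would then do something like (hypothetical):
#   source_kube_env
#   exec "$SNAP/kubelet" $KUBELET_ARGS
```

Because the file is sourced rather than parsed, operators can set any variable the daemon wrapper consumes without the packaging needing to know the variable names in advance.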
We need to be able to build snaps with a suffix. These are identical to our normal snap builds, but allow us to have a variation in the store, like 'kubectl-foo'. This makes it possible to keep variants on different versions without having a bunch of channels muddying up the primary 'kubectl' snap.
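The suffix behavior described could look like this in a build script. The helper function and the snapcraft invocation in the comment are hypothetical; only the 'kubectl-foo' naming scheme comes from the text above.

```shell
#!/bin/sh
# Sketch: compose a store name from a base component and an optional
# suffix, e.g. kubectl + foo -> kubectl-foo. An empty suffix yields the
# plain name, so the same script builds both variants.
snap_name() {
    # $1: base name (e.g. kubectl); $2: optional suffix (e.g. foo)
    if [ -n "$2" ]; then
        echo "$1-$2"
    else
        echo "$1"
    fi
}

# Hypothetical usage in a build loop:
#   build_snap "$(snap_name kubectl "$SNAP_SUFFIX")"
```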
…-controller-manager (#25)
@luxas is this still relevant?
Not for me personally, at least. I think we're moving towards containers for the control plane and debs/rpms for kubelet, kubeadm, and kubectl, built automatically with Bazel.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: wwwtyro. If they are not already assigned, you can assign the PR to them by writing. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/hold
Where/how snaps should be built and hosted is probably a larger discussion than a single PR, but thanks a lot for the work!
This makes sure snaps can still be built on bionic hosts. Signed-off-by: Adam Stokes <battlemidget@users.noreply.github.com>
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
* Fix docker builds: updates to a recent stable container for building snaps via docker.
* Include file in container, link magic db.

Signed-off-by: Adam Stokes <battlemidget@users.noreply.github.com>
Signed-off-by: Adam Stokes <battlemidget@users.noreply.github.com>
@battlemidget Looks like somewhere along the way we lost the ability to pass KUBE_VERSION and KUBE_ARCH through to the build script. It just always builds the latest stable Kubernetes (currently v1.11.3). This PR should fix it. I also moved the docker-build.sh script into build-scripts/ to tidy up a bit, hope that's cool.
Run with:

    cd release/snap
    KUBE_ARCH=amd64 KUBE_VERSION=v1.11.3 ./build-scripts/docker-build kubectl

Signed-off-by: Adam Stokes <battlemidget@users.noreply.github.com>
* Add support for ppc64, arm64, s390x
* Add make to aarch64 dockerfile

Signed-off-by: Adam Stokes <battlemidget@users.noreply.github.com>
/close
@dims: Closing this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
docs: note security responsibility in release lead role
This PR adds support for snap packages, fixing #44, for the following Kubernetes components:
These follow the same process as the deb and RPM packaging by including the appropriate component from the k8s.io build.