
Kustomize Packaging #15

Closed
stealthybox opened this issue Jun 14, 2019 · 10 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@stealthybox
Contributor

Referenced in #14, we would like installers to be able to load kustomize packages.
We also expect that addon-operators can benefit from this ComponentConfig work and packaging.

This implies improvements for packaging and distributing these kustomize bases and overlays.

Git support with tags/refs and nested directories already appears to be built into kustomize, which is a great starting point.
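As a quick hypothetical illustration of that starting point (the repository URL, subdirectory, and tag below are made up), a kustomization can already point at a nested directory of a git repo pinned to a ref:

```shell
# Hypothetical example: write a kustomization.yaml whose resource is a
# nested directory inside a git repo, pinned to a tag via ?ref=
# (the repository URL and tag are illustrative only).
cat > kustomization.yaml <<'EOF'
resources:
  - https://github.com/example/addons//cluster-dns/base?ref=v1.2.0
EOF
grep 'ref=' kustomization.yaml
```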

Some good things to work on:

  • OCI image package format
  • Cluster Bundle based package
    • no dependencies other than CRDs in the k8s API
    • can use typed refs to load related bundles inside the cluster from installer ComponentConfig (interesting MIME-type questions for the URI here)
@jzelinskie

+1 to using ORAS. I'm an OCI maintainer and am trying to ensure that there is a cohesive strategy for how everyone stores non-container artifacts before we cut a stable OCI-distribution release.

@stealthybox
Contributor Author

stealthybox commented Jul 31, 2019

@jzelinskie do you have an opinion on using https://github.com/containers/skopeo + https://github.com/openSUSE/umoci as libraries to pull and unpack?

I was able to get something working with the command-line tools and a canonical file layout.
The creation UX was just to docker build a Dockerfile so that the resulting kustomize layer(s) ended up in an image under /addon/
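For reference, the pull-and-unpack side of that experiment looks roughly like this with the command-line tools (the registry name, image name, and the /addon layout are illustrative, not real artifacts):

```shell
# Copy the image from a registry into a local OCI layout...
skopeo copy docker://registry.example.com/addons/cluster-dns:v1.2.0 oci:cluster-dns:v1.2.0
# ...then unpack that layout into a bundle on disk.
umoci unpack --rootless --image cluster-dns:v1.2.0 bundle
# The kustomize bases/overlays end up under the unpacked rootfs:
ls bundle/rootfs/addon/
```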

@jzelinskie

It depends on what you'd like to accomplish. If you simply want to store YAML inside container layers and pull it back out, you could use skopeo/umoci. If you want to actually differentiate between regular container images and kustomize YAML at the registry level, then you want to use ORAS and configure a custom MIME type. ORAS internally uses libraries from containerd; they are flexible enough to configure the MIME type, whereas I'm not sure skopeo or umoci have a public API that allows that level of configuration.
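A minimal sketch of the ORAS route using the oras CLI, assuming a registry at registry.example.com and a hypothetical (unregistered) media type for kustomize YAML:

```shell
# Push a kustomization with a custom media type
# (the registry name and media type below are made up):
oras push registry.example.com/addons/cluster-dns:v1.2.0 \
  ./kustomization.yaml:application/vnd.example.kustomize.config.v1+yaml
# Pull it back out:
oras pull registry.example.com/addons/cluster-dns:v1.2.0
```

The custom media type is what would let a registry (or a client) distinguish these artifacts from ordinary container images.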

@stealthybox
Contributor Author

If we use a different MIME type, we'll probably need to use something BuildKit-based to assemble the OCI image instead of using canonical folders in a Dockerfile. This is definitely more formal.
One thing I found interesting about using the docker layers was that it provided another means of extending a package (since you can overlay the files).

umoci appears to only have UX for operating on files, but you can unpack and repack the image and make raw edits:

  # add a single file into the image at a given path
  umoci insert --image oci:foo mybinary /usr/bin/mybinary
  # add a whole directory
  umoci insert --image oci:foo myconfigdir /etc/myconfigdir
  # replace a path entirely, masking lower-layer contents
  umoci insert --image oci:foo --opaque myoptdir /opt
  # delete a path via a whiteout entry
  umoci insert --image oci:foo --whiteout /some/old/dir

I'm not sure about modifying the mime-types when using it as a library /cc @cyphar

WRT implementation:
What feels most appropriate, in my opinion, is for the unpack functionality (whether it's ORAS-based or uses skopeo/umoci/something else) to be integrated into the kustomize ref parser with URIs, with execution occurring within the kustomize libraries (as opposed to us first unpacking and passing the resulting directory into kustomize).
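To make that concrete, the hypothetical end-state UX (none of this is implemented) would be that kustomize's ref parser recognizes an OCI URI scheme and performs the unpack in-process:

```shell
# Hypothetical UX sketch only; no such scheme exists in kustomize today.
kustomize build oci://registry.example.com/addons/cluster-dns:v1.2.0
```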

@stealthybox
Contributor Author

IIRC, Helm 3 is using ORAS with dedicated MIME types, so that's a consideration for parity.
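For reference on that parity point, Helm 3's experimental OCI support (behind a feature flag as of Helm 3.0) stores charts with dedicated media types via ORAS under the hood; the registry and chart names below are illustrative:

```shell
# Enable Helm 3's experimental OCI registry support:
export HELM_EXPERIMENTAL_OCI=1
# Save a chart directory into the local registry cache, then push it:
helm chart save ./mychart registry.example.com/charts/mychart:0.1.0
helm chart push registry.example.com/charts/mychart:0.1.0
```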

@stealthybox
Contributor Author

Some initial POC work on this is posted:
https://github.com/ecordell/kpg

Thanks for getting started @ecordell

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Nov 3, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Dec 3, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
