
Feature Request: Move ResourceGroup CRD from kpt to cli-utils #364

Open
ash2k opened this issue Jun 3, 2021 · 14 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@ash2k
Member

ash2k commented Jun 3, 2021

https://github.com/GoogleContainerTools/kpt/blob/master/docs/ROADMAP.md#4-live-apply says there are some issues with ConfigMap as an inventory object. Are they documented somewhere? I've found kptdev/kpt#708, but there is no explanation.

As a cli-utils user, I'd like to avoid any issues with ConfigMap and use the ResourceGroup CR. I wonder if it would be possible to make it part of cli-utils?

@mortent
Member

mortent commented Jun 8, 2021

@ash2k So the change from using a ConfigMap to a CRD to store the inventory object actually conflates two issues, only one of which is related to the use of a ConfigMap.

First, the challenges that are specific to a ConfigMap were:

  • In order to encode the GKNN (group, kind, namespace, name) information in the ConfigMap, we have to make assumptions about which characters are "legal" in the names of resources. We had an issue where it turned out that : is not allowed in resource names, except for certain RBAC resources, so those names could not be encoded as ConfigMap data keys: Allow colon in RBAC resource names kptdev/kpt#899. By storing the GKNN information in a structured format in a CRD, we avoid these kinds of issues.
  • For kpt, we may want to use the object containing the inventory for additional purposes, for example capturing the version of the kpt package that was installed. While we could encode this in a ConfigMap or add it as an annotation, it just seems cleaner to use a CRD.
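The key limitation can be sketched in a few lines. This is a hypothetical flat encoding, not the one cli-utils uses; the regex is the character set the API server allows for ConfigMap data keys:

```python
import re

# ConfigMap data keys must consist of alphanumerics, '-', '_' or '.';
# colons are not allowed.
CONFIGMAP_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def gknn_key(group: str, kind: str, namespace: str, name: str) -> str:
    """Hypothetical flat encoding of GKNN into a single ConfigMap data key."""
    return "_".join([group, kind, namespace, name])

# A typical Deployment name encodes fine:
ok = gknn_key("apps", "Deployment", "default", "nginx")
assert CONFIGMAP_KEY_RE.match(ok)

# An RBAC resource whose name contains a colon cannot become a valid key:
bad = gknn_key("rbac.authorization.k8s.io", "ClusterRole", "", "system:node")
assert CONFIGMAP_KEY_RE.match(bad) is None
```

A CRD sidesteps this entirely because each GKNN component lives in its own structured field rather than being squeezed into a key.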

Somewhat separately, the first implementation of the inventory created a new inventory object on every apply and then removed old ones when they were no longer needed. This led to some additional complexity, so the current algorithm just updates the inventory object in place. There are still some remnants of the original approach in the code, but we eventually want to get rid of them.

I will look into whether the ResourceGroup CRD can be made available somewhere. We do want to allow users to choose the resource type for storing the inventory, which is why the support for ResourceGroup is implemented in kpt rather than in cli-utils.

@ash2k
Member Author

ash2k commented Jun 16, 2021

@mortent Thanks for clarifying this. Hm, that means ConfigMap as-is is not fit for this purpose and... cannot be used? So the default option in this library is something that people may/will have problems with. Maybe we can change how the names are encoded: devise an escaping mechanism, always base64-encode the data, or store it as a JSON value under a single key in the ConfigMap? It's unfortunate that the default implementation has this shortcoming; I would expect it to "just work".
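The alternatives mentioned here can be sketched as follows (illustrative only; neither encoding is implemented by cli-utils):

```python
import base64
import json

# The inventory entries we want to store (one RBAC name with a colon):
objects = [
    {"group": "rbac.authorization.k8s.io", "kind": "ClusterRole",
     "namespace": "", "name": "system:node"},
]

# Option 1: store the whole list as a JSON value under one fixed key.
# ConfigMap *values* (unlike keys) may contain arbitrary characters.
configmap_data = {"inventory": json.dumps(objects)}

# Option 2: base64-encode each GKNN string so every key stays within the
# allowed key character set. The URL-safe alphabet uses '-' and '_', and
# the '=' padding (not a legal key character) is stripped.
def encoded_key(obj):
    raw = "|".join([obj["group"], obj["kind"], obj["namespace"], obj["name"]])
    return base64.urlsafe_b64encode(raw.encode()).decode().rstrip("=")

keys = [encoded_key(o) for o in objects]
```

Both variants keep problem characters out of the keys; the trade-off is that the stored data is no longer human-readable with a plain `kubectl get configmap -o yaml`.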

For kpt I agree a CRD makes sense and then it doesn't make sense to have ResourceGroup code in this repo. On the other hand it'd be awesome to standardize inventory objects so that different tools could interoperate and understand each other's inventory objects. From the user's perspective that would be very nice.

FYI @marshall007

@bgrant0607
Member

@ash2k How are you using cli-utils? Do you have specific interoperability use cases in mind?

cli-utils provides some building blocks for tool builders, as well as reference client implementations for certain API features, but aspirations for interoperability have decreased over time.

Projects like Helm (https://helm.sh/docs/topics/advanced/#storage-backends) and Kapp (https://carvel.dev/kapp/docs/latest/) have chosen to avoid dependencies on CRDs that require installation by end users; interoperability of granular mechanisms across tools has not been prioritized highly by tool developers; there has been little adoption of https://github.com/kubernetes-sigs/application; and virtually all projects that do use CRDs define their own group types.

Similar to Helm storage backends, configurable or pluggable inventory resource types are probably necessary for tools with different requirements, constraints, UX goals, etc. ConfigMap is a handy default because it's a built-in type. However, in addition to the other complications, simply intermingling with end-user ConfigMaps creates challenges for UX, authorization, scaling, naming, watching, garbage collection, and so on.

While an escaping or encoding feature may be necessary for ConfigMap, it could impair the experience for CRDs, so it would need to be optional.

@ash2k ash2k closed this as completed Jul 9, 2021
@ash2k ash2k reopened this Jul 9, 2021
@ash2k
Member Author

ash2k commented Jul 12, 2021

@bgrant0607

How are you using cli-utils?

We are using cli-utils to perform apply and pruning in the GitOps feature of the GitLab Kubernetes Agent.

Do you have specific interoperability use cases in mind?

We support the inventory object template being one of the objects in the repository. It would be nice if it were interoperable with kpt and other tools, so that if users need to apply objects from the repository manually, they could do it using the same object template. We would prefer not to come up with our own "standard", but to use something the community has standardized on. It looks like there is nothing at the moment.

I don't mind if it's a ConfigMap rather than a custom resource.
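For context, a ConfigMap-based inventory template of the kind described here might look like the sketch below. The name and namespace are illustrative; the label key is the inventory-id label cli-utils already uses:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: inventory-obj
  namespace: inventory-namespace
  labels:
    cli-utils.sigs.k8s.io/inventory-id: 87bb72f3-86cf-4096-a12c-67c8ac653679
```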

Regarding https://github.com/kubernetes-sigs/application: we are considering building a dashboard (or reusing an existing one if we find a suitable option) and thought it would make sense to support this specification. If there is no adoption, there is indeed no point in supporting it. A chicken-and-egg problem?

@marshall007

@bgrant0607 @ash2k I think it might be a mistake to conflate the discussion around how "inventory metadata" is persisted to the cluster (which may very well be tooling-specific) with the mechanism by which a user signals to the tooling that the set of resources they are applying belongs to the same "inventory".

This is different from something like the Application CRD because, just like the previous iteration using ConfigMap, the user does not expect that their on-disk representation of this resource is the same as what is ultimately applied to the cluster.

I think it makes sense for a user to be able to declare something like:

apiVersion: v1alpha1
kind: ResourceGroup
metadata:
  name: inventory-obj
  namespace: inventory-namespace
  labels:
    cli-utils.sigs.k8s.io/inventory-id: 87bb72f3-86cf-4096-a12c-67c8ac653679

... which arbitrary tooling can interface with as a signal of intent by the user regardless of how the tooling chooses to persist the inventory object to the cluster. That being said, I still agree with @ash2k that the SIG community should supply some standards for persisting the inventory object as well.


Another option would be to leverage the Application CRD, or a slightly re-imagined version of it that implements pruning logic on the client side instead of requiring a custom controller. If we are serious about it long term, the live apply commands are a great way to encourage adoption because they already require the user to specify some information about how to manage inventory.

I think users and tool developers would be much more incentivized to use an Application CRD that did not require an in-cluster component.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 11, 2021
@k8s-triage-robot

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 10, 2021
@karlkfi karlkfi changed the title Move ResourceGroup CRD here from kpt? Feature Request: Move ResourceGroup CRD from kpt to cli-utils Dec 2, 2021
@k8s-triage-robot

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

@karlkfi
Contributor

karlkfi commented Mar 25, 2022

/remove-lifecycle rotten

@karlkfi karlkfi reopened this Mar 25, 2022
@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 25, 2022
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 23, 2022
@k8s-triage-robot

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 23, 2022
@karlkfi
Contributor

karlkfi commented Jul 24, 2022

/remove-lifecycle rotten

Ugh. How do I turn off this useless bot?

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 24, 2022
@karlkfi
Contributor

karlkfi commented Jul 24, 2022

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jul 24, 2022