
Deprecate and remove kustomize from kubectl #4706

Closed
soltysh opened this issue Jun 7, 2024 · 32 comments
Assignees
Labels
sig/cli Categorizes an issue or PR as relevant to SIG CLI. stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team

Comments

@soltysh
Contributor

soltysh commented Jun 7, 2024

Enhancement Description

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 7, 2024
@soltysh
Contributor Author

soltysh commented Jun 7, 2024

/sig cli
/milestone v1.31
/stage alpha
/label lead-opted-in

@k8s-ci-robot k8s-ci-robot added the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label Jun 7, 2024
@k8s-ci-robot k8s-ci-robot added this to the v1.31 milestone Jun 7, 2024
@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jun 7, 2024
@github-project-automation github-project-automation bot moved this to Needs Triage in SIG CLI Jun 7, 2024
@k8s-ci-robot k8s-ci-robot added the lead-opted-in Denotes that an issue has been opted in to a release label Jun 7, 2024
@soltysh soltysh self-assigned this Jun 7, 2024
@dipesh-rawat
Member

Hello @soltysh 👋, Enhancements team here.

Just checking in as we approach enhancements freeze on 02:00 UTC Friday 14th June 2024 / 19:00 PDT Thursday 13th June 2024.

This enhancement is targeting stage alpha for 1.31 (correct me if otherwise).

Here's where this enhancement currently stands:

  • KEP readme using the latest template has been merged into the k/enhancements repo.
  • KEP status is marked as implementable for latest-milestone: { CURRENT_RELEASE }. KEPs targeting stable will need to be marked as implemented after code PRs are merged and the feature gates are removed.
  • KEP readme has up-to-date graduation criteria
  • KEP has a production readiness review that has been completed and merged into k/enhancements. (For more information on the PRR process, check here). If your production readiness review is not completed yet, please make sure to fill the production readiness questionnaire in your KEP by the PRR Freeze deadline so that the PRR team has enough time to review your KEP.

For this KEP, we would need to update the following:

  • Create the KEP readme using the latest template and merge it in the k/enhancements repo.
  • Ensure that the KEP has undergone a production readiness review and has been merged into k/enhancements.

The status of this enhancement is marked as at risk for enhancement freeze. Please keep the issue description up-to-date with appropriate stages as well. Thank you!

If you anticipate missing enhancements freeze, you can file an exception request in advance. Thank you!

@dipesh-rawat
Member

Hello @soltysh 👋, 1.31 Enhancements team here.

Now that PR #4712 has been merged and all the KEP requirements are in place in k/enhancements, this enhancement is all set for the upcoming enhancements freeze. 🚀

The status of this enhancement is marked as tracked for enhancement freeze. Please keep the issue description up-to-date with appropriate stages as well. Thank you!

@dipesh-rawat dipesh-rawat moved this from At Risk for Enhancements Freeze to Tracked for Enhancements Freeze in 1.31 Enhancements Tracking Jun 11, 2024
@Princesso

Hello @soltysh 👋, 1.31 Docs Lead here.
Does this enhancement work planned for 1.31 require any new docs or modification to existing docs?
If so, please follow the steps here to open a PR against dev-1.31 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday June 27, 2024 18:00 PDT.
Also, take a look at Documenting for a release to get yourself familiarised with the docs requirement for the release.
Thank you!

@soltysh
Contributor Author

soltysh commented Jun 13, 2024

Hey @Princesso, we'll probably want to put together a blog post around the 1.31 release to better advertise this deprecation, along with the future plan for removal, so that more users are aware of it. I'll follow up with appropriate PRs.

@liggitt
Member

liggitt commented Jun 13, 2024

I'm surprised and disappointed to see this proposed. I don't love the current state, but it's the commitment we made to users... we should not just break them without very very good reasons.

I think the KEP enormously underestimates the impact. There are thousands of publicly visible uses and likely even more non-public uses. Dropping support / breaking those uses does reputational damage to kubernetes for being unstable in new versions.

The justification / motivation in the KEP is vague

  • "risking users of kubectl to work with outdated version of kustomize" isn't an issue for users whose existing manifests are working properly
  • "some of the kustomize dependencies has already been problematic to the core kubernetes project" needs more details ... I wasn't aware of ongoing problematic dependencies here

@thockin
Member

thockin commented Jun 14, 2024

Without having read the KEP, just the headline....I'm pretty strongly in the "no, that would break users" camp.

I'm all for throwing warnings - use colors and flashing terminal codes, heck - make it play the Star Wars alarm siren on the PC speaker if you can. As a recent victim of tools breaking underneath me, let's PLEASE take this seriously. It's just about the worst thing we can do to people. Look, I hate past me more than anyone, but I have to live with his idiotic, short-sighted decisions.

https://youtu.be/EjR1Ht__9KE?si=8cymBHCdN-UbPx4U - FF to 12:22

@a-mccarthy

Hi @soltysh,

👋 from the v1.31 Communications Team! We'd love for you to opt in to write a feature blog about your enhancement!
Some reasons why you might want to write a blog for this feature include (but are not limited to) if this introduces breaking changes, is important to our users, or has been in progress for a long time and is graduating.

To opt in, let us know and open a Feature Blog placeholder PR against the website repository by 3rd July, 2024. For more information about writing a blog see the blog contribution guidelines.

Note: In your placeholder PR, use XX characters for the blog date in the front matter and file name. We will work with you on updating the PR with the publication date once we have a final number of feature blogs for this release.

@soltysh
Contributor Author

soltysh commented Jun 18, 2024

@a-mccarthy opened kubernetes/website#46868

@soltysh
Contributor Author

soltysh commented Jun 18, 2024

@liggitt @thockin thanks for your valuable input; I think it's important that we start having those conversations. I'll probably open this topic again with sig-cli or even with sig-arch, so that we can discuss the potential path forward. Like I said when talking with Jordan on Slack, nothing is set in stone, but at the same time we shouldn't stay stuck in a place that we all seem to agree is not the best one.

@codablock

Has a compatibility mode been considered for after kubectl moves from the deprecation stage to the removal stage? Such a compatibility mode could shell out to the kustomize binary and emulate what the native kustomize integration in kubectl was doing. This compatibility/emulation mode could be in a deprecation phase from day one, printing HUGE warning messages about what is happening.

This way, kubectl could remove its compile-time dependency on kustomize and keep the compatibility mode around for much longer. Of course, users would be required to install kustomize alongside kubectl, but that might be an acceptable tradeoff compared to all scripts breaking immediately.

@koba1t
Member

koba1t commented Jun 19, 2024

Hi! I'm currently subproject lead for kustomize.

I have some comments on the Motivation section of this proposal, and I am writing here because the proposal was merged before I could review it.
Could you consider updating it?

The current kubernetes release cycle doesn't match that of kustomize, oftentimes risking users of kubectl to work with outdated version of kustomize.

I feel we could make the kustomize release cycle match the one Kubernetes uses. As I recall, the current release cycle is irregular simply because we never discussed whether alignment was necessary.
I think we need to add more details about the technical or non-technical problems you see, and about why changing the release cycle was made a non-goal.

"some of the kustomize dependencies has already been problematic to the core kubernetes project" needs more details ... I wasn't aware of ongoing problematic dependencies here

I agree with @liggitt's opinion. As I recall, I haven't noticed any related issues, and I have tried to clean up dependencies whenever possible.
Were you referring to kustomize's dependencies listed in the unwanted-dependencies file in k/k?

current kubectl maintainers feel that promoting one tool over the other should not be the role of the project.

I completely agree with your opinion.
Currently, there are many manifest management tools like helm, kpt, cdk8s, cue, jsonnet, and hundreds more.

So I can agree with the idea of removing kustomize from kubectl to improve the maintainability of both projects.
But I think it would be better to update the proposal and clearly explain its motivation before writing the blog post.

@sreeram-venkitesh sreeram-venkitesh added the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Jun 24, 2024
@Princesso

Hey @Princesso, we'll probably want to put together a blog post around the 1.31 release to better advertise this deprecation, along with the future plan for removal, so that more users are aware of it. I'll follow up with appropriate PRs.

Hi @soltysh, by this comment, I am assuming that this enhancement does not need any updates to the Docs. Please correct me if I am wrong.

If it does indeed need documentation updates, please follow the steps here to open a PR against dev-1.31 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday June 27, 2024 18:00 PDT.
Also, take a look at Documenting for a release to get yourself familiarised with the docs requirement for the release.
Thank you!

NB: Doc updates are different from blog posts.

@soltysh
Contributor Author

soltysh commented Jun 24, 2024

Hi @soltysh, by this comment, I am assuming that this enhancement does not need any updates to the Docs. Please correct me if I am wrong.

That is correct.

@dipesh-rawat
Member

Hey again @soltysh 👋, 1.31 Enhancements team here,

Just checking in as we approach code freeze at 02:00 UTC Wednesday 24th July 2024 / 19:00 PDT Tuesday 23rd July 2024.

Here's where this enhancement currently stands:

  • All PRs to the Kubernetes repo that are related to your enhancement are linked in the above issue description (for tracking purposes).
  • All PR/s are ready to be merged (they have approved and lgtm labels applied) by the code freeze deadline. This includes tests.

Regarding this enhancement, it appears that there are currently no pull requests in the k/k repository related to it.

For this KEP, we would need to do the following:

  • Ensure all PRs to the Kubernetes repo related to your enhancement are linked in the above issue description (for tracking purposes).
  • Ensure all PRs are prepared for merging (they have approved and lgtm labels applied) by the code freeze deadline. This includes tests.

If you anticipate missing code freeze, you can file an exception request in advance.

The status of this enhancement is marked as at risk for code freeze.

@dipesh-rawat dipesh-rawat moved this from Tracked for Enhancements Freeze to At Risk for Code Freeze in 1.31 Enhancements Tracking Jul 2, 2024
@dipesh-rawat
Member

Hey again @soltysh 👋, 1.31 Enhancements team here,

Just a quick friendly reminder as we approach code freeze in around two weeks time, at 02:00 UTC Wednesday 24th July 2024 / 19:00 PDT Tuesday 23rd July 2024.

The current status of this enhancement is marked as at risk for code freeze. There are a few requirements mentioned in the comment #4706 (comment) that still need to be completed.

If you anticipate missing code freeze, you can file an exception request in advance.

@koba1t
Member

koba1t commented Jul 10, 2024

Hi @soltysh

I have a few ideas for users to avoid painful transitions. (cc @liggitt @thockin)

  • During the transition, kubectl kustomize could exec a kubectl-kustomize binary if one is installed (e.g. via krew).
    • Currently, kubectl kustomize runs the kustomize built into kubectl. If we provide a way to use an external kustomize binary from kubectl, users would be able to choose when to move from the built-in kustomize to an external one.
  • After the transition, the kubectl apply -k option could still remain as an alias for kustomize build . | kubectl apply -f -, showing an error message telling users to install the kustomize binary if it is not found.
    • This requires users to install the kustomize binary themselves, but we could remove the kustomize code from kubectl without deprecating such a well-known option.

I have a concern: currently the main kustomize documentation lives as part of the kubectl documentation site.
Do you have any plans to remove the kustomize documentation from that site?

@BenTheElder
Member

During the transition, kubectl kustomize could exec a kubectl-kustomize binary if one is installed (e.g. via krew).
Currently, kubectl kustomize runs the kustomize built into kubectl. If we provide a way to use an external kustomize binary from kubectl, users would be able to choose when to move from the built-in kustomize to an external one.
After the transition, the kubectl apply -k option could still remain as an alias for kustomize build . | kubectl apply -f -, showing an error message telling users to install the kustomize binary if it is not found.
This requires users to install the kustomize binary themselves, but we could remove the kustomize code from kubectl without deprecating such a well-known option.

This sounds like a more detailed description of a comment above #4706 (comment)

It's an interesting idea, but ... if the user has to take additional action to install kustomize separately, they could also update their scripts to invoke kustomize directly. I think this is only a rather marginal improvement, in terms of not breaking people, versus forcing them to switch outright.

It's still going to break a lot of automation etc.

Also, this now means that you can't make assumptions about kustomize + kubectl version together.

I think if we could go back, that putting kustomize in kubectl may not have been the right move, but I also think we need really good reasons to break users and I don't think we've made a terribly strong case here. The dependency issues were bad, but as a dependency approver in kubernetes/kubernetes I'm not seeing a big problem there now ...

Having a compat mode that enables kustomize means we're not any less "promoting one tool over another", so the remaining motivation about release-cycle alignment seems a bit thin.

Kubernetes generally doesn't break end-users for GA functionality and when we do it hurts the entire ecosystem's reputation. We should be careful about this. At the very least I think we should communicate a stronger case for why this is necessary.

@dipesh-rawat
Member

Hey again @soltysh 👋, 1.31 Enhancements team here,

Just a quick friendly reminder as we approach code freeze next week, at 02:00 UTC Wednesday 24th July 2024 / 19:00 PDT Tuesday 23rd July 2024.

The current status of this enhancement is marked as at risk for code freeze. There are a few requirements mentioned in the comment #4706 (comment) that still need to be completed.

If you anticipate missing code freeze, you can file an exception request in advance.

@soltysh
Contributor Author

soltysh commented Jul 17, 2024

I've had some more discussions; we'll be bringing this topic up for discussion with sig-arch in the coming weeks. Due to that, I'm dropping this from the 1.31 release.

/milestone clear

@k8s-ci-robot k8s-ci-robot removed this from the v1.31 milestone Jul 17, 2024
@dipesh-rawat
Member

@soltysh Thank you for confirming that this enhancement will be targeted for a future release. I will mark this as Deferred on the tracking board for the current v1.31 release.

@dipesh-rawat dipesh-rawat moved this from At Risk for Code Freeze to Deferred in 1.31 Enhancements Tracking Jul 17, 2024
@cnorling

cnorling commented Aug 2, 2024

Dedicating time to removing Kustomize from kubectl seems like it will just hurt users. It strikes me as invoking change for the sake of change rather than being driven by solid underlying reasons.

The current kubernetes release cycle doesn't match that of kustomize, oftentimes risking users of kubectl to work with outdated version of kustomize.

Most of the time I'm apathetic about the version of Kustomize being invoked, and as an end user I simply don't need to care about the underlying version. If people were running into circumstances where different versions of Kustomize produced wildly different outputs, there would be more people complaining. In my experience, the behaviors and outputs of Kustomize between revisions are quite similar.

Having Kustomize available OOTB with kubectl has been nice as a user. I don't have to tell my colleagues to go curl Kustomize because it's already there for them.

@sebhoss

sebhoss commented Aug 3, 2024

IMHO kubectl should either include kustomize and enable all its features (e.g. exec plugins) or not bundle it at all. Having a half-baked version of kustomize in kubectl quickly frustrates users and increases maintenance work in kubectl.

@koba1t
Member

koba1t commented Aug 7, 2024

It's an interesting idea, but ... if the user has to take additional action to install kustomize separately, they could also update their scripts to invoke kustomize directly. I think this is only a rather marginal improvement, in terms of not breaking people, versus forcing them to switch outright.

Maybe you are right.
I think in some cases users don't install kubectl directly; a platform team sets it up for them, e.g. in a CI environment or on a step server that runs kubectl against production.

@thockin
Member

thockin commented Sep 9, 2024

Have we decided what we want to do about this? There's a strong feeling to not break users, but maybe there's a middle ground, like:

  • freeze kustomize-in-kubectl
  • issue a warning when it is used, telling users they are better off using kustomize directly
  • maybe have kubectl exec kustomize if a magic env var is specified (but I am not sure about this one)?

@liggitt
Member

liggitt commented Sep 9, 2024

I don't think freezing is viable. Dependencies we cannot update are unfixable vulnerabilities waiting to happen. I'd really like to know more about the motivations for removal and whether they are actually blocking us updating kustomize in kubectl occasionally.

@thockin
Member

thockin commented Sep 9, 2024

The text of the KEP (and yaml) still says alpha in 31 - That didn't happen, and I don't suppose we're looking at this in 32 either?

@thockin thockin moved this to Pre-alpha in KEPs I am tracking Sep 9, 2024
@koba1t
Member

koba1t commented Sep 10, 2024

I have done some experimenting, and it may be possible to remove the kustomize dependency from kubectl while maintaining the previous behavior if we embed the binary.
In detail: bundle the pre-built kustomize binary for the matching os/arch into kubectl via 'go embed', copy the binary to a temporary directory (like /tmp) when kustomize needs to run, and execute it with 'exec.Command' or similar.
However, this may not be acceptable due to the security risks of executing an extracted binary, the maintainability of the build process, or platform-specific characteristics.
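As a rough, self-contained sketch of that experiment: in the real version the payload would be a pre-built kustomize binary for the matching GOOS/GOARCH included via a `//go:embed kustomize` directive; here a tiny shell script stands in for it (an assumption, so the example actually runs, which also assumes a POSIX `/bin/sh`).

```go
package main

// Sketch of the binary-embed idea: carry a payload inside the kubectl
// binary, write it to a temporary directory at runtime, and run it with
// exec.Command. The real experiment would embed a pre-built kustomize
// binary via `//go:embed kustomize`; this placeholder shell script is a
// stand-in so the sketch is runnable.

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

// Real version: //go:embed kustomize  +  var payload []byte
var payload = []byte("#!/bin/sh\necho embedded-kustomize-ran\n")

// runEmbedded extracts the payload to a temp dir, executes it with the
// given args, and returns its combined output.
func runEmbedded(args ...string) (string, error) {
	dir, err := os.MkdirTemp("", "kubectl-kustomize-")
	if err != nil {
		return "", err
	}
	defer os.RemoveAll(dir)

	bin := filepath.Join(dir, "kustomize")
	// 0o700: only the current user can read or execute the extracted file.
	if err := os.WriteFile(bin, payload, 0o700); err != nil {
		return "", err
	}
	out, err := exec.Command(bin, args...).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runEmbedded("build", ".")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out) // prints "embedded-kustomize-ran"
}
```

Extracting to a fresh temporary directory with 0o700 permissions limits exposure somewhat, but as noted above, shipping and re-executing an embedded binary still raises security, build-maintenance, and platform questions.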

@thockin
Member

thockin commented Sep 10, 2024 via email

@liggitt
Member

liggitt commented Sep 10, 2024

Before we get into ever more creative ways of accomplishing the removal, there's not agreement on the motivations of the KEP that we even should pursue removal. That seems important to settle first.

The motivations in the KEP are:

  • [...] current kubectl maintainers feel that promoting one tool over the other should not be the role of the project

That could have informed the decision to add kustomize to kubectl, but I don't think it should be motivation to break users once added.

  • The current kubernetes release cycle doesn't match that of kustomize, oftentimes risking users of kubectl to work with outdated version of kustomize.

This is not an issue for users currently using the embedded kustomize version successfully. Nothing stops a user impacted by an older embedded kustomize from obtaining and using a more up-to-date standalone kustomize binary. I don't see how removing the embedded one improves things for anyone here.

  • Lastly, some of the kustomize dependencies has already been problematic to the core kubernetes project, so removing kustomize will allow us to minimize the dependency graph and the size of kubectl binary.

There's always opportunity to trim down and improve, but I haven't seen kustomize specifically causing more problems than other dependencies in recent years.

@tjons
Contributor

tjons commented Sep 16, 2024

Hi, enhancements lead here - I inadvertently added this to the 1.32 tracking board 😀. Please re-add it if you wish to progress this enhancement in 1.32.

/remove-label lead-opted-in

@k8s-ci-robot k8s-ci-robot removed the lead-opted-in Denotes that an issue has been opted in to a release label Sep 16, 2024
@soltysh
Contributor Author

soltysh commented Nov 28, 2024

After a discussion in the sig-architecture meeting on Oct 17th (see recording and meeting notes), the decision was made not to pursue this topic further and to leave kustomize as is, part of kubectl. I've updated the KEP to reflect the rejected status in #4984

@soltysh soltysh moved this from Needs Triage to Closed in SIG CLI Nov 28, 2024
@soltysh soltysh closed this as completed Nov 28, 2024