Deprecate and remove kustomize from kubectl #4706
/sig cli
Hello @soltysh 👋, Enhancements team here. Just checking in as we approach enhancements freeze at 02:00 UTC Friday 14th June 2024 / 19:00 PDT Thursday 13th June 2024. This enhancement is targeting stage . Here's where this enhancement currently stands:
For this KEP, we would need to update the following:
The status of this enhancement is marked as . If you anticipate missing enhancements freeze, you can file an exception request in advance. Thank you!
Hello @soltysh 👋, 1.31 Enhancements team here. Now that PR #4712 has been merged, all the KEP requirements are in place and merged into k/enhancements, so this enhancement is all good for the upcoming enhancements freeze. 🚀 The status of this enhancement is marked as .
Hello @soltysh 👋, 1.31 Docs Lead here.
Hey @Princesso, we'll probably want to put together a blog post around the 1.31 release to better advertise this deprecation along with the future plan for removal, so that more users are aware of it. I'll follow up with the appropriate PRs.
I'm surprised and disappointed to see this proposed. I don't love the current state, but it's the commitment we made to users... we should not just break them without very very good reasons. I think the KEP enormously underestimates the impact. There are thousands of publicly visible uses and likely even more non-public uses. Dropping support / breaking those uses does reputational damage to kubernetes for being unstable in new versions. The justification / motivation in the KEP is vague
Without having read the KEP, just the headline... I'm pretty strongly in the "no, that would break users" camp. I'm all for throwing warnings - use colors and flashing terminal codes, heck - make it play the Star Wars alarm siren on the PC speaker if you can. As a recent victim of tools breaking underneath me, let's PLEASE take this seriously. It's just about the worst thing we can do to people. Look, I hate past me more than anyone, but I have to live with his idiotic, short-sighted decisions. https://youtu.be/EjR1Ht__9KE?si=8cymBHCdN-UbPx4U - FF to 12:22
Hi @soltysh, 👋 from the v1.31 Communications Team! We'd love for you to opt in to write a feature blog about your enhancement! To opt in, let us know and open a Feature Blog placeholder PR against the website repository by 3rd July, 2024. For more information about writing a blog see the blog contribution guidelines. Note: In your placeholder PR, use XX characters for the blog date in the front matter and file name. We will work with you on updating the PR with the publication date once we have a final number of feature blogs for this release.
@liggitt @thockin thanks for your valuable input; I think it's important that we start having these conversations. I'll probably open this topic again with sig-cli or even with sig-arch, so that we can discuss the potential path forward. Like I said when talking with Jordan on Slack, nothing is set in stone, but at the same time we shouldn't stay stuck in a place that we all seem to agree is not the best one.
Has it been considered to implement a compatibility mode after kubectl moves from the deprecation to the removal state? Such a compatibility mode could shell out to the kustomize binary and emulate what the kubectl-native kustomize integration was doing. This compatibility/emulation mode could be in a deprecation phase from day one while printing HUGE warning messages about what is happening. This way, kubectl could remove its compile-time dependency on kustomize and leave the compatibility mode in place for much longer. Of course, users would be required to install kustomize alongside kubectl, but that might be an acceptable tradeoff compared to all scripts breaking immediately.
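A minimal sketch of what such a compatibility shim might look like in Go (kubectl's implementation language). All names here are illustrative, not real kubectl internals; the only assumption is that a standalone `kustomize` binary may or may not be present on PATH.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// findDelegate reports the full path of an external binary the shim would
// delegate to, and whether it is installed at all.
func findDelegate(name string) (string, bool) {
	p, err := exec.LookPath(name)
	return p, err == nil
}

func main() {
	bin, ok := findDelegate("kustomize")
	if !ok {
		// The compatibility mode's job: fail with a loud, actionable message
		// instead of silently dropping `kubectl kustomize` / `kubectl apply -k`.
		fmt.Fprintln(os.Stderr, "WARNING: kubectl's built-in kustomize has been removed.")
		fmt.Fprintln(os.Stderr, "Install a standalone kustomize binary to keep existing scripts working.")
		os.Exit(1)
	}
	fmt.Fprintln(os.Stderr, "WARNING: built-in kustomize is deprecated; delegating to", bin)
	cmd := exec.Command(bin, os.Args[1:]...) // e.g. "build", "."
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```

One trade-off this makes visible: kubectl can no longer guarantee which kustomize version it delegates to, which is exactly the version-skew concern raised later in the thread.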
Hi! I'm currently subproject lead for kustomize. I have some comments on the
I feel we could make the kustomize release cycle match what Kubernetes is using. As I remember it, the current release cycle is irregular only because we never discussed whether a regular one was necessary.
I agree with @liggitt's opinion. As I remember it, I didn't notice any related issues, and I have tried to clean up dependencies whenever possible.
I completely agree with this point, so I can support the idea of removing kustomize from kubectl to improve the maintainability of both projects.
Hi @soltysh, by this comment, I am assuming that this enhancement does not need any updates to the Docs. Please correct me if I am wrong. If it does indeed need documentation updates, please follow the steps here to open a PR against the dev-1.31 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday June 27, 2024 18:00 PDT. NB: Doc updates are different from blog posts.
That is correct. |
Hey again @soltysh 👋, 1.31 Enhancements team here. Just checking in as we approach code freeze at 02:00 UTC Wednesday 24th July 2024 / 19:00 PDT Tuesday 23rd July 2024. Here's where this enhancement currently stands:
Regarding this enhancement, it appears that there are currently no pull requests in the k/k repository related to it. For this KEP, we would need to do the following:
If you anticipate missing code freeze, you can file an exception request in advance. The status of this enhancement is marked as .
Hey again @soltysh 👋, 1.31 Enhancements team here. Just a quick friendly reminder as we approach code freeze in around two weeks' time, at 02:00 UTC Wednesday 24th July 2024 / 19:00 PDT Tuesday 23rd July 2024. The current status of this enhancement is marked as . If you anticipate missing code freeze, you can file an exception request in advance.
Hi @soltysh I have a few ideas for users to avoid painful transitions. (cc @liggitt @thockin)
I have a concern: currently the main kustomize documentation site is part of the kubectl site.
This sounds like a more detailed description of a comment above #4706 (comment). It's an interesting idea, but if the user has to take additional action to install kustomize separately, they could also update their scripts to invoke kustomize directly. I think this is only a rather marginal improvement over forcing them to switch outright; it's still going to break a lot of automation etc. Also, it means you can no longer make assumptions about the kustomize + kubectl versions together. If we could go back, I think putting kustomize in kubectl may not have been the right move, but I also think we need really good reasons to break users, and I don't think we've made a terribly strong case here. The dependency issues were bad, but as a dependency approver in kubernetes/kubernetes I'm not seeing a big problem there now. Having a compat mode that enables kustomize means we're not any less "promoting one tool over another", so the remaining motivation about release-cycle alignment seems a bit thin. Kubernetes generally doesn't break end users for GA functionality, and when we do it hurts the entire ecosystem's reputation. We should be careful about this. At the very least, I think we should communicate a stronger case for why this is necessary.
Hey again @soltysh 👋, 1.31 Enhancements team here. Just a quick friendly reminder as we approach code freeze next week, at 02:00 UTC Wednesday 24th July 2024 / 19:00 PDT Tuesday 23rd July 2024. The current status of this enhancement is marked as . If you anticipate missing code freeze, you can file an exception request in advance.
I've had some more discussions; we'll be bringing this topic for discussion with sig-arch in the next weeks. Because of that, I'm dropping this from the 1.31 release. /milestone clear
@soltysh Thank you for confirming that this enhancement will be targeted for a future release. I will mark this as .
Dedicating time to removing Kustomize from kubectl seems like it will just hurt users. It strikes me as invoking change for the sake of change rather than being driven by solid underlying reasons.
Most of the time I'm apathetic about the version of Kustomize being invoked, and as an end user I simply don't need to care about the underlying version. If people were running into circumstances where disparate version increments of Kustomize were producing wildly different outputs, there would be more people complaining. In my experience, the expected behaviors and outputs of Kustomize between revisions are quite similar. Having Kustomize available OOTB with kubectl has been nice as a user. I don't have to tell my colleagues to go curl Kustomize because it's already there for them.
IMHO kubectl should either include kustomize and enable all its features (e.g. exec plugins) or not bundle it at all. Having a half-baked version of kustomize in kubectl quickly frustrates users and increases maintenance work in kubectl.
Maybe you are right. |
Have we decided what we want to do about this? There's a strong feeling to not break users, but maybe there's a middle ground, like:
I don't think freezing is viable. Dependencies we cannot update are unfixable vulnerabilities waiting to happen. I'd really like to know more about the motivations for removal and whether they are actually blocking us updating kustomize in kubectl occasionally. |
The text of the KEP (and the yaml) still says alpha in 1.31 - that didn't happen, and I don't suppose we're looking at this in 1.32 either?
I have done some experimenting, and it may be possible to remove the kustomize dependency from kubectl while maintaining the previous behavior if we do a binary embed. |
I am not sure that is better. It gives us all of the same security problems, with none of the visibility of having the code vendored. Also, as you cite, exec is not always available. Lots of OSes mount just about everything as noexec. Maybe /tmp is ok.
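A self-contained sketch of the embed approach and its failure mode. In a real build the kustomize binary would be compiled in with Go's `//go:embed`; here a tiny shell script stands in for it so the example runs anywhere. The mechanism, and the noexec caveat, is the same: write the embedded bytes to a temp file, mark it executable, and exec it.

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// Stand-in for bytes that would really come from a `//go:embed` directive.
var embeddedBinary = []byte("#!/bin/sh\necho embedded-kustomize \"$@\"\n")

// runEmbedded writes the embedded program to a temp file, marks it
// executable, and runs it -- the mechanism a binary embed would use.
// On systems where the temp directory is mounted noexec, Run fails
// with a permission error, which is the objection raised above.
func runEmbedded(args ...string) (string, error) {
	dir, err := os.MkdirTemp("", "kubectl-kustomize-")
	if err != nil {
		return "", err
	}
	defer os.RemoveAll(dir)

	path := filepath.Join(dir, "kustomize")
	if err := os.WriteFile(path, embeddedBinary, 0o755); err != nil {
		return "", err
	}

	var out bytes.Buffer
	cmd := exec.Command(path, args...)
	cmd.Stdout = &out
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return "", err // e.g. EACCES when the temp dir is noexec
	}
	return out.String(), nil
}

func main() {
	out, err := runEmbedded("build", ".")
	if err != nil {
		fmt.Fprintln(os.Stderr, "exec failed (noexec temp dir?):", err)
		os.Exit(1)
	}
	fmt.Print(out)
}
```

Compared with vendoring, this keeps the CVE surface (the embedded binary still ships in kubectl) while hiding the code from `go.mod`-based dependency scanning, which is the "same security problems, none of the visibility" point.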
Before we get into ever more creative ways of accomplishing the removal: there is no agreement on the KEP's motivations, i.e. on whether we should pursue removal at all. That seems important to settle first. The motivations in the KEP are:
That could have informed the decision to add kustomize to kubectl, but I don't think it should be motivation to break users once added.
This is not an issue for users currently using the embedded kustomize version successfully. Nothing stops a user impacted by an older embedded kustomize from obtaining and using a more up-to-date standalone kustomize binary. I don't see how removing the embedded one improves things for anyone here.
There's always opportunity to trim down and improve, but I haven't seen kustomize specifically causing more problems than other dependencies in recent years.
Hi, enhancements lead here - I inadvertently added this to the 1.32 tracking board 😀. Please re-add it if you wish to progress this enhancement in 1.32. /remove-label lead-opted-in
After a discussion in the sig-architecture meeting on Oct 17th (see recording and meeting notes), the decision was made not to pursue this topic further and to leave kustomize as is, as part of kubectl. I've updated the KEP to reflect the rejected status in #4984.
Enhancement Description
Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.