ktl proposal #179

Closed
243 changes: 243 additions & 0 deletions docs/proposals/ktl.md
@@ -0,0 +1,243 @@
# ktl (kubectl2)

## Metadata

## Table of Contents

- [Title](#title)
- [Metadata](#metadata)
- [Table of Contents](#table-of-contents)
- [Summary](#summary)
- [Motivation](#motivation)
- [Reference-level explanation](#reference-level-explanation)
- [Graduation Criteria](#graduation-criteria)
- [Alternatives](#alternatives-optional)
- [Unresolved Questions](#unresolved-questions-optional)

## Summary

- Stop trying to wholesale move kubectl out of kubernetes/kubernetes
- we have been working on this for a year with limited success
- Instead, create a new cli command called `ktl`.

Contributor

Can you do a pitch here about why ktl or whatever we call it is better? This is mostly a "we have to refactor stuff because refactoring is good" proposal - adding a vision statement is really important to justify the cost we'll be taking on and adding to users.

Something like:

In order to evolve the command line experience for Kubernetes, and benefit from lessons learned about how to best support declarative configuration of applications AND easy extension of infrastructure primitives, we propose a new CLI tool that will be simpler, easier to work with and extend, drive improvements to core user actions, and still preserve existing user support via kubectl.


Comment

also justify the possible cost for users to switch to this new command line tool.

- `ktl` lives in its own repo and vendors commands developed in other repos (kubernetes/kubernetes, kubernetes/kubectl)
- `ktl` is built and released frequently
- vendored sub commands follow their own release cycles and are updated by re-vendoring their released code

## Motivation

The core Kubernetes cli is published as a single binary called `kubectl`.
`kubectl` is a statically linked go binary in the kubernetes/kubernetes
repo that contains many sub commands to perform operations against a Kubernetes cluster, such as:

- creating & updating objects defined by commandline args
- managing objects using configuration files
- debugging objects in a cluster

Benefits of this approach:

- having a single binary facilitates discovery for the suite of Kubernetes cli commands
- static binary makes it simple to distribute the cli across all OS platforms
- static binary makes it simple for users to install the cli
- single binary limits the size of the install (MB) vs distributing ~50 separate go commands (GB)
- centralizing the development facilitates the construction of shared libraries and infrastructure
- leveraging the kubernetes/kubernetes repo infrastructure provides a process to build and release

Challenges of this approach:

- the cli cannot be released at different intervals from the cluster itself
- slows down velocity
- slows down feedback on alpha and beta features - longer time to GA
- each release is heavier weight due to the length of the release cycle
- obfuscates version skew as a first class problem


Comment

basically we must have a different release cycle between cli and cluster itself.

- individual commands cannot be released at different intervals from one another
- some commands should be released every 3 months with the kubernetes cluster
- others could be released daily


Comment

does this mean that we will distribute ~50 separate go commands? It could be another disaster.

- shared cli infrastructure cannot easily be published for use by repos / commands outside kubernetes/kubernetes
- requires vendoring kubernetes/kubernetes which is painful

Contributor

This pain is so real. Went through this exercise recently, and gave up.

- cli cannot be owned and maintained independently from unrelated components
- submit / merge queue blocks on unrelated tests
- GitHub permissions apply to whole repo - cannot add collaborators or maintainers for just the cli code
- GitHub notifications for kubernetes/kubernetes are a firehose
- hard to manage PRs and issues because they are not scoped to CLI

To address these challenges, sig-cli maintainers have worked toward moving kubectl out of the main repo
for the past year. This process has been slow and challenging due to the following reasons:

- many kubectl commands depend on internal kubernetes/kubernetes libraries

Contributor

If you vendor current kubectl you'll end up having the same problem, still. It won't magically disappear just by vendoring. In a lot of cases you'll still have to handle that.


Comment

We should switch to vendoring separate Kubernetes repos, like kubernetes/client-go

- most commands should not have these dependencies, but removing them requires large rewrites of the commands
- *some* commands do actually need these dependencies (e.g. convert)
- continuing to develop in the kubernetes/kubernetes repo results in more *bad* dependencies being added
even as we try to remove old ones
- many commands depend on test infrastructure bespoke to kubernetes/kubernetes, which would need to be moved as well

Additionally, many kubectl commands are laden with technical debt, using anti-patterns for working with APIs
that do not work with version skew or support extensibility. We have since grown out of using these patterns,

Member

Names of some of those would be good, perhaps links to some issues...


Member Author

Added examples

but they are still pervasive. Frequently, it is much faster and more effective to rewrite large pieces
instead of trying to refactor them into different designs.

Some examples of anti-patterns:

- Relying on specific compiled in API types for generic operations (scale, rollback)

Contributor

scale will be generic in a month or so. kubernetes/kubernetes#56077 already merged and we're ready to start switching individual scalers over. If we're willing to crack a few eggs on the way through, it should land without too much trouble.


Comment

reconcile here kubernetes/kubernetes#56197 is also a painful one I think.


Member Author

Glad to hear it. Sounds like a good start.

- Using internal type definitions (printers, various other commands)
- Round tripping objects read from the server through go structures (some exceptions where this may be ok)

Contributor

This is usually ok since round tripping through an internal type back to external and computing a resultant patch is indistinguishable from using an external client. Any defaulting that happens results in a zero-diff patch for that area. The result of skewed version is no worse than a skewed version of an external client.


Contributor

Yeah, you're allowed to round trip for patch. You're not allowed to round trip for PUT. But no one should be allowed to PUT except replace.


## Goals for ktl

Things we need.

Keep the advantages of:

- easy distribution
- reasonably sized (~100MB)
- easy installation
- use common cli / client infrastructure
- discoverable commands

And also:

- allow the cli to be released independently from the cluster
- allow sub command groups within the cli to be released independently from one another
- allow a decentralized ecosystem of tools to leverage centrally maintained cli / client infrastructure
- facilitate end-to-end ownership of the cli, and in some cases sub command groups within the cli
- facilitate decentralized development of extensions for the cli

## Anti-goals

Things we want to avoid.

- block on moving existing kubectl commands out of the kubernetes/kubernetes repo
- rewrite kubectl from the ground up in a new repo

## Non-goals

Even if these are good ideas, don't let them distract us from meeting our goals with simpler solutions.

- build a solution for discovering installable plugins and installing them
- rely on existing package management solutions for this until we need something more
- invent new build and distribution infrastructure
- fix issues with the existing kubectl commands
- dog fooding the plugin mechanism for core commands

## Reference-level explanation

- build a new cli binary `ktl` (kubectl2) under the `kubernetes/ktl` repo that dispatches to commands developed in other repos
- keep old cli commands in `kubernetes/kubernetes/cmd/kubectl` and vendor them into `kubernetes/ktl`
- build new cli commands in `kubernetes/kubectl` and vendor them into `kurnetes/ktl`

Contributor

Why does this step exist? Seems strange to build new commands outside of the repo you plan to use them in.


Comment

typo kurnetes->kubernetes


Contributor

Yeah, enumerate why this helps limit scope, as opposed to folding kubectl into here.

- build common cli Kubernetes client infrastructure and libraries that can be used to develop a decentralized cli ecosystem

Member

You may want to clarify where the CLI infra code lives. I assume it is kubernetes/kubectl.


### ktl

In a new repo (`kubernetes/ktl`), create the `ktl` binary that dispatches (both statically and dynamically)
to commands developed in other repos.

#### Dispatch

Contributor

+1 on backing static/dynamic dispatch from day 1. It addresses a lot of concerns.


Static dispatch:

- vendor in kubernetes/kubernetes/cmd/kubectl cobra commands, and register them under `ktl kube`

Member

I like ktl, I hope people will stop discussing how to pronounce the main command; it reads naturally as ktl, or something like cattle or kettle.

Seems like commands are gonna get even longer, but I like ktl... have you thought of other options?
Perhaps we could consider restructuring some of the subcommands a little bit?

I know a lot of users who aliased kubectl as k, perhaps we should aim for making basic commands shorter, and not longer.

As ktl is kind of a short version of kubectl, it already implies kube, and many people will think of it that way, so `ktl kube` is a bit of a mouthful.
How about ktl core with a ktl c alias, or actually kctl <command> auto-mapped to kctl core <command> (by the way, it'd seem okay to disallow plugins from remapping top-level commands, but we can discuss that separately).


Member Author

I know a lot of users who aliased kubectl as k, perhaps we should aim for making basic commands shorter, and not longer.

As we rewrite commands, they will become shorter since they will be directly under ktl

, or actually kctl auto-mapped to kctl core

We could do that, but why not keep kubectl as the name if we are going to?


Contributor

If you were going to do this (I'm not weighing in on that one way or the other yet), why not be more judicious about which things are included. If you made kubectl match the dynamic plugin mechanism, you could avoid ever taking a dependency on k8s.io/kubernetes. Doing this appears to be in opposition to your previous statement about "many kubectl commands depend on internal kubernetes/kubernetes libraries". You'd still have that dependency, it would just be harder to see.


Member Author

Using the dynamic plugin mechanism complicates things like distribution and auto-update. See goals for more details.

This would quarantine the dependency without requiring a ground up rewrite. That is sufficient for our immediate goals.


Contributor

This would quarantine the dependency without requiring a ground up rewrite. That is sufficient for our immediate goals.

It usually doesn't result in that quarantine. See #179 (comment) . We've tried on very simple vendorings of client-go and ended up with intractable failures.


Member Author

Throwing it behind an internal package should be sufficient.


Member Author

The "glog does bad things to global flags package" is I think a stronger argument against vendoring.


Contributor

I can't see the benefit of this - every system that has ktl is going to have kubectl for various reasons. Distribute them together, have ktl use dynamic dispatch until such a time as you can break it up. I can't imagine someone who is going to have just ktl installed.

- vendor in kubernetes/kubectl/cmd cobra commands, and register them under `ktl`
- vendor in commands from other sources as needed over time
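
For illustration, here is a minimal sketch of the static dispatch described in the list above. The `legacyKubectlCommands` placeholder is an assumption standing in for whatever constructors the vendored kubectl code exposes; the proposal does not pin those down.

```go
// Minimal sketch of static dispatch: a ktl root command that mounts vendored
// legacy kubectl sub commands under `ktl kube`. legacyKubectlCommands is a
// stand-in for the vendored command tree from kubernetes/kubernetes/cmd/kubectl.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// legacyKubectlCommands stands in for the vendored kubectl commands.
func legacyKubectlCommands() []*cobra.Command {
	return []*cobra.Command{
		{
			Use:   "get",
			Short: "placeholder for the vendored `kubectl get`",
			Run: func(cmd *cobra.Command, args []string) {
				fmt.Println("would dispatch to the vendored implementation")
			},
		},
	}
}

func main() {
	root := &cobra.Command{
		Use:   "ktl",
		Short: "ktl dispatches to vendored and plugin-provided sub commands",
	}

	// Static dispatch: vendored legacy commands are grouped under `ktl kube`.
	kube := &cobra.Command{
		Use:   "kube",
		Short: "Legacy kubectl commands (vendored)",
	}
	kube.AddCommand(legacyKubectlCommands()...)
	root.AddCommand(kube)

	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}
```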

Dynamic dispatch:

- support git-style plugins that allow registering new commands under `ktl`

Contributor

Doesn't this produce the same land-rush concerns we were worried about before? ktl login (for example) is likely to be a hotspot of incompatible extensions.


Member Author

What simpler alternative do you suggest?


Contributor

What simpler alternative do you suggest?

Simplest case? domain name separation works well.


Member Author

Unix has managed to function without this

- use a simplified version of the kubectl plugin implementation - make configuration files optional
- plugins' only purpose is discovery of Kubernetes-related commands
- plugins can leverage shared client cli libraries (whether they are installed as plugins or not)
- by default throw an error if plugin names conflict with existing commands
- this is configurable
- plugins can be disabled through modifying `~/.ktlconfig`
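
A minimal sketch of the git-style dynamic dispatch described in the list above, assuming plugins are plain executables named `ktl-<subcommand>` on the PATH. The naming convention and fall-through behavior here are illustrative assumptions, not decisions made by this proposal.

```go
// Sketch of git-style plugin dispatch: run "ktl-<subcommand>" from the PATH.
// Error handling and the configurable conflict check against built-in
// commands are elided.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// dispatchPlugin runs "ktl-<name>" with the remaining args, wiring through
// stdio so plugins behave like native sub commands.
func dispatchPlugin(name string, args []string) error {
	path, err := exec.LookPath("ktl-" + name)
	if err != nil {
		return fmt.Errorf("unknown command %q: no built-in or plugin found", name)
	}
	cmd := exec.Command(path, args...)
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: ktl <command> [args...]")
		os.Exit(1)
	}
	// In the real binary this would only run after cobra failed to match a
	// built-in (or aliased) command; here we always fall through to plugins.
	if err := dispatchPlugin(os.Args[1], os.Args[2:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```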

Overriding existing commands:

- support command aliases to allow overriding one command with another
- allows plugins to extend and override built-in commands
- allows `ktl kube *` commands to be aliased to `ktl *`

#### Configuration

Use [viper](https://github.com/spf13/viper) to configure dispatch precedence
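
A sketch of what the viper-based configuration might look like, assuming a YAML-formatted `~/.ktlconfig` and illustrative keys such as `plugins.disabled`, `plugins.allowOverride`, and `aliases`; none of these names are defined by this proposal.

```go
// Hypothetical sketch: load ~/.ktlconfig with viper and use it to decide
// dispatch precedence. The configuration keys shown here are assumptions.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/spf13/viper"
)

func loadConfig() (*viper.Viper, error) {
	v := viper.New()
	// Default: plugins may not override built-in commands unless configured.
	v.SetDefault("plugins.allowOverride", false)

	home, err := os.UserHomeDir()
	if err != nil {
		return nil, err
	}
	cfgFile := filepath.Join(home, ".ktlconfig")
	if _, err := os.Stat(cfgFile); err == nil {
		v.SetConfigFile(cfgFile)
		v.SetConfigType("yaml") // the file has no extension, so declare its format
		if err := v.ReadInConfig(); err != nil {
			return nil, err
		}
	}
	return v, nil
}

func main() {
	v, err := loadConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled plugins:", v.GetStringSlice("plugins.disabled"))
	fmt.Println("aliases:", v.GetStringMapString("aliases"))
}
```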

### Cli commands

All new cli commands should be built outside of kubernetes/kubernetes and vendored into kubernetes/ktl. Command repos

Contributor

Can you explain the benefit of forcing a layer of vendoring? I see a distinction between developing re-useable CLI infrastructure (if that's a thing we think we want to do) and writing commands. By all means, vendor the infrastructure, but why vendor the commands too?


Member Author

  1. It keeps kubernetes/kubernetes outside of kubectl
  2. Each vendored set of commands can have a separate release cycle from the others


Contributor

It keeps kubernetes/kubernetes outside of kubectl
Each vendored set of commands can have a separate release cycle from the others

Picking winners and losers like this along with vendoring concerns seems contrary to goals of being generic and having a simple core. If we do this, we could instead focus on the simple core. We have a client (kubectl) that already works. We should be careful to avoid compromises in a new tool to have quick progress we'll later regret.


Member Author

In my experience rapid iteration and continual refactoring + adjustment over time typically will have better results than careful planning with slow iteration cycles.


Contributor

Note that by adding a new CLI tool ktl we are imposing an enormous cost on the kubernetes ecosystem and users. That's not free - far from it.


Member Author

Is this because of fragmentation?

implement an interface that returns a list of cobra (spf13) commands, which ktl can register under the root.
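
One possible shape for that interface, purely as an illustration; the proposal does not pin down the exact signature or package layout.

```go
// Hypothetical sketch of the provider interface a command repo could
// implement so ktl can register its commands under the root. The package
// name and interface are illustrative, not part of the proposal.
package ktlregistry

import "github.com/spf13/cobra"

// CommandProvider is implemented by each vendored command repo.
type CommandProvider interface {
	// Commands returns fully constructed cobra commands ready to be
	// attached to the ktl root command.
	Commands() []*cobra.Command
}

// Register attaches every provider's commands to the given root command.
func Register(root *cobra.Command, providers ...CommandProvider) {
	for _, p := range providers {
		root.AddCommand(p.Commands()...)
	}
}
```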

Initially, new cli commands can be built in kubernetes/kubectl, but if needed, development may be moved to other repos.

Contributor

We (openshift) have a significant amount of experience stitching together multiple repos through vendoring. The experience is sub-par (not a chance I'd do it if they didn't pay me). In addition, the state of golang as a language makes vendoring without stripping vendor trees difficult. The more repos you add, the greater the risk of them relying on conflicting levels of base libraries which makes them impractical to vendor.

The simplest example would probably be client-go. If you have two things you're vendoring that rely on client-go and they don't agree on level, you can't even pass through the basic client config types to both. Everyone in the entire tree moves together or not at all. Or ends up trying to write shims through core types like strings.


Member Author

Things are much simpler if you don't strip vendor trees.


Contributor

Things are much simpler if you don't strip vendor trees.

Eh, they seem so until you start trying to provide infrastructure down the layers. When you do that, you end up having to partially flatten since second order interfaces (interface with function returning another interface) don't resolve in golang. Even very simple vendor trees usually require partial flattening since naughty packages like glog use global flag or default http registration that results in panics at runtime.

Most projects I've seen strip-vendor.


Contributor

Also note that multiple vendor trees will multiply the size of the binary as well.


Member Author

Also note that multiple vendor trees will multiply the size of the binary as well.

It's not terrible though. If we are worried about download size, each full kubectl vendor only adds about 5MB. If we are worried about the install size, I believe it was only about 25-50MB.


Contributor

In general the only vendored packages that can safely be left vendored are those that depend only on stdlib. The odds of a package exposing a vendored dependency via its public API approach unity when more than 2-3 parent packages are imported. In practice this means that not stripping vendor trees means you can't invoke most of the public methods.

Examples - with etcdclient, if we didn't strip gRPC, we would be unable to add wrappers or transports or tweak gRPC, because the packages under the vendor are hidden from us. This is a massive PITA.

I don't think vendoring without stripping is practical except in carefully chosen cases, of which this is not one.


#### Conventions

- ktl top level commands are one of:
- command groups that work against a specific API group/versions (e.g. istio, svcat, kube), e.g. `create`

Contributor

allowing abbreviations or really almost anything other than domains seems fraught with collisions. Imagine auth, or apps, or rbac are all "obvious", but they are also collision prone.

In our experience kubectl create rolebinding is a non-obvious thing. Does it create an openshift one or a kube one? Why are some preferred?


Member Author

We won't have control over this for plugins.


Contributor

Also, making ktl create statically built will make life difficult as you attempt to vendor and reconcile different levels of core libraries.


Member Author

This should be simpler if we don't strip vendors


Contributor

This should be simpler if we don't strip vendors

Commented in other places in the thread. Not stripping vendors doesn't work if you want to provide any infrastructure and you want to be able to vendor more than one thing.

for a given resource.
- generic commands that are agnostic to specific APIs, but instead discover API metadata and work against
all APIs following published conventions. e.g. a `scale` that works against anything with a scale sub resource

Contributor

The core work for much of this should happen in client-go. See how kubernetes/kubernetes#56077 leverages the generic scale client from client-go.


Member Author

Look forward to seeing that.
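
To make the "generic commands" convention above concrete (e.g. a `scale` that works against anything with a scale subresource), here is a sketch that asks the discovery API which resources advertise a scale subresource. The kubeconfig bootstrap and discovery calls follow the usual client-go pattern, but the overall shape is an illustrative assumption, not something specified by this proposal.

```go
// Sketch: discover which resources expose the scale subresource instead of
// relying on compiled-in types. Assumes a kubeconfig at ~/.kube/config.
package main

import (
	"fmt"
	"path/filepath"
	"strings"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Subresources are listed by discovery as "<resource>/<subresource>",
	// e.g. "deployments/scale", so a generic `scale` command can target
	// anything that advertises one.
	lists, err := dc.ServerPreferredResources()
	if err != nil {
		panic(err)
	}
	for _, list := range lists {
		for _, r := range list.APIResources {
			if strings.HasSuffix(r.Name, "/scale") {
				fmt.Printf("%s %s supports scale\n",
					list.GroupVersion, strings.TrimSuffix(r.Name, "/scale"))
			}
		}
	}
}
```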



### Library infrastructure

Develop a shared set of client / cli libraries that handle registering flags / environment variables / config files
and provide common functionality to cli clients of Kubernetes.

- reading and parsing kubeconfig
- printing libraries

Contributor

I think we're less interested in the library and more interested in consistent flags and behavior. I'd really like to see this gotten "right" in kubectl before trying again. I'm not convinced that we've learned from prior mistakes. options, addflags, and toX provide consistency, flexibility, and ease to the generic apiserver. Seems like we could transform kubectl to use something like that and prove it works.


Member Author

I don't think we should block on perfecting something in kubectl. The iteration cycle will be much slower.


Contributor

I think the end result of this entire effort will be two CLIs - kubectl (slower) and ktl (faster, some improvements, some differences). We aren't going to kill kubectl except by having something materially better, so we'll continue to need to evolve kubectl slowly, which means duplication and split effort. I'm not as worried about the libraries in the scheme of things, although I think printers is something that's on an arc to be correct.

- reading and parsing config into objects from files, directories, stdin
- indexing + querying discovery API and openapi data
- manipulating unstructured objects using openapi schema
- merging objects

Contributor

please clarify what this is.

- generating patches from objects
- writing objects to the server - optimistic locking failure retries with exponential backoff

Contributor

This needs rethinking. If you're writing a client from scratch, you should assiduously avoid update. The issues with skew, dropped fields, and update are real. External clients provide no benefit or protection over internal types, so don't think you'll get away from it by using them.

Also, patch already retries for you server-side.


Member Author

Patch is broken. There are a number of fields it doesn't work for.


Contributor

Patch is broken. There are a number of fields it doesn't work for.

Focusing on that problem seems like it will bear more fruit than trying to resolve the skew issues that have plagued clients for many years with open, unstaffed proposals to fix them.


Member Author

There isn't a good way to fix it without breaking backward compatibility. :(


Contributor

We can easily introduce new versioned patch mime types to fix issues.

If we don't fix patch, then we're basically saying users will continue to be broken - that has to have a really good justification.


Member Author

The issues with skew, dropped fields, and update are real.

If I understand correctly, these issues are caused by serialization using go-structs. Using unstructured doesn't have this issue, and IIUC proto3 will soon also keep fields when skewed.

Also, patch already retries for you server-side.

This isn't sufficient in cases where the value being written is based off the value read. An example is incrementing a counter, which depends on the value of the counter that was read. Client-side retries on optimistic locking failures are a well understood pattern and we should support it.

Focusing on that problem seems like it will bear more fruit than trying to resolve the skew issues that have plagued clients for many years with open, unstaffed proposals to fix them.

What are the skew issues you are referring to besides the dropped fields? As I understand it, the problem with dropping fields is specific to the serialization, not the API itself.

If we don't fix patch, then we're basically saying users will continue to be broken - that has to have a really good justification.

Agreed. We should make a decision and justify that decision regardless of what it is. We haven't done a good job of supporting strategic merge PATCH, and we should be honest about whether we are going to do a better job in the future. It would be helpful if there was a document that describes the original justification for creating our own bespoke PATCH format and what other options were considered instead - do you know of one? In my experience it has created more problems than it has solved.

Regardless, the ownership for strategic merge PATCH should come out of api machinery. Fixing that API should not fall upon the clients.


Contributor

This isn't sufficient in cases where the value being written is based off the value read. An example is incrementing a counter depends on the value of the counter that was read. Client-side retries on optimistic locking failures are a well understood pattern and we should support it.

I'm still not seeing how that precludes using patch since you can include resourceVersion in those cases and still avoid dropped fields.


Member Author

2 other things to consider w.r.t. going all in on PATCH / strategic merge PATCH:

- My understanding is that not every resource type supports strategic merge patch
- PATCH only helps with dropping fields for compiled in types that are skewed, but does not help for either 1) extension types that aren't compiled in or 2) new types that aren't compiled into old clients (yet)


Member Author

I'm still not seeing how that precludes using patch since you can include resourceVersion in those cases and still avoid dropped fields.

That doesn't (though there are a number of other reasons I would give not to use PATCH), however it does preclude relying exclusively on server-side retries for optimistic locking failures. I believe that is what this discussion was originally about:

writing objects to the server - optimistic locking failure retries with exp backoff

- providing a tty for subresources that accept one - e.g. exec, attach

Contributor

I'd like to see these become polymorphic subresources as well. Accept an options struct of some kind (ExecOptions?) and you can execute against anything. Again, this benefit doesn't require tossing kubectl.


Member Author

What repo (developed outside kubernetes/kubernetes) would you suggest?


Contributor

What repo (developed outside kubernetes/kubernetes) would you suggest?

As we did with scale, client-go for clients works well.


Member Author

client-go is developed inside kubernetes/kubernetes. One of the goals is to develop more infrastructure outside of kubernetes/kubernetes.


Contributor

I don't follow the logic of why polymorphic sub resources (part of API conventions) and "developing infrastructure outside of k/k" have anything to do with each other. If we have interface like sub resources, someone has to spec them, and that spec is in k/k.

Splitting out things from k/k has to have a justified reason that pays for itself. Splitting is not the goal (if all the same people working on ktl are the people working on kubectl, we're just doing things worse).


Member Author

I think I may have missed @deads2k's original point. I agree these should become subresources. In fact I think they already are (path from swagger.json):

"/api/v1/namespaces/{namespace}/pods/{name}/exec"

My intention was to argue that we provide libraries to simplify invoking these subresources from the cli. I would rather build such tools outside of kubernetes/kubernetes because building inside kubernetes/kubernetes has a number of disadvantages - such as: the code is slow to release stable versions, the submit queue and release tests are a behemoth to get through, issue management is overwhelming, etc. Publishing from kubernetes/kubernetes -> kubernetes/client-go won't address these issues.

- defining shared exit codes (e.g. exit code when not doing a conditional write)
- test infrastructure for unit and integration tests
- support version skew testing

Contributor

nit: indentation
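
As a concrete sketch of the "optimistic locking failure retries with exponential backoff" bullet in the library list above: client-go already ships a helper of this shape (`retry.RetryOnConflict`); the self-contained version below assumes a caller-supplied conflict sentinel and update function, both of which are placeholders for illustration.

```go
// Self-contained sketch of retrying a write on optimistic locking (conflict)
// failures with exponential backoff.
package main

import (
	"errors"
	"fmt"
	"time"
)

// errConflict stands in for an HTTP 409 / resourceVersion conflict.
var errConflict = errors.New("conflict: object was modified")

// retryOnConflict re-runs update() after an exponentially growing delay
// whenever it fails with a conflict, up to maxRetries attempts.
func retryOnConflict(maxRetries int, initialDelay time.Duration, update func() error) error {
	delay := initialDelay
	var err error
	for i := 0; i < maxRetries; i++ {
		err = update()
		if err == nil || !errors.Is(err, errConflict) {
			return err // success, or a non-conflict error we should not retry
		}
		time.Sleep(delay)
		delay *= 2 // exponential backoff
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxRetries, err)
}

func main() {
	attempt := 0
	// The update closure would normally re-GET the object, re-apply the
	// desired change, and write it back so each retry uses a fresh
	// resourceVersion.
	err := retryOnConflict(5, 50*time.Millisecond, func() error {
		attempt++
		if attempt < 3 {
			return errConflict // simulate two conflicting writers
		}
		return nil
	})
	fmt.Println("attempts:", attempt, "err:", err)
}
```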


### Build and release infrastructure

Develop build triggers to automatically cut and publish builds based on the presence of GitHub tags. Aggregate
release notes from vendored commands.

- use GCP container builder + mirrored GCP source repo
- publish binary to gs:// bucket
- publish binary + release notes to GitHub releases page

Member

Publishing a container image would also be very handy; if we use GCP builder, it seems like a no-brainer.


Member Author

yes, agreed


## Graduation Criteria

None

## Alternatives [optional]

### Keeping the kubectl name instead of rebranding

Alternatively we could call the new command `kubectl` and attach all of the legacy kubectl commands
at the root level. This would make it look and feel exactly like `kubectl`, but allow for new pieces
to be built out of kubernetes/kubernetes.

Steps:

- rename `kubernetes/kubernetes/cmd/kubectl` to `kubernetes/kubernetes/cmd/legacykubectl`
- rename `kubernetes/kubernetes/pkg/kubectl` to `kubernetes/kubernetes/pkg/legacykubectl`
- create new `kubernetes/kubectl/cmd/kubectl` command
- vendor kubectl subcommands directly under `kubernetes/kubectl/cmd/kubectl` root command

Tradeoffs:

Keeping kubectl as the name: Need to make it continue to look and act like kubectl

- Minimal change from users' perspective
- Can easily swap with existing kubectl without updating docs, blog posts, etc

Renaming to ktl: Need to get everyone to use the new command

- Easier to phase out old commands with new ones while changing behavior
- Easier to restructure command layout
- Easier to redefine command wide things - such as version skew support


### Don't vendor in commands, make them plugins instead

Contributor

Vendoring in commands, especially when you want to vendor in more commands, gives you a big boost to initial velocity at the cost of bringing forward long term debt that may end up being insoluble. See the culmination of debt/conflict with docker/distribution and k8s.io/kubernetes as it bit openshift/origin. Jordan and I were only able to solve that by amputation of the afflicted code. I would strongly recommend against this.


Member Author

That is the eventual goal. We don't have the luxury of starting with a ground up rewrite.


Member Author

Keeping the bad code as a vendor helps this.


This would be a more complicated approach that can be considered in a later iteration.

## Unresolved Questions [optional]

- Should we rename / restructure the command, or keep kubectl and all the baggage that comes with it?

Contributor

The question is what CLI API guarantees must we preserve (which ones will users accept, and which will users reject)? Do we preserve the scripting semantics of get? The resource argument ordering? Places where the CLI invocation is broken are where we don't want to inherit; places where the cost to end users of switching is going to be very high are where we do.