
ktl proposal #179

Closed. pwittrock wants to merge 5 commits into master from pwittrock:kubedesign.
Conversation

pwittrock (Member) commented Dec 14, 2017

No description provided.

k8s-ci-robot (Contributor) commented Dec 14, 2017

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pwittrock

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

pwittrock force-pushed the pwittrock:kubedesign branch from 240332d to 41910d5 on Dec 14, 2017

droot (Collaborator) left a comment

Thanks for putting it together.
One quick note: I didn't see the "using right version of kubectl depending on the targeted cluster" part in the proposal, or maybe I missed it. It was part of the PoC code you shared (1.7 vs 1.8 switching of kubectl).

- static binary makes it simple for users to install the cli
- single binary limits the size of the install (MB) vs distributing ~50 separate go commands (GB)
- centralizing the development facilitates the construction of shared libraries and infrastructure
- leveraging the kubernetes/kubenretes repo infrastructure provides a process to build and release

droot (Collaborator) Dec 14, 2017

typo: kubenretes -> kubernetes

- some commands should be released every 3 months with the kubernetes cluster
- others could be released daily
- shared cli infrastructure cannot easily be published to be used by repos / commands outside kubernetes/kubernetes
- requires vendoring kubernetes/kubernetes which is painful

droot (Collaborator) Dec 14, 2017

This pain is so real. Went through this exercise recently, and gave up.

In a new repo (`kubernetes/ktl`), create the `ktl` binary that dispatches (both statically and dynamically)
to commands developed in other repos.

#### Dispatch

droot (Collaborator) Dec 14, 2017

+1 on backing static/dynamic dispatch from day 1. It addresses a lot of concerns.
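A minimal sketch of what the static side of that dispatch could look like with cobra; the vendored package path and the `NewDefaultKubectlCommand` constructor are assumptions about the kubectl code of that era, not the proposal's actual layout:

```go
package main

import (
	"os"

	"github.com/spf13/cobra"
	// Assumed vendored path for the kubectl command tree; illustrative only.
	kubectlcmd "k8s.io/kubernetes/pkg/kubectl/cmd"
)

func main() {
	root := &cobra.Command{Use: "ktl", Short: "Kubernetes CLI dispatcher"}

	// Static dispatch: mount the vendored kubectl command tree under `ktl kube`.
	kube := kubectlcmd.NewDefaultKubectlCommand() // assumed constructor name
	kube.Use = "kube"
	root.AddCommand(kube)

	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}
```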

- providing tty for subresources that accept - e.g. exec, attach
- defining shared exit codes (e.g. exit code when not doing a conditional write)
- test infrastructure for unit and integration tests
- support version skew testing

droot (Collaborator) Dec 14, 2017

nit: indentation

But what about all the old blog posts, docs, books?

- These become out of date anyway as new and improved solutions are built (e.g. new APIs are introduced)
- Docs can be updated

droot (Collaborator) Dec 14, 2017

Does it make sense to add a Migration section which talks about how people can start using the new CLI without disrupting their existing tools? For example:
Aliasing "kubectl" --> "ktl kube"

pwittrock added some commits Dec 15, 2017

pwittrock (Author, Member) commented Dec 15, 2017

One quick note: I didn't see the "using right version of kubectl depending on the targeted cluster" part in the proposal, or maybe I missed it. It was part of the PoC code you shared (1.7 vs 1.8 switching of kubectl).

It was a fun part of the PoC, but I wanted to keep this proposal focused.
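For reference, a rough sketch of the 1.7 vs 1.8 switching behavior from that PoC (explicitly out of scope for this proposal): query the target cluster's version via the discovery API and exec a matching kubectl binary. The `kubectl-<major>.<minor>` naming scheme is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Simplified kubeconfig loading; a real tool would honor the usual
	// KUBECONFIG merge rules and flags.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := dc.ServerVersion()
	if err != nil {
		panic(err)
	}

	// e.g. run kubectl-1.7 against 1.7 clusters and kubectl-1.8 against 1.8.
	bin := fmt.Sprintf("kubectl-%s.%s", v.Major, v.Minor)
	cmd := exec.Command(bin, os.Args[1:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```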

errordeveloper (Member) left a comment

looking forward to this!

@@ -0,0 +1,214 @@
# ktl (kubectl2) design proposal

errordeveloper (Member) Dec 15, 2017

Should this be in KEP format?

pwittrock (Author, Member) Dec 15, 2017

Done

- many commands depend on test infrastructure bespoke to kubernetes/kubernetes, which would need to be moved as well

Additionally, many kubectl commands are laden with technical debt, using anti-patterns for working with APIs
that do not work with version skew or support extensibility. We have since grown out of using these patterns,

errordeveloper (Member) Dec 15, 2017

Names of some of those would be good, perhaps links to some issues...

pwittrock (Author, Member) Dec 15, 2017

Added examples


Static dispatch:

- vendor in kubernetes/kubernetes/cmd/kubectl cobra commands, and register them under `ktl kube`

errordeveloper (Member) Dec 15, 2017

I like ktl; I hope people will stop discussing how to pronounce the main command, as it reads naturally as ktl or something like cattle or kettle.

Seems like commands are gonna get even longer, but I like ktl... have you thought of other options?
Perhaps we could consider restructuring some of the subcommands a little bit?

I know a lot of users who aliased kubectl as k, perhaps we should aim for making basic commands shorter, and not longer.

As ktl is kind of a short version of kubectl, it already implies kube, and many people will think of it that way, so ktl kube is a bit of a mouthful.
How about ktl core with a ktl c alias, or actually kctl <command> auto-mapped to kctl core <command>? (By the way, it'd seem okay to disallow plugins from remapping top-level commands, but we can discuss that separately.)

pwittrock (Author, Member) Dec 15, 2017

I know a lot of users who aliased kubectl as k, perhaps we should aim for making basic commands shorter, and not longer.

As we rewrite commands, they will become shorter since they will be directly under ktl.

, or actually kctl auto-mapped to kctl core

We could do that, but why not keep kubectl as the name if we are going to?


- use GCP container builder + mirrored GCP source repo
- publish binary to gs:// bucket
- publish binary + release notes to GitHub releases page

errordeveloper (Member) Dec 15, 2017

Publishing a container image would also be very handy; if we use GCP builder, it seems like a no-brainer.

pwittrock (Author, Member) Dec 15, 2017

yes, agreed

pwittrock added some commits Dec 15, 2017

- build a new cli binary `ktl` (kubectl2) under the `kubernetes/ktl` repo that dispatches to commands developed in other repos
- keep old cli commands in `kubernetes/kubernetes/cmd/kubectl` and vendor them into `kubectl/ktl`
- build new cli commands in `kubernetes/kubectl` and vendor them into `kurnetes/ktl`
- build common cli Kubernetes client infrastructure and libraries that can be used to develop a decentralized cli ecosystem

mengqiy (Collaborator) Dec 19, 2017

You may want to clarify where the CLI infra code lives. I assume it is kubernetes/kubectl.



Some examples of anti-patterns:

- Relying on specific compiled in API types for generic operations (scale, rollback)

deads2k Dec 20, 2017

scale will be generic in a month or so. kubernetes/kubernetes#56077 already merged and we're ready to start switching individual scalers over. If we're willing to crack a few eggs on the way through, it should land without too much trouble.

shiywang (Member) Dec 20, 2017

reconcile (kubernetes/kubernetes#56197) is also a painful one, I think.

pwittrock (Author, Member) Dec 21, 2017

Glad to hear it. Sounds like a good start.


- Relying on specific compiled in API types for generic operations (scale, rollback)
- Using internal type definitions (printers, various other commands)
- Round tripping objects read from the server through go structures (some exceptions where this may be ok)

deads2k Dec 20, 2017

This is usually ok since round tripping through an internal type back to external and computing a resultant patch is indistinguishable from using an external client. Any defaulting that happens results in a zero-diff patch for that area. The result under version skew is no worse than with a skewed version of an external client.

smarterclayton Dec 21, 2017

Yeah, you're allowed to round trip for patch. You're not allowed to round trip for PUT. But no one should be allowed to PUT except replace.


- build a new cli binary `ktl` (kubectl2) under the `kubernetes/ktl` repo that dispatches to commands developed in other repos
- keep old cli commands in `kubernetes/kubernetes/cmd/kubectl` and vendor them into `kubectl/ktl`
- build new cli commands in `kubernetes/kubectl` and vendor them into `kurnetes/ktl`

deads2k Dec 20, 2017

Why does this step exist? Seems strange to build new commands outside of the repo you plan to use them in.

shiywang (Member) Dec 20, 2017

typo kurnetes->kubernetes

smarterclayton Dec 21, 2017

Yeah, enumerate why this helps limit scope, as opposed to folding kubectl into here.


Static dispatch:

- vendor in kubernetes/kubernetes/cmd/kubectl cobra commands, and register them under `ktl kube`

deads2k Dec 20, 2017

If you were going to do this (I'm not weighing in on that one way or the other yet), why not be more judicious about which things are included? If you made kubectl match the dynamic plugin mechanism, you could avoid ever taking a dependency on k8s.io/kubernetes. Doing this appears to be in opposition to your previous statement about "many kubectl commands depend on internal kubernetes/kubernetes libraries". You'd still have that dependency, it would just be harder to see.

pwittrock (Author, Member) Dec 20, 2017

Using the dynamic plugin mechanism complicates things like distribution and auto-update. See goals for more details.

This would quarantine the dependency without requiring a ground up rewrite. That is sufficient for our immediate goals.

deads2k Dec 20, 2017

This would quarantine the dependency without requiring a ground up rewrite. That is sufficient for our immediate goals.

It usually doesn't result in that quarantine. See #179 (comment). We've tried very simple vendorings of client-go and ended up with intractable failures.

pwittrock (Author, Member) Dec 21, 2017

Throwing it behind an internal package should be sufficient.

pwittrock (Author, Member) Dec 21, 2017

The "glog does bad things to global flags package" is I think a stronger argument against vendoring.

smarterclayton Dec 21, 2017

I can't see the benefit of this - every system that has ktl is going to have kubectl for various reasons. Distribute them together, have ktl use dynamic dispatch until such a time as you can break it up. I can't imagine someone who is going to have just ktl installed.


Dynamic dispatch:

- support git-style plugins that allow registering new commands under `ktl`

deads2k Dec 20, 2017

Doesn't this produce the same land-rush concerns we were worried about before? ktl login (for example) is likely to be a hotspot of incompatible extensions.

pwittrock (Author, Member) Dec 20, 2017

What simpler alternative do you suggest?

deads2k Dec 20, 2017

What simpler alternative do you suggest?

Simplest case? domain name separation works well.

pwittrock (Author, Member) Dec 20, 2017

Unix has managed to function without this
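As a reference point for the git-style plugin discussion, a minimal sketch of dynamic dispatch: an unknown subcommand `ktl foo` resolves to an executable named `ktl-foo` on PATH. The naming scheme and behavior here are illustrative assumptions, not a spec from the proposal.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// dispatchPlugin resolves `ktl <sub>` to an external binary named `ktl-<sub>`,
// the same way git resolves git-<sub> commands.
func dispatchPlugin(args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("no subcommand given")
	}
	bin, err := exec.LookPath("ktl-" + args[0])
	if err != nil {
		return fmt.Errorf("unknown command %q: %v", args[0], err)
	}
	cmd := exec.Command(bin, args[1:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := dispatchPlugin(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```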


### Cli commands

All new cli commands should be built outside of kubernetes/kubernetes and vendored into the kubernetes/ktl. Command repos

deads2k Dec 20, 2017

Can you explain the benefit of forcing a layer of vendoring? I see a distinction between developing reusable CLI infrastructure (if that's a thing we think we want to do) and writing commands. By all means, vendor the infrastructure, but why vendor the commands too?

pwittrock (Author, Member) Dec 20, 2017

  1. It keeps kubernetes/kubernetes outside of kubectl
  2. Each vendored set of commands can have a separate release cycle from the others

deads2k Dec 20, 2017

It keeps kubernetes/kubernetes outside of kubectl
Each vendored set of commands can have a separate release cycle from the others

Picking winners and losers like this, along with the vendoring concerns, seems contrary to the goals of being generic and having a simple core. If we do this, we could instead focus on the simple core. We have a client (kubectl) that already works. We should be careful to avoid compromises in a new tool, made for quick progress, that we'll later regret.

pwittrock (Author, Member) Dec 20, 2017

In my experience, rapid iteration and continual refactoring + adjustment over time will typically have better results than careful planning with slow iteration cycles.

smarterclayton Dec 21, 2017

Note that by adding a new CLI tool ktl we are imposing an enormous cost on the kubernetes ecosystem and users. That's not free - far from it.

pwittrock (Author, Member) Jan 2, 2018

Is this because of fragmentation?

All new cli commands should be built outside of kubernetes/kubernetes and vendored into the kubernetes/ktl. Command repos
implement an interface that returns a list of cobra (spf13) commands, which ktl can register under the root.

Initially new cli commands can be built in the kubernetes/kubectl, but if needed, development may be moved to other repos.
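One hedged sketch of what that "returns a list of cobra commands" interface could look like; the interface and package names below are hypothetical, not part of the proposal.

```go
// Package cliplugin is an illustrative name for shared ktl registration glue.
package cliplugin

import "github.com/spf13/cobra"

// CommandProvider would be implemented by each command repo that ktl vendors.
type CommandProvider interface {
	// Commands returns the cobra commands to register under the ktl root.
	Commands() []*cobra.Command
}

// Register mounts every provider's commands on the root command.
func Register(root *cobra.Command, providers ...CommandProvider) {
	for _, p := range providers {
		root.AddCommand(p.Commands()...)
	}
}
```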

deads2k Dec 20, 2017

We (openshift) have a significant amount of experience stitching together multiple repos through vendoring. The experience is sub-par (not a chance I'd do it if they didn't pay me). In addition, the state of golang as a language makes vendoring without stripping vendor trees difficult. The more repos you add, the greater the risk of them relying on conflicting levels of base libraries which makes them impractical to vendor.

The simplest example would probably be client-go. If you have two things you're vendoring that rely on client-go and they don't agree on level, you can't even pass through the basic client config types to both. Everyone in the entire tree moves together or not at all. Or ends up trying to write shims through core types like strings.

pwittrock (Author, Member) Dec 20, 2017

Things are much simpler if you don't strip vendor trees.

deads2k Dec 20, 2017

Things are much simpler if you don't strip vendor trees.

Eh, they seem so until you start trying to provide infrastructure down the layers. When you do that, you end up having to partially flatten since second-order interfaces (an interface with a function returning another interface) don't resolve in golang. Even very simple vendor trees usually require partial flattening since naughty packages like glog use global flag registration or default http registration, which results in panics at runtime.

Most projects I've seen strip-vendor.

sttts Dec 20, 2017

Also note that multiple vendor trees will multiply the size of the binary as well.

pwittrock (Author, Member) Dec 20, 2017

Also note that multiple vendor trees will multiply the size of the binary as well.

It's not terrible, though. If we are worried about download size, each full kubectl vendor only adds about 5MB. If we are worried about the install size, I believe it was only about 25-50MB.

smarterclayton Dec 21, 2017

In general, the only vendored packages that can safely be left vendored are those that depend only on stdlib. The odds of a package exposing a vendored dependency via its public API approach unity when more than 2-3 parent packages are imported. In practice, not stripping vendor trees means you can't invoke most of the public methods.

Example: etcdclient - if we didn't strip gRPC, we would be unable to add wrappers or transports or tweak gRPC, because the packages under vendor are hidden from us. This is a massive PITA.

I don't think vendoring without stripping is practical except in carefully chosen cases, of which this is not one.

#### Conventions

- ktl top level commands are one of
  - command groups that work against a specific API group/versions (e.g. istio, svcat, kube) e.g. `create`

deads2k Dec 20, 2017

Allowing abbreviations, or really almost anything other than domains, seems fraught with collisions. auth, apps, or rbac are all "obvious", but they are also collision prone.

In our experience kubectl create rolebinding is a non-obvious thing. Does it create an openshift one or a kube one? Why are some preferred?

pwittrock (Author, Member) Dec 20, 2017

We won't have control over this for plugins.

#### Conventions

- ktl top level commands are one of
  - command groups that work against a specific API group/versions (e.g. istio, svcat, kube) e.g. `create`

deads2k Dec 20, 2017

Also, making ktl create statically built will make life difficult as you attempt to vendor and reconcile different levels of core libraries.

pwittrock (Author, Member) Dec 20, 2017

This should be simpler if we don't strip vendors

deads2k Dec 20, 2017

This should be simpler if we don't strip vendors

Commented in other places in the thread. Not stripping vendors doesn't work if you want to provide any infrastructure and you want to be able to vendor more than one thing.

- command groups that work against a specific API group/versions (e.g. istio, svcat, kube) e.g. `create`
for a given resource.
- generic commands that are agnostic to specific APIs, but instead discover API metadata and work against
all APIs following published conventions. e.g. a `scale` that works against anything with a scale sub resource

deads2k Dec 20, 2017

The core work for much of this should happen in client-go. See how kubernetes/kubernetes#56077 leverages the generic scale client from client-go.

pwittrock (Author, Member) Dec 20, 2017

Look forward to seeing that.
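For illustration, a sketch of a fully generic scale built on the dynamic client and the /scale subresource; this assumes a recent client-go API surface rather than what existed when this thread was written, and is not the generic scale client referenced above.

```go
// Package scaleclient is an illustrative name for a generic scale helper.
package scaleclient

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

// ScaleAny patches the /scale subresource of any resource that exposes one,
// without compiling in the resource's Go types.
func ScaleAny(cfg *rest.Config, gvr schema.GroupVersionResource, ns, name string, replicas int32) error {
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}
	patch := []byte(fmt.Sprintf(`{"spec":{"replicas":%d}}`, replicas))
	_, err = client.Resource(gvr).Namespace(ns).Patch(
		context.TODO(), name, types.MergePatchType, patch, metav1.PatchOptions{}, "scale")
	return err
}
```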

and provide common functionality to cli clients of Kubernetes.

- reading and parsing kubeconfig
- printing libraries

deads2k Dec 20, 2017

I think we're less interested in the library and more interested in consistent flags and behavior. I'd really like to see this gotten "right" in kubectl before trying again. I'm not convinced that we've learned from prior mistakes. options, addflags, and toX provide consistency, flexibility, and ease to the generic apiserver. Seems like we could transform kubectl to use something like that and prove it works.

pwittrock (Author, Member) Dec 20, 2017

I don't think we should block on perfecting something in kubectl. The iteration cycle will be much slower.

smarterclayton Dec 21, 2017

I think the end result of this entire effort will be two CLIs - kubectl (slower) and ktl (faster, some improvements, some differences). We aren't going to kill kubectl except by having something materially better, so we'll continue to need to evolve kubectl slowly, which means duplication and split effort. I'm not as worried about the libraries in the scheme of things, although I think printers is something that's on an arc to be correct.
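As a concrete reference for the "reading and parsing kubeconfig" item quoted above, a minimal sketch using client-go's clientcmd package, which already provides this behavior today:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Honors KUBECONFIG, ~/.kube/config, and the usual merge/override rules.
	loading := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loading, overrides).ClientConfig()
	if err != nil {
		panic(err)
	}
	fmt.Println("API server:", cfg.Host)
}
```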

- reading and parsing config into objects from files, directories, stdin
- indexing + querying discovery API and openapi data
- manipulating unstructured objects using openapi schema
- merging objects

deads2k Dec 20, 2017

please clarify what this is.

- manipulating unstructured objects using openapi schema
- merging objects
- generating patches from objects
- writing objects to the server - optimistic locking failure retries with exp backoff
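A hedged sketch of the "generating patches from objects" item in the list above, using apimachinery's strategic merge patch helper; the Pod objects here are purely illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

// twoWayPatch computes a strategic merge patch between the original object
// (as read from the server) and the locally modified copy.
func twoWayPatch(original, modified *corev1.Pod) ([]byte, error) {
	origJSON, err := json.Marshal(original)
	if err != nil {
		return nil, err
	}
	modJSON, err := json.Marshal(modified)
	if err != nil {
		return nil, err
	}
	// The typed struct supplies the patchStrategy/patchMergeKey metadata.
	return strategicpatch.CreateTwoWayMergePatch(origJSON, modJSON, &corev1.Pod{})
}

func main() {
	orig := &corev1.Pod{}
	mod := orig.DeepCopy()
	mod.Labels = map[string]string{"app": "example"}
	patch, err := twoWayPatch(orig, mod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch))
}
```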

deads2k Dec 20, 2017

This needs rethinking. If you're writing a client from scratch, you should assiduously avoid update. The issues with skew, dropped fields, and update are real. External clients provide no benefit or protection over internal types, so don't think you'll get away from it by using them.

Also, patch already retries for you server-side.

pwittrock (Author, Member) Dec 20, 2017

Patch is broken. There are a number of fields it doesn't work for.

deads2k Dec 20, 2017

Patch is broken. There are a number of fields it doesn't work for.

Focusing on that problem seems like it will bear more fruit than trying to resolve the skew issues that have plagued clients for many years with open, unstaffed proposals to fix them.

pwittrock (Author, Member) Dec 20, 2017

There isn't a good way to fix it without breaking backward compatibility. :(

smarterclayton Dec 21, 2017

We can easily introduce new versioned patch mime types to fix issues.

If we don't fix patch, then we're basically saying users will continue to be broken - that has to have a really good justification.

pwittrock (Author, Member) Jan 2, 2018

The issues with skew, dropped fields, and update are real.

If I understand correctly, these issues are caused by serialization using go-structs. Using unstructured doesn't have this issue, and IIUC proto3 will soon also keep fields when skewed.

Also, patch already retries for you server-side.

This isn't sufficient in cases where the value being written is based off the value read. An example is incrementing a counter, which depends on the value of the counter that was read. Client-side retries on optimistic locking failures are a well-understood pattern and we should support them.

Focusing on that problem seems like it will bear more fruit than trying to resolve the skew issues that have plagued clients for many years with open, unstaffed proposals to fix them.

What are the skew issues you are referring to besides the dropped fields? As I understand it, the problem with dropping fields is specific to the serialization, not the API itself.

If we don't fix patch, then we're basically saying users will continue to be broken - that has to have a really good justification.

Agreed. We should make a decision and justify that decision regardless of what it is. We haven't done a good job of supporting strategic merge PATCH, and we should be honest about whether we are going to do a better job in the future. It would be helpful if there was a document that describes the original justification for creating our own bespoke PATCH format and what other options were considered instead; do you know of one? In my experience it has created more problems than it has solved.

Regardless, the ownership for strategic merge PATCH should come out of api machinery. Fixing that API should not fall upon the clients.

deads2k Jan 2, 2018

This isn't sufficient in cases where the value being written is based off the value read. An example is incrementing a counter, which depends on the value of the counter that was read. Client-side retries on optimistic locking failures are a well-understood pattern and we should support them.

I'm still not seeing how that precludes using patch since you can include resourceVersion in those cases and still avoid dropped fields.

pwittrock (Author, Member) Jan 2, 2018

2 other things to consider w.r.t. going all in on PATCH / strategic merge PATCH:

- My understanding is that not every resource type supports strategic merge patch
- PATCH only helps with dropping fields for compiled in types that are skewed, but does not help for either 1) extension types that aren't compiled in or 2) new types that aren't compiled into old clients (yet)

pwittrock (Author, Member) Jan 3, 2018

I'm still not seeing how that precludes using patch since you can include resourceVersion in those cases and still avoid dropped fields.

That doesn't (though there are a number of other reasons I would give not to use PATCH), however it does preclude relying exclusively on server-side retries for optimistic locking failures. I believe that is what this discussion was originally about:

writing objects to the server - optimistic locking failure retries with exp backoff
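For illustration, the client-side optimistic-locking retry pattern referenced here, using the retry helper client-go ships today; the deployment name, namespace, and annotation key are hypothetical, and the call signatures assume a recent client-go.

```go
package main

import (
	"context"
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Re-read the object, re-apply the change, and re-write on every
	// 409 Conflict, with exponential backoff between attempts.
	err = retry.RetryOnConflict(retry.DefaultBackoff, func() error {
		d, err := cs.AppsV1().Deployments("default").Get(context.TODO(), "example", metav1.GetOptions{})
		if err != nil {
			return err
		}
		// The "increment a counter read from the server" case from the thread.
		if d.Annotations == nil {
			d.Annotations = map[string]string{}
		}
		n, _ := strconv.Atoi(d.Annotations["example/counter"])
		d.Annotations["example/counter"] = strconv.Itoa(n + 1)
		_, err = cs.AppsV1().Deployments("default").Update(context.TODO(), d, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}
```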

- merging objects
- generating patches from objects
- writing objects to the server - optimistic locking failure retries with exp backoff
- providing tty for subresources that accept - e.g. exec, attach

deads2k Dec 20, 2017

I'd like to see these become polymorphic subresources as well. Accept an options struct of some kind (ExecOptions?) and you can execute against anything. Again, this benefit doesn't require tossing kubectl.

pwittrock (Author, Member) Dec 20, 2017

What repo (developed outside kubernetes/kubernetes) would you suggest?

deads2k Dec 20, 2017

What repo (developed outside kubernetes/kubernetes) would you suggest?

As we did with scale, client-go for clients works well.

pwittrock (Author, Member) Dec 20, 2017

client-go is developed inside kubernetes/kubernetes. One of the goals is to develop more infrastructure outside of kubernetes/kubernetes.

smarterclayton Dec 21, 2017

I don't follow the logic of why polymorphic sub resources (part of API conventions) and "developing infrastructure outside of k/k" have anything to do with each other. If we have interface like sub resources, someone has to spec them, and that spec is in k/k.

Splitting out things from k/k has to have a justified reason that pays for itself. Splitting is not the goal (if all the same people working on ktl are the people working on kubectl, we're just doing things worse).

pwittrock (Author, Member) Jan 3, 2018

I think I may have missed @deads2k's original point. I agree these should become subresources. In fact I think they already are (path from swagger.json):

"/api/v1/namespaces/{namespace}/pods/{name}/exec"

My intention was to argue that we provide libraries to simplify invoking these subresources from the cli. I would rather build such tools outside of kubernetes/kubernetes because building inside kubernetes/kubernetes has a number of disadvantages - the code is slow to release stable versions, the submit queue and release tests are a behemoth to get through, issue management is overwhelming, etc. Publishing from kubernetes/kubernetes -> kubernetes/client-go won't address these issues.
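As an illustration of invoking that exec subresource from a cli library, a hedged sketch using client-go's remotecommand helpers; the pod name, namespace, and command are hypothetical.

```go
package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Build the /api/v1/namespaces/{ns}/pods/{name}/exec request.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("example").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"sh"},
			Stdin:   true,
			Stdout:  true,
			Stderr:  true,
			TTY:     true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	// Wire the local terminal to the remote process, with a TTY.
	if err := exec.Stream(remotecommand.StreamOptions{
		Stdin: os.Stdin, Stdout: os.Stdout, Stderr: os.Stderr, Tty: true,
	}); err != nil {
		panic(err)
	}
}
```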

- Easier to redefine command wide things - such as version skew support


### Don't vendor in commands, make them plugins instead*

deads2k Dec 20, 2017

Vendoring in commands, especially when you want to vendor in more commands, gives you a big boost to initial velocity at the cost of bringing forward long-term debt that may end up being insoluble. See the culmination of debt/conflict with docker/distribution and k8s.io/kubernetes as it bit openshift/origin. Jordan and I were only able to solve that by amputation of the afflicted code. I would strongly recommend against this.

pwittrock (Author, Member) Dec 20, 2017

That is the eventual goal. We don't have the luxury of starting with a ground up rewrite.

pwittrock (Author, Member) Dec 20, 2017

Keeping the bad code as a vendor helps this.

To address these challenges, sig-cli maintainers have worked toward moving kubectl out of the main repo
for the past year. This process has been slow and challenging due to the following reasons:

- many kubectl commands depend on internal kubernetes/kubernetes libraries

soltysh (Contributor) Dec 20, 2017

If you vendor current kubectl you'll end up having the same problem, still. It won't magically disappear just by vendoring. In a lot of cases you'll still have to handle that.

adohe (Member) Jan 23, 2018

We should switch to vendoring separate Kubernetes repos, like kubernetes/client-go

pwittrock (Author, Member) commented Dec 20, 2017

Thanks @deads2k, I understand the issues with vendoring kubernetes/kubernetes better now. Will incorporate feedback into the proposal.


## Unresolved Questions [optional]

- Should we rename / restructure the command, or keep kubectl and all the baggage that comes with it?

smarterclayton Dec 21, 2017

The question is what CLI API guarantees must we preserve (which ones will users accept, and which will users reject)? Do we preserve the scripting semantics of get? The resource argument ordering? Places where the CLI invocation is broken is where we don't want to inherit, places where the cost to end users is going to be very high to switch is where we do.


- Stop trying to wholesale move kubectl out of kubernetes/kubernetes
- we have been working on this for a year with limited success
- Instead, create a new cli command called `ktl`.

smarterclayton Dec 21, 2017

Can you do a pitch here about why ktl or whatever we call it is better? This is mostly a "we have to refactor stuff because refactoring is good" proposal - adding a vision statement is really important to justify the cost we'll be taking on and adding to users.

Something like:

In order to evolve the command line experience for Kubernetes, and benefit from lessons learned about how to best support declarative configuration of applications AND easy extension of infrastructure primitives, we propose a new CLI tool that will be simpler, easier to work with and extend, drive improvements to core user actions, and still preserve existing user support via kubectl.

adohe (Member) Jan 23, 2018

Also justify the possible cost for users to switch to this new command line tool.

pwittrock (Author, Member) commented Dec 21, 2017

Thanks for the responses. Out for the holidays. Will follow up next year. See you all in January!

pwittrock (Author, Member) commented Jan 2, 2018

@deads2k @smarterclayton

Thanks for the comments. Another direction to consider would be to simply vendor in all of the kubectl related code from kubernetes/kubernetes at the root of the kubernetes/kubectl repo and build the kubectl binary from the kubernetes/kubectl repo. This would allow us to introduce new commands developed in the kubernetes/kubectl repo in parallel to moving libraries and commands out of kubernetes/kubernetes. This is a more conservative approach, but unblocks new development from occurring elsewhere.

Thoughts?

- slows down velocity
- slows down feedback on alpha and beta features - longer time to GA
- each release is heavier weight due to its length
- obfuscates version skew as a first class problem

adohe (Member) Jan 23, 2018

Basically, we must have a different release cycle between the cli and the cluster itself.

- obfuscates version skew as a first class problem
- individual commands cannot be released at different intervals from one another
- some commands should be released every 3 months with the kubernetes cluster
- others could be released daily

adohe (Member) Jan 23, 2018

Does this mean that we will distribute ~50 separate go commands? It could be another disaster.

soltysh (Contributor) commented Feb 7, 2018

I doubt that vendoring just kubectl-related code will solve any of the concerns everyone mentioned. As you know, kubectl code is too tightly integrated with the current k8s code base, which will always lead to all the aforementioned problems. Why not make ktl an all-generic approach to talking to k8s? With all the work happening right now we're moving the majority of responsibilities to the server, so the direction of a less loaded client seems like a good approach.

pwittrock (Author, Member) commented May 7, 2018

Closing this for now. Will reopen in some other form.

pwittrock closed this May 7, 2018
