
Update sig-cli charter scope. #3164

Closed
wants to merge 3 commits

Conversation

pwittrock
Member

Be more specific w.r.t. the types of commands that are in scope and the design philosophy behind what is supported.

@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Jan 29, 2019
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pwittrock

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jan 29, 2019
@pwittrock
Member Author

/assign @soltysh
/assign @seans3

@pwittrock
Member Author

pwittrock commented Jan 29, 2019

/hold

This will need to be approved by SIG ARCH and the SC after it has consensus from sig-cli and other stakeholders.

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jan 29, 2019
- Invoking Kubernetes APIs:
- Resource APIs - e.g. `create`, `replace`, `delete`, `patch`, `get`, etc
- SubResource APIs - e.g. `exec`, `attach`, `logs`, `scale`, etc
- Discovery Service - e.g. `api-versions`, `api-resources
Member

missing closing backtick after `api-resources`
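
As an illustrative aside (not charter text), the command families quoted above correspond to kubectl invocations like the following; the object names are hypothetical:

```sh
# Resource APIs: CRUD on objects stored in the cluster
kubectl create -f pod.yaml        # hypothetical local manifest
kubectl get pods
kubectl delete pod my-pod

# SubResource APIs: operate on a facet of an object
kubectl logs my-pod
kubectl exec my-pod -- ls /
kubectl scale deployment my-app --replicas=3

# Discovery service: ask the API server what it serves
kubectl api-versions
kubectl api-resources
```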

- e.g. Kustomization `resources` can refer to Resource Config files
- Configure how to talk to a specific cluster from the cli
- e.g. `config`
- e.g. `--contect`
Member

context
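
For illustration, the cluster-connection configuration mentioned in the quoted lines is typically exercised like this (context names are hypothetical):

```sh
# Persistent configuration via the `config` command family
kubectl config get-contexts
kubectl config use-context staging-cluster    # hypothetical context name

# Per-invocation override via the --context flag
kubectl --context=prod-cluster get nodes      # hypothetical context name
```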

SIG CLI [README]
#### Transparently Exposing APIs through Declarative and Imperative Workflows

The scope of CLI Tools focuses on enabling declarative and imperative workflows
Member

I think this could be clarified further.

kubectl provides resource-oriented commands (as opposed to some other central concept, such as package).

kubectl is part swiss-army knife and part reference implementation for interacting with the API.

In the scope document, I mentioned that since some Kubernetes primitives are fairly low-level, in addition to general-purpose resource-oriented operations, the CLI also supports “porcelain” for common, simple operational tasks (both status/progress extraction and mutations) that don’t have discrete API implementations, such as run, expose, rollout, cp, top, cordon, and drain. And there should be support for non-resource-oriented APIs, such as exec, logs, attach, port-forward, and proxy.

It may be worth calling out the long-standing position that widely used functionality, such as rolling update and garbage collection, should be moved server-side, the most recent example being server-side apply. So, effectively, the CLI acts as a proving ground for new API functionality.
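
A minimal sketch of the distinction drawn above, using commands that exist today (object and node names hypothetical):

```sh
# General-purpose, resource-oriented operations
kubectl get deployments
kubectl apply -f app.yaml                 # hypothetical manifest

# "Porcelain" for common operational tasks without a discrete API implementation
kubectl run nginx --image=nginx
kubectl expose deployment nginx --port=80
kubectl rollout status deployment/nginx
kubectl cordon node-1
kubectl drain node-1 --ignore-daemonsets

# Non-resource-oriented APIs
kubectl port-forward pod/nginx 8080:80
kubectl proxy --port=8001
```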

Member

But it reads more like the kubectl design principles than the SIG charter.

Maybe just say the scope is kubectl and things related to it, and then create a design principles doc separately? SIG UI calls out that its efforts center around the dashboard:
https://github.com/kubernetes/community/blob/master/sig-ui/charter.md#scope

Member

A subsequent but different comment: @bgrant0607 your above comments refer directly to kubectl. I think this charter is saying loudly that "other cli tools" are also in scope for sig-cli. (If we do mean "kubectl-only" then we should probably say that more explicitly)

Member

Practically speaking, the SIG hasn't had capacity to work on other CLIs, other than prototypes of new functionality and replacements for kubectl (e.g., a pure plugin framework and a purely dynamic client). I think it's fine to accept that reality and expand scope later if there's capacity to do so.

Member Author

I am good with just kubectl.

Contributor

I would expect any new tool ideas or feature ideas to need to go through a KEP. In that process any related SIGs would need to be noted (e.g., if they did something with apps then SIG Apps would need to be involved). Over there we would discuss graduation criteria and situations such as whether the ecosystem has already handled this, so that something new in that space would be more appropriate elsewhere (e.g., a CNCF sandbox project).

This is just me noting that some directional things are being handled by processes outside the charter.

For many of the features that move forward we should put them into the clients or move them server side so that other tools can take advantage of them. This is what happened with apply going server side and we can look to start other things there as a pattern.

Member Author

Yes. I am a big fan of the KEP process.


It is the philosophy of the tools developed in SIG CLI to facilitate working
directly with the Kubernetes APIs and Kubernetes style Resources, and to the
extend possible, provide a transparent experience for how commands map to
Member

s/extend/extent/

extend possible, provide a transparent experience for how commands map to
Kubernetes APIs and Resources.

Building tools that obfuscate the underlying Resources and APIs (e.g. through
Member

I'd say abstract rather than obfuscate

SIGs, such as kubeadm, (which is owned by SIG Cluster Lifecycle).
- SIG CLI is not responsible for tools or solutions developed outside of the Kubernetes
project.
- SIG CLI is not responsible for commands developed as plugins or other extension
Member

This may not be the case if existing built-in commands are converted to plugins, or plugins are used to prototype new built-in commands.

Contributor

That's why we say "outside of SIG CLI", even if built-in commands will turn into plugins (convert is one example, since it relies on internal types that won't move with kubectl to a separate repo), but sig-cli will still be responsible for it and it will be developed under the kubernetes org.

Contributor

If existing commands are converted to plugins how are they installed or how do people who used those commands in the past learn about how to get them back?

- From commands + arguments + flags - e.g. `create configmap`, `run`, `expose`
- From declarative files - Kustomization `configMapGenerator`, `secretMapGenerator`
- Transforming Kubernetes Resource Config locally or patching remotely
- From commands + arguments + flags - e.g. `annotate`, `set image`, `patch`
Contributor

s/set image/set
there are a few flavors

Member Author (@pwittrock, Jan 30, 2019)

yeah, there are a bunch. This is just a sampling (e.g. also label, scale, etc). LMK if you think it is valuable to list them all.
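
To make the quoted categories concrete, here is a hedged sketch of the same ConfigMap being generated imperatively and declaratively, plus a couple of the transformation commands; all names and values are hypothetical:

```sh
# Imperative generation from flags
kubectl create configmap app-config --from-literal=LOG_LEVEL=debug

# Declarative generation via a Kustomization (kustomize syntax of this era)
cat <<'EOF' > kustomization.yaml
configMapGenerator:
- name: app-config
  literals:
  - LOG_LEVEL=debug
EOF
kustomize build . | kubectl apply -f -

# Transformation: annotate a live object, or patch an image reference
kubectl annotate deployment my-app owner=team-a
kubectl set image deployment/my-app app=nginx:1.15   # "app" is a hypothetical container name
```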


SIG CLI develops go libraries for developing CLI tools for working with Kubernetes.
These libraries provide a subset of the libraries used to build the CLI tools
themselves.
Contributor

Maybe worth adding that kubectl will always be the first consumer of those.

Member Author

kubectl or cli tools in general?


Building tools that obfuscate the underlying Resources and APIs (e.g. through
generalized templating or DSLs) is an anti-goal of SIG CLI. Notable examples
of commands that violate this principle: `run`, `expose`, `autoscale`
Member

I'm not sure what having this in the charter scope means. Does that mean it is an explicit and fundamental goal of sig-cli to remove these subcommands from kubectl? (I'm ok with that, it just seems odd to call that out as a charter scope)

If it is an explicit goal to remove anything that "abstracts the underlying resources and APIs", then we should probably carefully consider the wording of this particular phrase. Eg: describe, rollout, kustomize are some other subcommands that don't map directly onto the REST API, but it might be surprising to have them suddenly in-scope for removal.

(tl;dr: I think the direction that this paragraph is trying to convey is unclear)

Member Author (@pwittrock, Jan 30, 2019)

I was trying to convey that building out new abstractions in place of the APIs is not a goal. Your comments are great though and I will address them.

Member Author

I just deleted that section in my pending update.

@pwittrock
Member Author

Thanks for all the comments. I am working on them.

scope also includes publishing a subset the libraries which were used to develop the tooling
itself.

The decision whether to publish specific functionality as part of kubectl,
Contributor

I don't think that this is true, I think that SIG-Architecture should be involved in project cross-cutting decisions like this.

Contributor

Over a year ago, at the guidance of the steering committee, there was a shift to treat some opinionated tools (e.g., Helm and Kompose) as ecosystem rather than a core part of the project. This allowed us to say one way was not the way and to encourage competition. This is walking close to those opinion spaces, depending on the solutions created.

Would that make it something SIG Arch would be interested in or provide further guidance for SIG CLI (or SIG Apps scope)?

Member

Do we think we need a comparable level of rigor for CLI command reviews as for API reviews? I think that's effectively what we're discussing. Do we want that for the dashboard also?

Helm and Kompose were added to the project for pragmatic, non-technical reasons, with full awareness that they were out of scope of the "core" of the project at a technical level, as documented at the time (in https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#what-kubernetes-is-not). They were never included in Kubernetes releases.

I'm working to document those historical reasons, as well as technical and non-technical criteria. The WIP doc is here:
https://docs.google.com/document/d/1JZ6WQhBOecKViW_Fa6JMxV6jppy4ZhsJ-ULBCgH43mQ/edit?ts=5c479ea4#

Once 1.14 issues are under control, I'll work on converting that to a PR, with more explanatory text.

Member

Another example: kubeadm, which is in releases, and is also adding commands.

Member

Went ahead and created a PR for the scope document:
#3180

Contributor

I did not read that in Brian's comment; I've seen it more as a question than a statement. A question that has not been answered.

Contributor

In the charter it notes:

The scope includes both low level tooling that may be used by scripts ...

If scripts are meant to use kubectl (and they do today) doesn't that make the commands, flags, and other arguments an API to those scripts?

Member

Yes, and are included in https://github.com/kubernetes/community/blob/master/sig-architecture/api-review-process.md#what-apis-need-to-be-reviewed

What parts of a PR are "API changes"?

  • Configuration files, flags, and command line arguments are all part of our user and script facing APIs and must be reviewed.

Contributor

That's what the sig is following on a daily basis.

- Defining or referencing a collection of Resource Config
- e.g. `-f` can reference a url, a file containing multiple Resources, or a directory
- e.g. `-R` can traverse a directory recursively
- e.g. Kustomization `bases` can refer to other Kustomizations
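
For readers unfamiliar with the quoted flags, a brief hedged illustration (URL and paths hypothetical):

```sh
# -f accepts a URL, a file containing multiple Resources, or a directory
kubectl apply -f https://example.com/manifests/app.yaml
kubectl apply -f ./manifests/

# -R traverses a directory recursively
kubectl apply -f ./manifests/ -R

# In a kustomization.yaml of this era, `bases:` lists other Kustomization directories to build on
```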
Contributor

First reference to Kustomize as far as I can tell, perhaps delete, since a generic description would work.

Member

None of the commands listed here are documented here. Links to documentation could be inserted. Or this could be extracted into another doc. Or perhaps just summarized.

No other SIG catalogs every API and feature in their charter. I don't think this level of specificity in a charter is a precedent we want to set.

Contributor

Yeah, agree that this level of detail is prob. unnecessary.

Member Author

I am good with deleting them.

[design principles](design-principles.md) for the focus of kubectl functionality.

This group focuses on command line tooling for working with Kubernetes APIs and Resource
Config. This includes both generalized tooling for working with Resources, Resource Config
Contributor

What does working with Resource Config mean? The configuration as stored in Kubernetes and accessible via the API, configuration files as they are stored and worked on outside of the API (e.g., on local disk), or both?

Member

BTW, the "on local disk" reference is within scope. kubectl has always read from local disk and, optionally, dumps output of operations, currently to stdout, but I wouldn't preclude writing to file if users want that.

kubectl and the API were co-designed exactly to support this kind of usage, without necessarily requiring (but not precluding) a different representation, in order to avoid the problem of the API becoming a purely internal interface, hidden beneath some other layer, as well as avoiding compounding other challenges, such as versioning of resource types. This is deeply engrained in the design of Kubernetes. That's described in the resource-model doc, the architecture doc, and other docs.

Contributor

That makes this an incredibly broad scope that crosses over quite a bit with the ecosystem. For example, there are already numerous tools to do resource validation of assets on local disk, in the ecosystem. And, this is just one area.

Should the scope be this broad and overlapping with tools in the ecosystem? What's the benefit for k8s to invest in this?

Contributor

Because end users need to apply minimal config everywhere a kube distribution happens. And minimal = takes kube objects and puts them on a server correctly. Declarative config for kube is well within scope. Templating and complex transformation/expansion (which I don't consider anything sig-cli has proposed) is not.

Every user of Kubernetes should be able to take an on-disk representation of declarative config in the Kubernetes mold and ensure the server state is correct. Secrets and configmaps are part of core, and giving users the necessary tools to take those from disk in a relatively safe way is also part of core.

I agree with Brian.

Contributor

complex transformation

I don't consider kustomize complex transformation, just patch application with conventions. It's the simplest possible thing that addresses having a declarative state with a bounded number of variants. I worked pretty closely with sig-cli to craft a set of boundaries that it shouldn't cross that keep it focused on "ensure local declarative state converges to remote state" without having unbounded scope. Templatization, language constructs, etc. Linear patch application is annoying to script, but common in practice.

Just to talk about the elephant in the room: kustomize is something like 20k lines of go, which I would not attempt to re-implement in multiple languages, and thus I would consider "not suitable as standard k8s" under the above criteria

I don't know if I'm missing something from your argument, but I don't generally consider lines of code and reimplementation difficulty as a suitability metric. kubectl has single commands that are easily 5k lines of go that could be reimplemented in 5 lines of python, but the extra 4995 lines are there to save someone from having to write the other 4995 lines of python if they don't want to.

Member

One useful thing about reference implementations is that they expose when things are more complicated than they should be.

Member Author (@pwittrock, Feb 1, 2019)

FWIW, another way of conceptualizing this is the following:

While apply allows users and systems to cooperatively determine the desired state of an object, kustomize allows users and (other) humans to cooperatively determine the desired state of an object (using the same concepts and mechanism as apply does).

(For me) it was only after about a year of helping with efforts to fix bugs in apply, and encountering issues that we had to fix as tacked-on imperative flags (e.g. --prune, -n), that it became clear we would need some declarative mechanism for these issues. Repeatedly discussing the mental model of jointly owned resource / resource config between users and the system eventually sparked the notion of empowering users with the same technique, but locally. (Though this was already sorta possible via kubectl patch -f file.yaml -p '{"spec":{"some": "json"}}' --dry-run -o yaml)

From this context we could frame 'Apply' as a mechanism to allow multiple parties to combine Resources + Resource Config together. One could argue that the kustomize functionality could be spiritually viewed as 'client-side apply', while the majority of what is today 'client-side apply' could be thought of as (what should be) 'server-side apply'.
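
A hedged sketch of the "combine Resource Config locally, then apply" idea described above, using the standalone kustomize layout of that period; directory and object names are hypothetical:

```sh
# overlay/ patches the shared Resource Config kept in base/
cat <<'EOF' > overlay/kustomization.yaml
bases:
- ../base
patchesStrategicMerge:
- replica-patch.yaml
EOF

cat <<'EOF' > overlay/replica-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # must match the name used in the base
spec:
  replicas: 5
EOF

# "Client-side apply": merge the layers locally, then send the result to the server
kustomize build overlay/ | kubectl apply -f -
```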

Contributor

Given the differences of opinion about where this line is, I think that a lot more clarity is needed in this charter around how we decide what is in and out.

I don't think it does the community any service to purposefully leave things vague so that everyone is happy with the charter, but unhappy with the implementation.

Personally, I actually think that drawing these sorts of lines is really, really hard to do.

And so I'd personally prefer to say:

"sig-cli is committed to keeping the surface area of kubectl minimal and constant with the exception of bug-fixes. sig-cli is also plans to ensure that plugin mechanisms in kubectl are sufficiently sophisticated to accomodate all subsequent functionality added to kubectl is via an ecosystem of plugins"

Or alternately:

"SIG-CLI is committed to only adding functionality that will be used by 80% of all Kubernetes users"

In core tools, I strongly believe that we should focus on building things where there is strong agreement in 80-90% of users that it is both useful, and the expected format.

I personally believe that kustomize is a bridge too far by this metric. Most people won't use it, and reasonable people will disagree about the manner in which it is designed and the syntax of the commands and patches. Given that (and the fact that it could easily be implemented as a plugin) it is way outside of what I would want the scope of SIG-CLI to be.

Note that it's really important that this doesn't mean that SIG-CLI people can't work on kustomize, only that its proper home is as a separate ecosystem project and plugin, not linked into kubectl and related go libraries.

Member

On plugins: I don't think we've figured out what types of command-line-tool plugins are valuable and feasible. The technical difference between a standalone tool and executable plugin is small, and the challenges of building, testing, releasing, distributing, and updating tools/plugins for all of the OSes and environments that are supported by Kubernetes are similar in both cases.
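
For reference, the mechanical difference being described really is small: a kubectl plugin is just an executable named `kubectl-<name>` on the PATH (the plugin name below is hypothetical):

```sh
# A standalone tool...
cat <<'EOF' > /usr/local/bin/hello-k8s
#!/usr/bin/env bash
kubectl get pods --all-namespaces
EOF
chmod +x /usr/local/bin/hello-k8s
hello-k8s

# ...becomes a kubectl plugin simply by being renamed to kubectl-<something>
mv /usr/local/bin/hello-k8s /usr/local/bin/kubectl-hello
kubectl hello
```

As the comment notes, the build/test/release/distribution burden is essentially the same either way.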

This includes but is not limited to commands to generate, transform, create,
update, delete, watch, print, edit, validate and aggregate information
about Resources and Resource Config. This functionality may be either
declarative or imperative.
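
As a brief illustration of the imperative/declarative split listed above (file and object names hypothetical):

```sh
# Imperative: the command line carries the desired change
kubectl create deployment my-app --image=nginx
kubectl edit deployment my-app

# Declarative: Resource Config files carry the desired state, and kubectl reconciles it
kubectl apply -f my-app.yaml
kubectl get -f my-app.yaml -o yaml    # print the live state of what the file declares
```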
Contributor

This focus appears to overlap with the ecosystem. If this is talking about the CRUD of configuration files locally (outside of a cluster), with more feature intent than we have now, would it be in a similar space to existing configuration managers?

Member

kubectl is not in a similar space with existing configuration managers.

Kubectl provides a reference implementation for interacting with the API, with relatively low-level building blocks. We've had a long-standing position (kubernetes/kubernetes#12143) that widely used, general-purpose functionality should be implemented eventually in the server. Past examples of functionality moving to the server are Deployment (rolling-update) and garbage collection (reaping). An ongoing example is server-side apply. Apply and strategic merge patch were important mechanisms pioneered by kubectl. OpenAPI-based validation is another example.

We've long intended kubectl's implementation to be available in library form as well as a command (kubernetes/kubernetes#7311), but that's been harder than expected, and has been preempted by other priorities, such as extracting kubectl from k/k, which is still an eventual goal, as productivity in k/k is low.

The reference implementation demonstrates how to use the API, including strategic merge patch, as well as providing a simple getting started tool and avoiding complexities of documenting the system with just, for instance, curl. It has long had (relatively simple) commands, such as run, for convenience of expected common operations. The other creation commands, especially create secret and create configmap, are in that category, as well. They help (esp. new users) not worry about schema details and yaml indentation.

kubectl's scope excludes packaging, dependency management, application publishing and discovery, lifecycle hooks, templating, configuration DSLs, and other things that configuration management tools do. And I haven't seen those tools do things, like apply, that kubectl does.

From the beginning (http://prs.k8s.io/1325), kubectl was intended to provide resource-oriented bulk declarative and imperative operations.

I wrote more about the kubectl design ethos in this comment:
#3164 (comment)
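
A hedged illustration of strategic merge patch, the mechanism referenced above: it is the default patch type for `kubectl patch`, and it merges list elements such as containers by their merge key (here `name`) rather than replacing the whole list. Object and container names are hypothetical:

```sh
kubectl patch deployment my-app --type=strategic \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"nginx:1.15"}]}}}}'
# Only the container named "app" is modified; other containers in the list are left untouched.
```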

Contributor

I appreciate the movement to features on the server side. For example, server side apply will help many tools by making the feature more accessible no matter the language.

We've had a long-standing position (kubernetes/kubernetes#12143) that widely used, general-purpose functionality should be implemented eventually in the server.

I wonder if this should be documented somewhere for clarity and reminder. Maybe as a SIG CLI principle?

While not in the scope of the charter, I wonder if that means the kustomization functionality needs to be moved server side to follow this position. Using this feature to test the position.

kubectl's scope excludes packaging, dependency management, application publishing and discovery, lifecycle hooks, templating, configuration DSLs, and other things that configuration management tools do. And I haven't seen those tools do things, like apply, that kubectl does.

There's a difference between how and what. Templates, configuration DSLs, and so forth are how a tool is implemented but not what it does.

For example, a tool that does overlays (like kubectl with kustomization files) that then documents how to do multi-environment application deployments using that would be considered doing configuration management, right? It's dealing with configuration management use cases and workflows (some details on the what) but using different implementation design patterns (the how).

Phil has started to work on better documentation for kubectl, which I applaud. It's needed and I look forward to more work.

But, one angle to the direction can be seen in the section titles around build, delivery, and deployment. The layout is for dealing with use cases from building images through deploying changes to varying environments. Mix in overlays to have environment specific config and you have configuration management, right?

Note, I'm going deep with this because how kubernetes engages with the ecosystem is important, IMHO, to its success. Configuration management, with the legacies companies have (both vendors and consumers), is one of those areas people debate and is a whole other change beyond adopting k8s and cloud native. And, the ecosystem is vital to k8s success just like Linux generally needs GNU.

Member

Quick comment: The server-side principle has been documented for a long, long time.

This should be merged with the new principles doc:
https://github.com/kubernetes/community/blob/master/contributors/devel/kubectl-conventions.md#principles

Another PR is open to move it, along with the other docs in devel.

Member

And the core of kustomize, strategic merge patch, IS on the server side.

Identification of resource cross-references has been identified as something that needs to be published server-side. Other functionality (e.g., policy enforcement) could benefit from it also.

Member

On deployment patterns:

Since 1.0 we've had examples of some deployment patterns, such as for canary deployments:
https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments

It doesn't preclude other patterns.

Actually, lots of our documentation needs to be updated and expanded. Few contribute to it, other than to document new features. So, yes, Phil should be applauded, for the new kubectl book, for his past work on kubectl documentation (e.g., https://kubernetes.io/docs/concepts/overview/object-management-kubectl/overview/), for the generated API documentation (https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/), and other contributions to the documentation, including the overall site design.

Member Author (@pwittrock, Feb 1, 2019)

Thanks. FYI the book is really an exploration inspired by the Rust documentation. Folks have seemed to like it, but I don't think anyone has gone deep on the content. Nothing in there should be taken as a concrete proposal until there is a real proposal to publish it. Sig docs has expressed positive feelings about it the last time I demoed it, but a lot has changed since then. It has as many good ideas as bad ones. I am hearing at least sig-cli, sig-arch and sig-docs should be included when it moves from exploratory to a concrete goal.

Contributor

@pwittrock FWIW, I appreciate the style you're going for and think content with this kind of flow is needed. The question that comes to my mind is how much of this content should be in the k8s project?

For example, the docs have a section for building images. If someone is going to create containers to run in Kubernetes this is very much needed. But, neither kubernetes nor kubectl do this. And, there are numerous builders available. So, how does the k8s project choose which builder to privilege by being in these examples? Should we even be in a place to do that?

I can see why SIG Docs likes the style. The choice of gitbook is one I appreciate, too.

Member Author

So, how does the k8s project choose which builder to privilege by being in these examples? Should we even be in a place to do that?

I don't think what you are describing is what I envisioned for that section. I updated the book to demonstrate something close to my original thinking - e.g. decisions the user will make when building an image that will impact how they use the tooling - e.g. using a digest and updating the image references vs using latest with an imagePullPolicy of Always.

I'd like to see more of this sort of discussion take place. I ask these sorts of questions to myself, and being able to discuss them with a quorum of stakeholders would ensure we have the authority to make decisions on these matters. Perhaps we could drive this out of a subproject within sig-docs and invite stakeholders from workloads / apimachinery / kubectl to participate.


@pwittrock
Member Author

I have actually changed this quite a bit in response to the comments. I was polishing the wording this morning but didn't publish. Don't want to make this a moving target for everyone (who would then be reviewing a variant). Will try to figure out the best way to move forward (e.g. publish now or wait until there is more resolution on the discussions).

- SIG CLI is not responsible for defining the Kubernetes API or Resource Types
that it interfaces with (which is owned by SIG Apps and SIG API Machinery).
- SIG CLI is not responsible for integrating with *specific tools* or APIs
developed outside of the Kubernetes project.
Contributor

Can we add application management as something that is out of scope? There are two reasons for this...

  1. Things with this level of functionality should be server side. This is even called out in the k8s scope doc Brian started. That way other tools and even things in other languages can use them. They are part of the REST API.
  2. In SIG Apps we are working on the Application CRD/Controller for this. It's already scoped and being worked on in some capacity using the existing server-side direction.

Member

We all need to work together on this project. SIG CLI is one of the project's horizontals. It supports commands specific to scheduling, autoscaling, node, API machinery, and workloads, for example. SIG Apps should work together with SIG CLI to ensure kubectl supports what is needed in that domain.

Member Author

Do you have a link to a formal definition of what qualifies as application management for this context? The way I have used the term in the past, many of the existing commands would fall into that category.

Additionally, in the updated charter I have written that kubectl is a proving ground for new API features. I could imagine a development path where some Application Management functionality could be started in the client and be moved to the server as has been done for other functionality. (not suggesting that we do this, but I don't know what the plans for AM are or what the owners of the area might want)

Contributor

There are several reasons I wouldn't want to put something in kubectl for application management right now.

  1. The App Def WG figured out that interoperability was a high priority and it's something SIG Apps adopted to support the various tools. Tool authors in this space wanted interoperability. It is now in the SIG Apps charter. If something is in kubectl it's not very interoperable. For example, you can't use it in Dashboard. Applications deployed in one tool should be viewable in another tool and even deleted via a 3rd tool.
  2. Work has already started on app management via a CRD/Controller (leveraging kubebuilder). We started at the API layer based on a start from @ant31.
  3. The more we use CRDs and controllers the more we learn what is needed to move them to beta and GA. This is a good place to dog food. kubectl being better with these, and not just app management, would benefit more people.
  4. Do we have a traceable need for a significant number of users?

Member

kubectl rollout might be considered "application management". OTOH, like the CRUD commands, apply and kustomize are general-purpose and applicable to all resource types, so are not.

Anyway, as mentioned elsewhere, kubectl can configure autoscaling, cordon nodes, expose services, launch and attach to running pods, and interact with all other resource types in the system -- it's a horizontal. So that will include applications, too, to the degree that concept exists in Kubernetes.

Contributor

Additionally, in the updated charter I have written that kubectl is a proving ground for new API features.

After thinking about this a little more, I think this may be a bad idea. In the API we have generally moved this to CRDs and controllers. Proving things out in core generally doesn't happen anymore unless it's an exception (do we even have those?).

I think the same would be good for kubectl. Plugins would be a good proving ground for new features. Or, an additional app that works via pipes.

This would help to flesh out plugins and continuously look at how kubectl works with pipes.

Is it still a good idea to use kubectl as a proving ground for new features?

Member Author (@pwittrock, Feb 4, 2019)

@mattfarina I suspect we are more closely aligned on vision than on terminology or the levels of abstraction / context we are referring to.

  • In sig-cli contexts I will refer to both authoring of Resource Config and CRUD operations on Resources as Application Management. (Also higher level CRUD-ish things such as Apply and --prune)
  • In sig-apps contexts, I could imagine referring to lifecycle hooks and packaging + dependency management as Application Management.

We could probably interpret the same statements as having quite different meanings and implications.

While Application Management is conceptually about a collection of related user journeys, the sig charters are oriented around ownership of specific mechanics. This seems like an impedance mismatch that is steering us towards a weird place. Development of functionality between sig-cli and sig-apimachinery has been relatively fluid, due in no small part to heroes like @deads2k, @liggitt and @apelisse (and others) who have developed ownership in the mechanics owned by both sigs and focus their efforts on the problems faced by users.

This leads me to the conclusion that it may have been a mistake to try to make this overly concrete, and that instead focusing on building trust and cross-sig leadership between the app management stakeholders would be a better strategy than trying to silo ownership of the solution space.

w.r.t. proving new API features, I agree that kubectl shouldn't be proving out new specific Resource APIs. Things like server-side apply and dry-run couldn't have been done as CRDs today so I think we should leave the door open for those types of things. A sig apps example would be kubectl rollout status which cuts across the workloads APIs.
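
To ground the two examples mentioned above (resource names hypothetical): `--prune` rides along with declarative apply, while `rollout status` is porcelain that cuts across the workloads APIs:

```sh
# Declarative apply that also prunes previously-applied objects no longer in the config
kubectl apply -f config/ --prune -l app=my-app

# Cross-workload porcelain: wait for a rollout to complete
kubectl rollout status deployment/my-app
```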

Contributor

w.r.t. proving new API features, I agree that kubectl shouldn't be proving out new specific Resource APIs. Things like server-side apply and dry-run couldn't have been done as CRDs today so I think we should leave the door open for those types of things. A sig apps example would be kubectl rollout status which cuts across the workloads APIs.

If something like kubectl rollout status were implemented today I'd want it to be as a plugin or external tool. Maybe even more than one during experimentation to see what works for what cases.

To continue the example, what if someone did it the way kubectl does it today and someone else did one based on labels where you could see a rollout of a deployment and statefulset that were part of the same thing being rolled out at the same time?

Sometimes it's useful to do multiple experiments, see how they fare against each other, and see what useful bits come out of them.

I'm not criticizing kubectl rollout status or the process it went through. I'm just looking at it as an example of how I would approach things today.

Wouldn't it be useful to do this kind of thing as a plugin (or plugins) and then see how it fares in usefulness (usability + utility) first?

I agree we need to have cross SIG engagement on these forms of issues.

Member Author

w.r.t. rollout (if it didn't exist today), I could get behind the idea of doing it as a separate binary (plugin or otherwise) to validate ideas and experiment. If workloads + machinery + cli agreed upon an approach, we could evaluate the best way to proceed.

[design principles](design-principles.md) for the focus of kubectl functionality.

This group focuses on command line tooling for working with Kubernetes APIs and Resource
Config. This includes both generalized tooling for working with Resources, Resource Config
Contributor (@mattfarina, Feb 1, 2019)

This group focuses on command line tooling for working with Kubernetes APIs and Resource
with Kubernetes API's.

Should this use the same language as the in-progress Kubernetes scope document that notes the CLI is basic and a reference implementation? It would be nice to have consistent language.

@brendandburns
Contributor

I would like to request a much broader and deeper review of this subject by SIG-Architecture and/or the k8s steering committee before this charter is merged.

Thanks!

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Feb 4, 2019
@mattfarina
Contributor

@brendandburns Per your request, I added it to the SIG Arch agenda for this week with your name next to it.

@pwittrock
Member Author

Closing this per the SIG Arch 2/7 agenda item - cli charter updates.

@pwittrock pwittrock closed this Feb 7, 2019
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. sig/cli Categorizes an issue or PR as relevant to SIG CLI. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.