Update sig-cli charter scope. #3164

Closed without merging (3 commits)
43 changes: 30 additions & 13 deletions sig-cli/charter.md
@@ -5,25 +5,43 @@ the Roles and Organization Management outlined in [sig-governance].

## Scope

The Command Line Interface SIG (SIG CLI) is responsible for kubectl and
related tools. This group focuses on general purpose command line tools and
libraries to interface with Kubernetes API's.

### In scope

SIG CLI [README]
The Command Line Interface SIG (SIG CLI) is responsible for kubectl, as well as accompanying
libraries and documentation. See kubectl [design principles](design-principles.md) for the
focus of kubectl functionality.

**Note:** Definition of kubectl may include commands developed by SIG CLI as kubectl plugins.

Kubectl is a dynamic + resource-oriented CLI, and reference implementation for interacting
with the API, as well as a basic tool for declarative and imperative management.

SIG CLI focuses on command line tooling for working with Kubernetes APIs and Resource Config.
This includes both generalized tooling for working with Resources, Resource Config
and Resource Types (e.g. using resource / object metadata, duck-typing, openapi, discovery,
scale and status subresources, etc), as well as tooling for working with specific Kubernetes
APIs (e.g. `logs`, `exec`, `create configmap`).
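
To make the distinction concrete, here is a minimal sketch (not part of the charter text; resource names, labels and images are hypothetical): the first pair of commands is generalized and works across Resource Types via discovery and metadata, while the second pair targets specific Kubernetes APIs.

```sh
# Generalized: works uniformly across Resource Types using labels, discovery
# and the scale subresource.
kubectl get deployments,statefulsets -l app=my-app -o yaml
kubectl scale deployment/my-app --replicas=3

# Specific Kubernetes APIs: logs, exec into a container, create a configmap.
kubectl logs my-app-pod-0 -c main
kubectl exec -it my-app-pod-0 -- sh
kubectl create configmap my-config --from-literal=color=blue
```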

#### Code, Binaries and Services
The scope includes both low level tooling that may be used by things like scripts,
as well as higher level porcelain to reduce user friction for simple,
common, difficult or important tasks performed by users. The
scope also includes publishing a subset of the libraries which were used to develop the tooling
itself.
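
A rough illustration of that range (hypothetical names, not from the charter): low level, script-friendly output versus porcelain that collapses a common task into a single command.

```sh
# Low level: machine-readable output that scripts can build on.
kubectl get pods -o jsonpath='{.items[*].metadata.name}'

# Higher level porcelain: one command instead of hand-authoring Resource Config.
kubectl create deployment my-web --image=nginx
kubectl expose deployment my-web --port=80
```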

SIG CLI code include general purpose command line tools and binaries for working
with Kubernetes API's. Examples of these binaries include: [kubectl and kustomize].
The decision whether to publish specific functionality as part of kubectl,
Contributor:

I don't think that this is true; I think that SIG-Architecture should be involved in project cross-cutting decisions like this.

Contributor:

Over a year ago, at the guidance of the steering committee, there was a shift for some opinions (e.g., Helm and Kompose) to be treated as ecosystem rather than a core part of the project. This allowed us to say one way was not the way and encourage competition. This is walking close to those opinion spaces, depending on the solutions created.

Would that make it something SIG Arch would be interested in or provide further guidance for SIG CLI (or SIG Apps scope)?

Member:

Do we think we need a comparable level of rigor for CLI command reviews as for API reviews? I think that's effectively what we're discussing. Do we want that for the dashboard also?

Helm and Kompose were added to the project for pragmatic, non-technical reasons, with full awareness that they were out of scope of the "core" of the project at a technical level, as documented at the time (in https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#what-kubernetes-is-not). They were never included in Kubernetes releases.

I'm working to document those historical reasons, as well as technical and non-technical criteria. The WIP doc is here:
https://docs.google.com/document/d/1JZ6WQhBOecKViW_Fa6JMxV6jppy4ZhsJ-ULBCgH43mQ/edit?ts=5c479ea4#

Once 1.14 issues are under control, I'll work on converting that to a PR, with more explanatory text.

Member:

Another example: kubeadm, which is in releases, and is also adding commands.

Member:

Went ahead and created a PR for the scope document:
#3180

Contributor:

I did not read that in Brian's comment; I've seen it more as a question than a statement. A question that has not been answered.

Contributor:

In the charter it notes:

The scope includes both low level tooling that may be used by scripts ...

If scripts are meant to use kubectl (and they do today) doesn't that make the commands, flags, and other arguments an API to those scripts?
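
(Illustrative aside, not from the thread: a script like the following depends on kubectl's flags and output format exactly the way it would depend on an API contract.)

```sh
# Breaks if the flag names or the `-o name` output format ever change.
for node in $(kubectl get nodes -o name); do
  kubectl describe "$node" | grep -i taints
done
```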

Member:

Yes, and they are included in https://github.com/kubernetes/community/blob/master/sig-architecture/api-review-process.md#what-apis-need-to-be-reviewed

What parts of a PR are "API changes"?

  • Configuration files, flags, and command line arguments are all part of our user and script facing APIs and must be reviewed.

Contributor:

That's what the SIG is following on a daily basis.

as a separate tool, as a kubectl extension, or as a library is a technical
decision made by the SIG and the owners of the code under development.

### Out of scope

SIG CLI is not responsible for command-line tools built and maintained by other
SIGs, such as kubeadm, which is owned by SIG Cluster Lifecycle. SIG CLI is not
responsible for defining the Kubernetes API that it interfaces with. The
Kubernetes API is the responsibility of SIG API Machinery.
- SIG CLI is not responsible for tools developed outside of the
SIG (even if they are part of the broader Kubernetes project).
- SIG CLI is not responsible for kubectl subcommands developed outside of the
SIG (even if they are developed through kubectl extension mechanisms).
- SIG CLI is not responsible for defining the Kubernetes API or Resource Types
that it interfaces with (which are owned by SIG Apps and SIG API Machinery).
- SIG CLI is not responsible for integrating with *specific tools* or APIs
developed outside of the Kubernetes project.
Contributor:

Can we add application management as something that is out of scope? There are two reasons for this...

  1. Things with this level of functionality should be server side. This is even called out in the k8s scope doc Brian started. That way other tools and even things in other languages can use them. They are part of the REST API.
  2. In SIG Apps we are working on the Application CRD/Controller for this. It's already scoped and being worked on in some capacity using the existing server-side direction.

Member:

We all need to work together on this project. SIG CLI is one of the project's horizontals. It supports commands specific to scheduling, autoscaling, node, API machinery, and workloads, for example. SIG Apps should work together with SIG CLI to ensure kubectl supports what is needed in that domain.

Member Author:

Do you have a link to a formal definition of what qualifies as application management for this context? The way I have used the term in the past, many of the existing commands would fall into that category.

Additionally, in the updated charter I have written that kubectl is a proving ground for new API features. I could imagine a development path where some Application Management functionality could be started in the client and be moved to the server as has been done for other functionality. (not suggesting that we do this, but I don't know what the plans for AM are or what the owners of the area might want)

Contributor:

There are several reasons I wouldn't want to put something in kubectl for application management right now.

  1. The App Def WG figured out that interoperability was a high priority and it's something SIG Apps adopted to support the various tools. Tool authors in this space wanted interoperability. It is now in the SIG Apps charter. If something is in kubectl it's not very interoperable. For example, you can't use it in Dashboard. Applications deployed in one tool should be viewable in another tool and even deleted via a 3rd tool.
  2. Work has already started on app management via a CRD/Controller (leveraging kubebuilder). We started at the API layer based on a start from @ant31.
  3. The more we use CRDs and controllers the more we learn what is needed to move them to beta and GA. This is a good place to dog food. kubectl being better with these, and not just app management, would benefit more people.
  4. Do we have a traceable need for a significant number of users?

Member:

kubectl rollout might be considered "application management". OTOH, like the CRUD commands, apply and kustomize are general-purpose and applicable to all resource types, so are not.

Anyway, as mentioned elsewhere, kubectl can configure autoscaling, cordon nodes, expose services, launch and attach to running pods, and interact with all other resource types in the system -- it's a horizontal. So that will include applications, too, to the degree that concept exists in Kubernetes.

Contributor:

Additionally, in the updated charter I have written that kubectl is a proving ground for new API features.

After thinking about this a little more, I think this may be a bad idea. In the API we have generally moved this to CRDs and controllers. Proving things out in core generally doesn't happen anymore unless it's an exception (do we even have those?).

I think the same would be good for kubectl. Plugins would be a good proving ground for new features. Or, an additional app that works via pipes.

This would help to flesh out plugins and continuously look at how kubectl works with pipes.

Is it still a good idea to use kubectl as a proving ground for new features?

Member Author (@pwittrock, Feb 4, 2019):

@mattfarina I suspect we are more closely aligned on vision than on terminology or the levels of abstraction / context we are referring to.

  • In sig-cli contexts I will refer to both authoring of Resource Config and CRUD operations on Resources as Application Management. (Also higher level CRUD-ish things such as Apply and --prune)
  • In sig-apps contexts, I could imagine referring to lifecycle hooks and packaging + dependency management as Application Management.

We could probably interpret the same statements as having quite different meanings and implications.

While Application Management is conceptually about a collection of related user journeys, the sig charters are oriented around ownership of specific mechanics. This seems like an impedance mismatch that is steering us towards a weird place. Development of functionality between sig-cli and sig-apimachinery has been relatively fluid, due in no small part to heroes like @deads2k, @liggitt and @apelisse (and others) who have developed ownership in the mechanics owned by both sigs and focus their efforts on the problems faced by users.

This leads me to the conclusion that it may have been a mistake to try to make this overly concrete; instead, focusing on building trust and cross-SIG leadership between the app management stakeholders would be a better strategy than trying to silo ownership of the solution space.

w.r.t. proving new API features, I agree that kubectl shouldn't be proving out new specific Resource APIs. Things like server-side apply and dry-run couldn't have been done as CRDs today so I think we should leave the door open for those types of things. A sig apps example would be kubectl rollout status which cuts across the workloads APIs.

Contributor:

w.r.t. proving new API features, I agree that kubectl shouldn't be proving out new specific Resource APIs. Things like server-side apply and dry-run couldn't have been done as CRDs today so I think we should leave the door open for those types of things. A sig apps example would be kubectl rollout status which cuts across the workloads APIs.

If something like kubectl rollout status were implemented today I'd want it to be as a plugin or external tool. Maybe even more than one during experimentation to see what works for what cases.

To continue the example, what if someone did it the way kubectl does it today and someone else did one based on labels where you could see a rollout of a deployment and statefulset that were part of the same thing being rolled out at the same time?

Sometimes it's useful to do multiple experiments, see how they fare against each other, and see what useful bits come of them.

I'm not criticizing kubectl rollout status or the process it went through. I'm just looking at it as an example of how I would approach things today.

Wouldn't it be useful to do this kind of thing as a plugin (or plugins) and then see how it fares in usefulness (usability + utility) first?

I agree we need to have cross SIG engagement on these forms of issues.

Member Author:

w.r.t. rollout (if it didn't exist today), I could get behind the idea of doing it as a separate binary (plugin or otherwise) to validate ideas and experiment. If workloads + machinery + cli agreed upon an approach, we could evaluate the best way to proceed.


## Roles and Organization Management

@@ -48,6 +66,5 @@ Option 1: by [SIG Technical Leads](https://github.com/kubernetes/community/blob/
[Kubernetes Charter README]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/README.md
[sig-governance]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md
[README]: https://github.com/kubernetes/community/blob/master/sig-cli/README.md
[kubectl and kustomize]: https://github.com/kubernetes/community/blob/master/sig-cli/README.md#subprojects
[Test Playbook]: https://docs.google.com/document/d/1Z3teqtOLvjAtE-eo0G9tjyZbgNc6bMhYGZmOx76v6oM

81 changes: 81 additions & 0 deletions sig-cli/design-principles.md
@@ -0,0 +1,81 @@
# Kubectl and SIG CLI Design Principles

## Focus

kubectl provides Resource and Resource Config oriented commands
(as opposed to some other central concepts, such as packaging, integration, etc).
This includes but is not limited to commands to generate, transform, create,
update, delete, watch, print, edit, validate and aggregate information
about Resources and Resource Config. This functionality may be either
declarative or imperative.
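
A minimal sketch of that declarative / imperative split (file and resource names are placeholders, not from the document):

```sh
# Declarative: desired state lives in Resource Config and is applied.
kubectl apply -f deployment.yaml

# Imperative: the operation itself is stated on the command line.
kubectl create deployment my-app --image=nginx
kubectl delete deployment my-app
```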
Contributor:

This focus appears to have crossover with the ecosystem. If this is talking about the CRUD of configuration files locally (outside of a cluster), with more feature intent than we have now, would it be in a similar space with existing configuration managers?

Member:

kubectl is not in a similar space with existing configuration managers.

Kubectl provides a reference implementation for interacting with the API, with relatively low-level building blocks. We've had a long-standing position (kubernetes/kubernetes#12143) that widely used, general-purpose functionality should be implemented eventually in the server. Past examples of functionality moving to the server are Deployment (rolling-update) and garbage collection (reaping). An ongoing example is server-side apply. Apply and strategic merge patch were important mechanisms pioneered by kubectl. OpenAPI-based validation is another example.

We've long intended kubectl's implementation to be available in library form as well as a command (kubernetes/kubernetes#7311), but that's been harder than expected, and has been preempted by other priorities, such as extracting kubectl from k/k, which is still an eventual goal, as productivity in k/k is low.

The reference implementation demonstrates how to use the API, including strategic merge patch, as well as providing a simple getting started tool and avoiding complexities of documenting the system with just, for instance, curl. It has long had (relatively simple) commands, such as run, for convenience of expected common operations. The other creation commands, especially create secret and create configmap, are in that category, as well. They help (esp. new users) not worry about schema details and yaml indentation.

kubectl's scope excludes packaging, dependency management, application publishing and discovery, lifecycle hooks, templating, configuration DSLs, and other things that configuration management tools do. And I haven't seen those tools do things, like apply, that kubectl does.

From the beginning (http://prs.k8s.io/1325), kubectl was intended to provide resource-oriented bulk declarative and imperative operations.

I wrote more about the kubectl design ethos in this comment:
#3164 (comment)

Contributor:

I appreciate the movement to features on the server side. For example, server side apply will help many tools by making the feature more accessible no matter the language.

We've had a long-standing position (kubernetes/kubernetes#12143) that widely used, general-purpose functionality should be implemented eventually in the server.

I wonder if this should be documented somewhere for clarity and reminder. Maybe as a SIG CLI principle?

While not in the scope of the charter, I wonder if that means the kustomization functionality needs to be moved server side to follow this position. Using this feature to test the position.

kubectl's scope excludes packaging, dependency management, application publishing and discovery, lifecycle hooks, templating, configuration DSLs, and other things that configuration management tools do. And I haven't seen those tools do things, like apply, that kubectl does.

There's a difference between how and what. Templates, configuration DSLs, and so forth are how a tool is implemented but not what it does.

For example, a tool that does overlays (like kubectl with kustomization files) that then documents how to do multi-environment application deployments using that would be considered doing configuration management, right? It's dealing with configuration management use cases and workflows (some details on the what) but using different implementation design patterns (the how).

Phil has started to work on better documentation for kubectl, which I applaud. It's needed and I look forward to more work.

But, one angle to the direction can be seen in the section titles around build, delivery, and deployment. The layout is for dealing with use cases from building images through deploying changes to varying environments. Mix in overlays to have environment specific config and you have configuration management, right?

Note, I'm going deep with this because how kubernetes engages with the ecosystem is important, IMHO, to its success. Configuration management, with the legacies companies have (both vendors and consumers), is one of those areas people debate and is a whole other change beyond adopting k8s and cloud native. And, the ecosystem is vital to k8s success just like Linux generally needs GNU.

Member:

Quick comment: The server-side principle has been documented for a long, long time.

This should be merged with the new principles doc:
https://github.com/kubernetes/community/blob/master/contributors/devel/kubectl-conventions.md#principles

Another PR is open to move it, along with the other docs in devel.

Member:

And the core of kustomize, strategic merge patch, IS on the server side.

Identification of resource cross-references is something that has been identified as needing to be published server-side. Other functionality (e.g., policy enforcement) could benefit from it also.
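
(Illustrative aside, not part of the original comment: in a command like the one below, the merge is performed by the API server using strategic merge patch semantics; kubectl only constructs and sends the request. The deployment and container names are hypothetical.)

```sh
kubectl patch deployment my-app --type=strategic \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"nginx:1.15"}]}}}}'
```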

Member:

On deployment patterns:

Since 1.0 we've had examples of some deployment patterns, such as for canary deployments:
https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments

It doesn't preclude other patterns.

Actually, lots of our documentation needs to be updated and expanded. Few contribute to it, other than to document new features. So, yes, Phil should be applauded, for the new kubectl book, for his past work on kubectl documentation (e.g., https://kubernetes.io/docs/concepts/overview/object-management-kubectl/overview/), for the generated API documentation (https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/), and other contributions to the documentation, including the overall site design.

Member Author (@pwittrock, Feb 1, 2019):

Thanks. FYI the book is really an exploration inspired by the Rust documentation. Folks have seemed to like it, but I don't think anyone has gone deep on the content. Nothing in there should be taken as a concrete proposal until there is a real proposal to publish it. Sig docs has expressed positive feelings about it the last time I demoed it, but a lot has changed since then. It has as many good ideas as bad ones. I am hearing at least sig-cli, sig-arch and sig-docs should be included when it moves from exploratory to a concrete goal.

Contributor:

@pwittrock FWIW, I appreciate the style you're going for and think content with this kind of flow is needed. The question that comes to my mind is how much of this content should be in the k8s project?

For example, the docs have a section for building images. If someone is going to create containers to run in Kubernetes this is very much needed. But, neither kubernetes nor kubectl do this. And, there are numerous builders available. So, how does the k8s project choose which builder to privilege by being in these examples? Should we even be in a place to do that?

I can see why SIG Docs likes the style. The choice of gitbook is one I appreciate, too.

Member Author:

So, how does the k8s project choose which builder to privilege by being in these examples? Should we even be in a place to do that?

I don't think what you are describing is what I envisioned for that section. I updated the book to demonstrate something close to my original thinking - e.g. decisions the user will make when building an image that will impact how they use the tooling - e.g. using a digest and updating the image references vs using latest with an imagePullPolicy of Always.

I'd like to see more of this sort of discussion take place. I ask these sorts of questions to myself, and being able to discuss them with a quorum of stakeholders would ensure we have the authority to make decisions on these matters. Perhaps we could drive this out of a subproject within sig-docs and invite stakeholders from workloads / apimachinery / kubectl to participate.


Additionally kubectl provides:
- commands targeted at sub-Resource APIs - e.g. exec, attach, logs
- commands targeted at non-Resource Kubernetes APIs - e.g. openAPI, discovery, version, etc
- porcelain commands for simple / common operations where no discrete
API implementation exists - e.g. `run`, `expose`, `rollout`, `cp`, `top`, `cordon`,
`drain` and `describe`.
- porcelain functionality working with Resource Config files, urls, etc -
e.g. `-f -R` flags, Kustomization `bases` and `resources` (sketched below).
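
A sketch of that Resource Config porcelain (paths and the URL are hypothetical; `kubectl apply -k` is only available in releases where kustomize support is built in):

```sh
# Local files and directories of Resource Config, recursively.
kubectl apply -f ./config/ -R

# Resource Config referenced by URL.
kubectl apply -f https://example.com/manifests/app.yaml

# A kustomization layering bases and resources.
kustomize build ./overlays/staging | kubectl apply -f -
```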

*kubectl is part Swiss Army knife and part reference implementation for interacting with the API
and driving the future direction of the API through identifying API needs and addressing them
client-side.*

As such, it is also a proving ground for widely used functionality that may be moved
into the server. Past examples of kubectl functionality that moved into the server include
garbage collection, rolling updates, apply, "get" and dry-run.
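
One concrete illustration of functionality migrating to the server (a sketch; the flag spelling has changed across kubectl releases, and older releases used `--server-dry-run`):

```sh
# Client side: kubectl evaluates the change locally.
kubectl apply -f deployment.yaml --dry-run=client

# Server side: the API server runs defaulting and admission but persists nothing.
kubectl apply -f deployment.yaml --dry-run=server
```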

It may also include porcelain that bridges standard non-Kubernetes native solutions to Kubernetes
native solutions - e.g. `docker run` -> `kubectl run`, `EXPOSE` -> `kubectl expose`.
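
For example (names and image are placeholders, and what `kubectl run` creates has varied across releases):

```sh
# Docker-native workflow...
docker run -d -p 80:80 nginx

# ...and rough kubectl counterparts.
kubectl run my-nginx --image=nginx --port=80
kubectl expose deployment my-web --port=80   # EXPOSE becomes a Kubernetes Service
```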

## Workflows

The scope of CLI Tools focuses on enabling declarative and imperative workflows
for invoking Kubernetes APIs and authoring Resource Config. Tools provide
commands for both generalized (e.g. create resource from Resource Config) tasks and
specialized (e.g. drain a node, exec into a container) tasks.

It is the philosophy of the tools developed in SIG CLI to facilitate working
directly with the Kubernetes APIs and Kubernetes style Resources, and to the
extent possible, provide a transparent experience for how commands map to
Kubernetes APIs and Resources.

Building new abstractions and concepts for users to interact with in place of
the Resource APIs rather than access them (e.g. through generalized templating,
DSLs, etc) is not a goal of SIG CLI.

## Extensibility

CLI prefers to develop commands in such a way that they can provide a native
experience for APIs developed as extensions. This requires a philosophy of
minimizing resource specific functionality and enabling it through data
published by the cluster rather than hard-coding the API data into the tools.
This includes developing specific extension mechanisms for kubectl such as plugins.
Extensibility is a design preference, not a mandate, and should not come at a practical
cost impacting the UX or functionality of the tool.
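
A minimal sketch of the plugin mechanism mentioned above (the plugin name is made up): any executable named `kubectl-<name>` on the user's `PATH` is surfaced as `kubectl <name>`.

```sh
# Create a trivial plugin somewhere on PATH.
cat > /usr/local/bin/kubectl-hello <<'EOF'
#!/usr/bin/env bash
echo "hello from a kubectl plugin"
EOF
chmod +x /usr/local/bin/kubectl-hello

# Invoke it through kubectl as if it were a built-in subcommand.
kubectl hello
```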

CLI prefers to develop commands in such a way that enables tools and solutions
developed independently (e.g. outside the SIG, K8S project, etc) to interoperate
with the CLI tools - e.g. through pipes or wrapping / execing. This is aligned
with the goal of remaining close to the Kubernetes APIs.
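
For instance (the external tool named here is hypothetical), interoperation through pipes can look like this:

```sh
# An independently developed tool renders Resource Config; kubectl applies it.
some-config-tool render ./app | kubectl apply -f -

# kubectl output post-processed by an external tool.
kubectl get pods -o json | jq -r '.items[].metadata.name'
```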

## Documentation

SIG CLI is responsible for developing documentation to accompany kubectl that both describes
the functionality and provides techniques for effective usage.

#### Examples of Functionality In Kubectl

Following are examples of functionality that is in kubectl; a few are sketched as commands after the list.

- Invoking Kubernetes APIs: Resource APIs, SubResource APIs, Discovery Service, Version, OpenAPI
- Pre and Post processing API Resource Config, API Requests and API Responses
- Aggregating multiple API Responses and post processing them
- Collapsing multiple manual steps into a command
- Generating Kubernetes Resource Config locally or creating Resources remotely
- Transforming Kubernetes Resource Config locally or patching remotely
- Blocking on propagation of an event or change to the cluster
- Referencing a collection of either remote or local Resource Config
- Configuring how to talk to a specific cluster from the CLI
- Selecting which API group/version to invoke if ambiguous in the context of the command
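
A few of the items above expressed as concrete commands (a sketch; resource and context names are hypothetical):

```sh
# Blocking on propagation of an event or change to the cluster.
kubectl rollout status deployment/my-app
kubectl wait --for=condition=Available deployment/my-app --timeout=120s

# Aggregating multiple API responses and post-processing them.
kubectl get pods --all-namespaces -o wide

# Configuring how to talk to a specific cluster from the CLI.
kubectl config use-context my-cluster

# Selecting which API group/version to invoke when the name is ambiguous.
kubectl get deployments.v1.apps
```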