
Allow Tags on Containers #16

Closed
MikeSpreitzer opened this issue Jun 28, 2016 · 19 comments
Assignees
Labels
sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@MikeSpreitzer
Member

MikeSpreitzer commented Jun 28, 2016

Description

This feature adds a way for users to control metadata tags on their containers. Each of the various container runtimes has a way to put some sort of user tag on containers. Giving users control over this is useful for scenarios in which the user wants to integrate Kubernetes with other platforms and facilities.

Progress Tracker

  • Before Alpha
    • Design Approval
      • Design Proposal. This goes under docs/proposals. Doing a proposal as a PR allows line-by-line commenting from community, and creates the basis for later design documentation. Paste link to merged design proposal here: PR-NUMBER --- See Add proposal for container tags kubernetes#27964, which is not yet merged
      • Initial API review (if API). Maybe same PR as design doc. PR-NUMBER
        • Any code that changes an API (/pkg/apis/...)
        • cc @kubernetes/api
    • Write (code + tests + docs) then get them merged. ALL-PR-NUMBERS
      • Code needs to be disabled by default. Verified by code OWNERS
      • Minimal testing
      • Minimal docs
        • cc @kubernetes/docs on docs PR
        • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
        • New apis: Glossary Section Item in the docs repo: kubernetes/kubernetes.github.io
      • Update release notes
  • Before Beta
    • Testing is sufficient for beta
    • User docs with tutorials
      • Updated walkthrough / tutorial in the docs repo: kubernetes/kubernetes.github.io
      • cc @kubernetes/docs on docs PR
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Thorough API review
      • cc @kubernetes/api
  • Before Stable
    • docs/proposals/foo.md moved to docs/design/foo.md
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Soak, load testing
    • detailed user docs and examples
      • cc @kubernetes/docs
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off

FEATURE_STATUS is used for feature tracking and to be updated by @kubernetes/feature-reviewers.
FEATURE_STATUS: IN_DEVELOPMENT

More advice:

Design

  • Once you get LGTM from a @kubernetes/feature-reviewers member, you can check this checkbox, and the reviewer will apply the "design-complete" label.

Coding

  • Use as many PRs as you need. Write tests in the same or different PRs, as is convenient for you.
  • As each PR is merged, add a comment to this issue referencing the PRs. Code goes in the http://github.com/kubernetes/kubernetes repository,
    and sometimes http://github.com/kubernetes/contrib, or other repos.
  • When you are done with the code, apply the "code-complete" label.
  • When the feature has user docs, please add a comment mentioning @kubernetes/feature-reviewers and they will
    check that the code matches the proposed feature and design, and that everything is done, and that there is adequate
    testing. They won't do detailed code review: that already happened when your PRs were reviewed.
    When that is done, you can check this box and the reviewer will apply the "code-complete" label.

Docs

  • Write user docs and get them merged in.
  • User docs go into http://github.com/kubernetes/kubernetes.github.io.
  • When the feature has user docs, please add a comment mentioning @kubernetes/docs.
  • When you get LGTM, you can check this checkbox, and the reviewer will apply the "docs-complete" label.
@philips
Contributor

philips commented Jun 28, 2016

Is there an existing discussion about this? This is about applying labels to the containers instead of to the pods? So I would need to have a pod object by name then query all of the labels on the containers in the pod?

@jbeda
Contributor

jbeda commented Jun 28, 2016

Reading between the lines, this is to allow tags to be pushed down to individual (Docker) containers to enhance the tools that know about Docker but not about Kubernetes. @MikeSpreitzer -- can you help motivate this with some examples of the systems that you'd like to integrate with?

@MikeSpreitzer
Member Author

I think I brought this up in a SIG Node meeting, but the discussion did not get very far there.
The particular use case in front of me right now is that my team is working on a hosted multi-tenant service that offers both Kubernetes and Docker Swarm. We have an API interceptor for both, and it transforms API operations as necessary to implement (in conjunction with the underlying managers) the multi-tenancy. We have enhanced the underlying Swarm manager to enforce tenant isolation based on a Docker container label. We view Docker Swarm as a lower level API through which a tenant should be able to see all of her containers, regardless of whether they were created through the Kubernetes or Swarm APIs. So we want the containers created by Kubernetes to also get the Docker container label that identifies the tenant.

@MikeSpreitzer
Member Author

Another example we have considered is, again in a system with both Kubernetes and other platforms, tooling (such as monitoring) that applies to all containers and wants to get identifying information in a uniform way.

@bgrant0607
Member

cc @kubernetes/sig-node

@duglin

duglin commented Jun 29, 2016

@philips
Contributor

philips commented Jul 15, 2016

Reading that thread, @duglin, it seems that achieving what you want from this feature isn't actionable until Docker makes this metadata mutable: https://groups.google.com/d/msg/kubernetes-sig-node/gijxbYC7HT8/uOXwZ-J6AwAJ

I certainly don't think we should pursue a path where we push metadata down in a stale manner.

@MikeSpreitzer
Member Author

@philips: This was always about immutable metadata being applied to containers when they are created.

@MikeSpreitzer
Member Author

In the discussion in the SIG Node meeting of July 12, 2016 it was agreed that a better approach involves hooks in the revised container runtime (which is being pursued under #28789). So I am closing this feature proposal and will work on the other approach.

@vishh
Contributor

vishh commented Jul 15, 2016

Yeah. Implementing a custom docker runtime based on the new container runtime API in kubelet would be the recommended strategy for this issue.


@MikeSpreitzer
Member Author

Does an operator have to implement a whole new runtime in order to apply Docker labels to the containers? That seems a bit much.

@idvoretskyi idvoretskyi modified the milestone: v1.4 Jul 18, 2016
@vishh
Contributor

vishh commented Jul 18, 2016

Docker labels will be applied automatically once Docker supports label updates. Any other customizations until then will have to happen via custom runtimes.


@yujuhong
Contributor

@MikeSpreitzer if all you want is to present users/admins with the Kubernetes labels associated with each container, you can easily get the pods and their statuses via the apiserver and map the labels to the container IDs before presenting this information. For example, write a script called docker_wrapper; users run docker_wrapper ps, and the script annotates the output with the labels it gets from the k8s apiserver. This might not be pretty or fast, but it should work. Just my two cents.
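A rough sketch of what such a wrapper could look like (function names are hypothetical, and this assumes `kubectl` and `docker` on the PATH; the containerID format is the usual `docker://<64-hex-id>`):

```python
import json
import subprocess

def container_labels_from_pods(pods):
    """Map short container IDs to the labels of the pod that owns them.

    `pods` is the decoded output of `kubectl get pods -o json`.
    """
    labels = {}
    for pod in pods.get("items", []):
        pod_labels = pod.get("metadata", {}).get("labels", {})
        for status in pod.get("status", {}).get("containerStatuses", []):
            # containerID looks like "docker://<64-hex-id>"
            cid = status.get("containerID", "").split("//")[-1]
            if cid:
                labels[cid[:12]] = pod_labels  # docker ps shows 12-char IDs
    return labels

def annotated_ps():
    """Print `docker ps` output with the owning pod's labels appended."""
    pods = json.loads(subprocess.check_output(
        ["kubectl", "get", "pods", "--all-namespaces", "-o", "json"]))
    labels = container_labels_from_pods(pods)
    ps = subprocess.check_output(
        ["docker", "ps", "--format", "{{.ID}}\t{{.Names}}"])
    for line in ps.decode().splitlines():
        cid = line.split("\t", 1)[0]
        print(line, json.dumps(labels.get(cid, {})))
```

As noted, this annotates the UI only; it does not make the labels real Docker container labels.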

We also discussed an alternative in the sig-node meeting: you can add your own daemon that listens for requests, adds labels (which you can get by watching the apiserver), and proxies them to the Docker daemon. This doesn't require any change to the kubernetes code.

@MikeSpreitzer
Member Author

My present use case is a cluster operator who uses Kubernetes with the Docker runtime and wants to prescribe an additional Docker container label for every Docker container created by Kubernetes, with the label's value derived by a certain computation on certain attributes of the pod that the container is part of. It is important that the extra label actually be one of the Docker container labels, not just integrated in a custom UI.
It was noted in the July 19 SIG Node meeting that this could be accomplished using the current Kubernetes and Docker if the cluster operator creates a Docker API proxy that transforms every Docker container creation operation to add the desired label.
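The core of such a proxy is rewriting the JSON body of each `POST /containers/create` request before forwarding it to the Docker daemon. A minimal sketch of just the body rewrite (the label key and function name are illustrative assumptions, not part of any proposal):

```python
import json

def inject_tenant_label(create_body, tenant):
    """Add a tenant label to a Docker `POST /containers/create` JSON body.

    `create_body` is the raw request body as bytes; returns the
    rewritten body, leaving any existing labels in place.
    """
    spec = json.loads(create_body)
    labels = spec.get("Labels") or {}
    labels["com.example.tenant"] = tenant  # hypothetical label key
    spec["Labels"] = labels
    return json.dumps(spec).encode()
```

A full proxy would wrap this in an HTTP server that forwards every other Docker API request to the daemon's socket unchanged.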

@yujuhong
Contributor

It was noted in the July 19 SIG Node meeting that this could be accomplished using the current Kubernetes and Docker if the cluster operator creates a Docker API proxy that transforms every Docker container creation operation to add the desired label.

That's correct, and I think this is the preferred approach until we can update docker labels.

@idvoretskyi
Member

@MikeSpreitzer any progress on this feature?

@idvoretskyi idvoretskyi added the sig/node Categorizes an issue or PR as relevant to SIG Node. label Sep 29, 2016
@euank

euank commented Oct 12, 2016

I don't think this belongs in the 1.5 milestone. As far as I'm aware, there are no concrete plans in sig-node to work on this feature until an upstream docker issue (moby/moby#21721) is resolved. Even once that's resolved, I don't think this has gone through prioritization or reached broad consensus.

@idvoretskyi
Member

@euank thank you for clarifying.

@idvoretskyi idvoretskyi modified the milestones: next-candidate, v1.5 Oct 12, 2016
@MikeSpreitzer
Member Author

Closing this, because --- as noted in the July 20 and 21 comments --- a different approach to supporting the use case is favored.

ingvagabund pushed a commit to ingvagabund/enhancements that referenced this issue Apr 2, 2020
cluster logging log forwarding proposal
brahmaroutu pushed a commit to brahmaroutu/enhancements that referenced this issue Jul 22, 2020
* change CSI driver Pod icon to DS

* drop backRef list from Bucket, add ns list to Class
howardjohn pushed a commit to howardjohn/enhancements that referenced this issue Oct 21, 2022
Move promotion process from issues to a file structure
k8s-ci-robot pushed a commit that referenced this issue Feb 8, 2024
add notes about storage version and how the storage version is selected