
Propose feature for troubleshooting running pods #35584

Closed
wants to merge 9 commits

Conversation

@verb (Contributor) commented Oct 26, 2016

What this PR does / why we need it: Initially described in #27140, this PR proposes a feature to better support minimal container images (i.e. images built from scratch) by adding a new `kubectl debug` command to run a container in a pod's namespaces or create an ad-hoc copy of a pod. This is useful for developers writing native Kubernetes applications who prefer not to ship an entire OS image.

Special notes for your reviewer: SIG Node asked me to send this PR to facilitate further discussion.

Release note: None, this PR contains only a proposal.



@k8s-ci-robot (Contributor)

Can a kubernetes member verify that this patch is reasonable to test? If so, please reply with "@k8s-bot ok to test" on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands will still work. Regular contributors should join the org to skip this step.

If you have questions or suggestions related to this bot's behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-github-robot k8s-github-robot added kind/design Categorizes issue or PR as related to design. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. release-note-label-needed labels Oct 26, 2016
@bgrant0607 (Member)

cc @kubernetes/sig-node

@bgrant0607 bgrant0607 assigned dchen1107 and unassigned bgrant0607 Oct 26, 2016
@vishh (Contributor) commented Oct 31, 2016

@k8s-bot ok to test

* Is this compatible with Admission control plugins that restrict what images can run on a cluster?
* Would it be possible to Image Exec into pods that are in a terminal state to determine why they've failed?
Member

what security context is used by the new container? how does that interact with PodSecurityPolicy?

Member

cc @kubernetes/sig-auth

while the latter would provide binaries for kubectl exec.

Adding a container to a running pod would be a large change to Kubernetes, and
there are several edge cases to consider such as resource limits and monitoring.
Member

and admission that locks security context at pod creation time

@dchen1107 (Member)

Good idea, but we need to iron out many details. Here are several off the top of my head:

  1. How does this container interact with the pod lifecycle?
  2. How is this debug container presented to the end user? Shown as another container, or hidden like today's podInfraContainer?
  3. How is the resource management for this extra consumption handled? Is it charged to the existing pod, or is the pod cgroup updated with extra overhead?

@derekwaynecarr (Member)

Per my comments in the sig-node meeting:

  1. The proposal as defined allows users to launch new containers in a pod that are not discoverable via consumers of the API. This is problematic for us, as we have built security solutions that audit the cluster based on the set of containers/images seen running via the API. We should not allow containers associated with a pod to run if they are not discoverable when getting the status of the pod.
  2. I would prefer we explore something similar to init containers by allowing some type of modification to the pod spec for these ephemeral containers that are not part of the core pod lifecycle.
  3. I have some concerns about how SELinux and other tools would work here; cc @pmorie

@smarterclayton (Contributor)

I agree that this requires discoverability by the API. The status of this container is important.

A few things:

  1. Can we start this container in a terminated (success, failed, or error) pod? There are many use cases for debugging that, but I also don't want to have to keep the pod's volumes around forever.
  2. Should this container be allowed to change the pod volumes? I don't think so, which may limit some debugging scenarios.
    1. EXCEPTION: It should be possible to bind mount the other containers' filesystems in without changing the pod volumes.
  3. If we had this as transient-container+attach, there's less of a need for exec (because you could start bash and attach), which in many cases people are abusing today.
  4. Based on statements we have made before, I think it is inappropriate for the pod's resource limits to change as a result of this, but that makes it very hard to run one of these inside a pod that is at its maximum resources (the main process has used all available pod memory, for instance). Does that require pre-reservation?

@verb (Contributor Author) commented Nov 1, 2016

Summarizing feedback from the SIG Node meeting:

@derekwaynecarr added a firm requirement for the ability to introspect container images.

The following questions were addressed:

  1. Will this provide a view into a container's mount namespace or simply container filesystems? Only container filesystems, since there's no way to support mounting the actual mount namespace as a subdirectory (if this becomes possible we should support it).
  2. How should resources be accounted? There was weak consensus that it's reasonable to expect users to include resources consumed by a kubectl exec in the initial pod sizing.
  3. Dangling pod problem with kubectl run --rm? This would be different from kubectl run in that it would be implemented server-side. That is, the kubelet must ensure there are no dangling containers.

The following open questions were added:

  1. How does this affect the pod lifecycle?
  2. What are the edge cases of the current kubectl exec? Can you re-attach to a running exec? How would this be implemented for an image exec?
  3. Userns + host bindmounts, depending on how userns is implemented, could be messy (hostmount implies no userns, but userns relabeling might be enabled for the whole pod).
  4. How does this interact with SELinux?

An alternative proposed during the meeting was to allow adding a container to the podspec. This would be a larger change, but perhaps also more worthwhile. It could be simplified by introducing a new type of ephemeral container similar to the implementation of init-containers. We ran out of time before discussing whether this solution could meet the other requirements such as container filesystem access.

It's not clear where to go next, so I'm going to work on addressing some of the questions raised by the feedback and try to estimate how much work would be required to support modifying the podspec of a running pod.

@smarterclayton (Contributor)

We've discussed the possibility of new container types before and have been cautious about them. I think drawing up the use cases and demonstrating that they are more difficult to reason about if implemented differently is key. We'd probably want to find supporting use cases, like the folks at Huawei and Red Hat who wanted to be able to commit containers in pods (so they could have a build system for images that uses kube to manage the containers). We'd need to rationalize why exec (which is heavily used today for pod support tasks like backup, debugging, builds, etc.) is different from these containers.


@timstclair left a comment

There was weak consensus that it's reasonable to expect users to include resources consumed by a kubectl exec in the initial pod sizing.

This might be acceptable for an initial implementation, but I think it's far from ideal. If you're running a very large cluster, e.g. 50,000 pods, even a small constant overhead will add up very quickly. It also requires a lot of foresight, which can't always be counted on when debugging something that failed.


We could instead modify CRI's `CreateContainer` to include a list of containers
IDs to "link" to the newly created container, but then we'd have to repeat the
logic in each of the container backends.


I think this might be unavoidable. If the container runtime uses something other than Linux namespaces for isolation (e.g. VM containers), the Kubelet wouldn't be able to manage it.

Contributor Author

That's an interesting point. How would pods look under VM containers? Would they still have a shared (e.g. network) namespace somehow? Or would they be pods in name only?

Member

@verb e.g. for hyper, all containers belonging to the same Pod run in the same VM and share the same network namespace. Different Pods run in different VMs.
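
For concreteness, the "link" alternative mentioned in the quoted excerpt above would have meant extending the CRI roughly like this (a hypothetical sketch; `LinkedContainerIds` is invented for illustration and was never adopted):

```go
package crisketch

// ContainerConfig stands in for the real CRI message (image, command,
// mounts, etc. elided).
type ContainerConfig struct{}

// CreateContainerRequest mirrors the shape of the real CRI request, plus the
// hypothetical linking field discussed above.
type CreateContainerRequest struct {
	PodSandboxId string
	Config       *ContainerConfig
	// LinkedContainerIds would name existing containers whose namespaces the
	// new container should join. The drawback noted above: every runtime
	// backend would have to reimplement this linking logic itself.
	LinkedContainerIds []string
}
```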

* Is this compatible with Admission control plugins that restrict what images can run on a cluster?
* Would it be possible to Image Exec into pods that are in a terminal state to determine why they've failed?


From the sig-node meeting:

  • What happens if the connection is dropped?

Contributor Author

I looked into this a little today. The current behavior of exec is that the process continues to run but there doesn't seem to be any way to reattach to it. It doesn't get a signal and stdin is left open. My prototype for image exec behaves the same.

A debug container would behave the same, except perhaps that it could be attached with kubectl attach, with the kubelet removing the container when the pod is destroyed or when the process exits (potentially relying on the runtime to support automatically removing the container, a la "docker run --rm").

there are several edge cases to consider such as resource limits and monitoring.
Adding a volume to a running container would be a smaller change, but it would
still be larger than the proposed change and we'd have to figure out how to
build and distribute a volume of binaries.


Related: #831

@verb (Contributor Author) commented Nov 3, 2016

@smarterclayton Whom could I contact to ask about Huawei's and Red Hat's use cases?

@verb (Contributor Author) commented Nov 4, 2016

Status update: I was able to prototype adding a container to the spec of a running pod with surprisingly little effort. I need to think through the consequences and edge cases, but I'm optimistic this might be a cleaner solution.

Some thoughts regarding the problem I'm trying to solve:

  • Being able to attach/detach from the debugging shell is very nice.
  • The kubectl patch incantation required to do this isn't particularly user-friendly. I'd really want there to be an easier command.
  • This doesn't address the container filesystems, but that could potentially be added as a general option for a Container.
  • For troubleshooting pods, all I need is support for adding a container (i.e. it doesn't have to be ephemeral), but other use cases may require a way for the container to exit short of deleting the pod.

@liggitt (Member) commented Nov 4, 2016

The security and scheduling implications of making the pod spec mutable are concerning. Much has been built on the write-once nature of the security context, volume, port, and resource aspects of the pod spec.

@smarterclayton (Contributor)

Not being able to clean up terminated pods immediately will have lots of implications for cluster stability (for instance, we'll have to detach shared volumes almost immediately for use cases like jobs that reuse those). It's possible that the attached container could hold the pod from going to terminated state, but that has backwards compatibility concerns for clients.


@smarterclayton (Contributor)

Having to manage exec sessions via the API has been something I've been dreading (Docker ended up having to do it). I think it's likely we'll need to have the concept of "running sessions" whether it's exec or these new types of containers, so it's worth asking how much complexity we're going to add there (maybe we should consider these containers more like exec sessions, and just have the ability to define an exec environment that is either in an existing container or creates a new temporary container). But that still doesn't remove the need to know that they are there.


runtimes. We could implement the same create/start/attach/remove logic in rkt
and docker_manager, or we could save effort by linking this feature to the CRI
and return "Not Implemented" for the legacy runtimes.

@resouer (Contributor) Nov 4, 2016

@verb Could you please add a note about hypervisor-based runtimes like https://github.com/hyperhq/hyperd here or elsewhere? Generally we think this will not break hypervisor-based runtimes (though it may require refactoring on their side), but we should definitely keep them in mind.

Contributor Author

Sure, I will definitely add a note about hypervisor-based runtimes since it's come up multiple times. It's my intention, though, that any runtime implementing the CRI will be supported without modification (though we might have to add one or two small things to the CRI to accomplish that). This section is specifically about the two non-CRI runtimes that are implemented directly in the kubelet.

@k8s-github-robot k8s-github-robot added the do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. label Nov 10, 2016
@verb (Contributor Author) commented Nov 11, 2016

@smarterclayton's idea of exec sessions would address a lot of the usability concerns I have with modifying the podspec as a troubleshooting solution. I'm going to look a bit more into pod/container lifecycles then update the doc.

@apelisse apelisse removed the do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. label Nov 11, 2016
@k8s-github-robot

Adding label:do-not-merge because PR changes docs prohibited to auto merge
See http://kubernetes.io/editdocs/ for information about editing docs

@verb (Contributor Author) commented May 5, 2017

/release-note-none

@k8s-bot node e2e test this

(doesn't matter since this will be closed here and merged into the community repo, but it takes the red X off my dashboard)

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. and removed release-note-label-needed labels May 5, 2017
* Enable troubleshooting of minimal container images
* Allow troubleshooting of containers in `CrashLoopBackoff`
* Improve supportability of native Kubernetes applications
* Enable novel uses of short-lived containers in pods
Contributor

+1 this should not be a goal.


The status of a *debug container* is reported in a new `DebugContainerStatuses`
field of `PodStatus`, which is a read-only list of `ContainerStatus` that
contains an entry for every *debug container* that has ever run in this pod, and
Contributor

If many debug containers were created during the pod's lifetime, this would significantly increase the size of the pod object. Is this a concern?

Contributor Author

I don't think it's much of a concern, but we'll want to address this as part of garbage collection. Garbage collecting debug containers will cause this list to shrink. I think we'd want to make garbage collection configurable by cluster admins, but I'm not planning on this for the initial alpha release.

Alternatively, it might be reasonable to create some sort of MaxDebugContainersPerPod tuning parameter. What do you think?

Contributor

I think keeping a maximum number of "terminated" debug containers is an acceptable first step.

By the way, what's the purpose of keeping an ever-growing list of debug container statuses? We will soon have auditing for posterity anyway.

Contributor Author

This was attempting to solve a requirement from @derekwaynecarr about the ability to inspect debug containers. I hadn't heard about the auditing effort, but maybe that will better suit his requirements, and then we wouldn't need any changes to GC.
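
For illustration, the `DebugContainerStatuses` field under discussion might look roughly like this (a sketch based on the quoted proposal text; the json tag and the embedding are assumptions, not the final API):

```go
package apisketch

import v1 "k8s.io/api/core/v1"

// PodStatusWithDebug sketches the addition described in the proposal; in the
// proposal itself the field would live directly on v1.PodStatus.
type PodStatusWithDebug struct {
	v1.PodStatus

	// DebugContainerStatuses is read-only and holds one ContainerStatus for
	// every debug container that has run in this pod. Garbage collection of
	// debug containers (or a cap such as the MaxDebugContainersPerPod idea
	// above) would bound its growth.
	DebugContainerStatuses []v1.ContainerStatus `json:"debugContainerStatuses,omitempty"`
}
```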


*Running Debug* requires new functionality in the kubelet and a new `kubectl
debug` command that results in a new container being introduced in the running
pod. This *debug container* is not part of the pod spec, which remains
Contributor

Without modifying the pod spec, how does kubelet know to create the new debug container?

Contributor Author

It's created by a new API call. I'll clarify this in the doc.

Contributor

The doc has evolved over time, and the result is that many important points are not mentioned early enough. I'd suggest giving an overview of the mechanism before diving into specific details.

Contributor Author

You're right, I think a rewrite may be in order.


If the specified container name already exists, `kubectl debug` will kill the
existing container prior to starting a new one. It's legal to reuse container
Contributor

Why not error out and let users decide whether to kill the previous debug container or not? Replacing the container directly may cause confusion.

Contributor Author

I thought it would be a convenient way to kill or replace the container, but I agree the implicit replacement might be confusing. Perhaps it would be preferable to require the user to kill the process before attempting to replace the container.

I do feel strongly that it should be possible to replace containers that have already exited, though.

Contributor

I think asking the user before replacing it is more reasonable.

Another thing is that the container names cannot be reused as long as the container still exists, even if it already exited. You'll need to remove (GC) the container, which may cause the loss of one "DebugContainerStatus".

Contributor Author

I never needed to investigate this because it "just worked" when I did the naive implementation. The exited container is replaced and the DebugContainerStatus is lost.

Since this is closely related to GC, let's just remove it as a blocker for the alpha by disallowing container name reuse in all cases (for now).

lack of configuration and restart policy. The kubelet's additional
responsibilities include:

1. Start or restart a debug container in a pod when requested.
Contributor

Without modifying the pod spec, you'd be relying on kubelet to remember this request. If kubelet restarts before creating the container (or if the container crashes for other reasons), it will never be (re-)created without kubelet persisting the request somewhere. This may be okay but it should be stated clearly in this proposal.

##### Step 1: Container Type

Debug containers exist within the kubelet as a `ContainerStatus` without a
container spec. As with other `kubecontainers`, their state is persisted in the
Contributor

s/state/type ?

Contributor Author

I meant all state is currently persisted via the runtime.

Contributor

I wouldn't say we persist the state (which changes). We mostly persist the spec.

Contributor Author

By state I just meant "the list of containers and their metadata". I'll change this to "metadata" to make it more clear.
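
Since debug containers have no spec, the kubelet needs somewhere to persist each container's type; a label on the runtime container is one way to do that. A minimal sketch, assuming a label-based scheme (the key and values here are illustrative, not necessarily what was merged):

```go
package kubeletsketch

// containerTypeLabelKey is an illustrative runtime label key.
const containerTypeLabelKey = "io.kubernetes.container.type"

// debugContainerLabels returns the metadata persisted with the runtime
// container, letting the kubelet recognize debug containers after a restart
// without any entry in the pod spec.
func debugContainerLabels() map[string]string {
	return map[string]string{containerTypeLabelKey: "DEBUG"}
}

// isDebugContainer checks the persisted metadata of a runtime container.
func isDebugContainer(labels map[string]string) bool {
	return labels[containerTypeLabelKey] == "DEBUG"
}
```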

This step adds methods for creating debug containers, but doesn't yet modify the
kubelet API. The kubelet will gain a `RunDebugContainer()` method which accepts
a `v1.Container` and creates a debug container. Fields that don't make sense in
the context of a debug container (e.g. probes, lifecycle, resources) will be
Contributor

Please define the valid fields for a debug container.

Contributor Author

Done
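
A rough sketch of what `RunDebugContainer()` could look like, with the invalid fields cleared rather than rejected (the receiver, helper, and exact field list are assumptions based on this thread, not the merged code):

```go
package kubeletsketch

import v1 "k8s.io/api/core/v1"

// debugRunner stands in for the kubelet's runtime plumbing.
type debugRunner struct {
	startInPodSandbox func(podUID string, c *v1.Container) error
}

// RunDebugContainer accepts a v1.Container and starts it as a debug container
// in the existing pod sandbox, where it shares the pod's namespaces.
func (r *debugRunner) RunDebugContainer(podUID string, c *v1.Container) error {
	// Fields that don't make sense for a one-off debug container are cleared:
	// no probes or lifecycle hooks (it is never restarted), and no resources
	// (consumption comes out of the pod's existing sizing, per the thread above).
	c.LivenessProbe = nil
	c.ReadinessProbe = nil
	c.Lifecycle = nil
	c.Resources = v1.ResourceRequirements{}
	return r.startInPodSandbox(podUID, c)
}
```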


The kubelet will treat `DEBUG` containers differently in the following ways:

1. `SyncPod()` ignores containers of type `DEBUG`, since there is no
Contributor

What happens if the pod restarts (e.g., infra container died and gets recreated)? What would happen to the debug container?

Contributor Author

It will be killed and not restarted. When an infra container dies the kuberuntime manager explicitly kills every container in that pod as reported by the runtime.

Contributor

But the proposal said SyncPod would just ignore the debug containers. It can't really ignore AND compute changes for the debug containers. I'm a little bit concerned about the increased complexity of code that may at times ignore the debug container. I know this is an implementation detail, but please make sure the logic is clear and readable in the code.

Contributor Author

You're right, there's an edge case in sandbox restarts where a debug container should be killed prior to pod deletion, but it's not a complex one. I've updated the proposal to reflect this. If you'd like to see the added SyncPod complexity, it's here:

https://github.com/verb/kubernetes/pull/3/files#diff-8da51ab012ecbdf2b013ecbcf1c37033
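
The sandbox-restart edge case discussed above might look roughly like this inside the sync loop (an illustrative sketch with stand-in types, not the actual kuberuntime code):

```go
package kubeletsketch

// containerInfo stands in for the runtime's view of a container.
type containerInfo struct {
	ID, Name string
	Labels   map[string]string
}

// podActions stands in for the kuberuntime manager's computed changes.
type podActions struct {
	ContainersToKill map[string]string // container ID -> kill reason
}

// handleDebugContainers sketches the special-casing: debug containers are
// ignored by normal sync (they have no spec, so no restart decision exists),
// but they are killed when the sandbox is being recreated.
func handleDebugContainers(containers []containerInfo, sandboxChanged bool, changes *podActions) {
	for _, c := range containers {
		if c.Labels["io.kubernetes.container.type"] != "DEBUG" {
			continue // regular containers follow the normal SyncPod logic
		}
		if sandboxChanged {
			changes.ContainersToKill[c.ID] = "pod sandbox changed, killing debug container " + c.Name
		}
	}
}
```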


##### Step 3: kubelet API changes

The kubelet will gain a new streaming endpoint `/debug` similar to `/exec`,
Contributor

kubelet already has a /debug endpoint. Need another name.

Contributor Author

whoa, somehow missed that. Ok, will rename to.. umm... "podDebug", which seems similar to existing endpoint "containerLogs"


The kubelet will gain a new streaming endpoint `/debug` similar to `/exec`,
`/attach`, etc. The debug endpoint will create a new debug container and
automatically attach to it.
Contributor

Is it allowed to create a non-interactive debug container?

Also, creating and starting a container may take a long time (e.g., pulling down the image). Wouldn't this affect the user experience?

Contributor Author

I'd like to allow for the non-interactive use case, but not necessarily the detached use case.

If container creation took a very long time it might affect the user experience, but I don't see any way around it. Definitely open to suggestions.
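
Following the pattern of the kubelet's existing `/exec` and `/attach` routes, the new endpoint (renamed `podDebug` per the discussion above) might be registered like this (a sketch under those assumptions; the path and handler name are not the merged API):

```go
package serversketch

import restful "github.com/emicklei/go-restful"

// registerPodDebug sketches the new streaming route. The handler would create
// the debug container described by a v1.Container in the request body, then
// upgrade the connection and stream stdio to it, just as /attach does for
// existing containers.
func registerPodDebug(ws *restful.WebService, handler restful.RouteFunction) {
	ws.Route(ws.POST("/podDebug/{podNamespace}/{podID}/{containerName}").
		To(handler).
		Operation("getPodDebug"))
}
```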

@verb (Contributor Author) commented May 15, 2017

/unassign @smarterclayton
/unassign @thockin
/assign @yujuhong
/assign @dashpole

* Separate "Copy Mode" to another PR for another time.
* Describe feature in similarity to `kubectl exec`
* Simplify feature description, move non-essential supporting sections
  to Appendices.
@verb (Contributor Author) commented May 15, 2017

Rewrite to implement @yujuhong's suggestions re: format. Simplify and move non-essential info to Appendices. Also addresses all comments from previous commit (afaik).

@dashpole (Contributor)

@verb regenerating the podSandboxConfig when creating a debug container should not be problematic, since we regenerate it when restarting containers as well.

@dashpole (Contributor) left a comment

Implementation looks good. Just a couple of general questions; apologies if they have already been discussed.


This creates an interactive shell in a pod which can examine and signal all
processes in the pod. It has access to the same network and IPC as processes in
the pod. It can access the filesystem of other processes by `/proc/$PID/root`,
Contributor

why organize the filesystem by process instead of container? Does this mean that if I have 2 processes within a container, I'll have two different ways of seeing the same root filesystem? Would it make more sense to place the filesystems at /container_name/?

Contributor Author

It's just because we get that for free with Linux. Originally I had intended to cross-mount the filesystems as you suggest, but that requires a change to the CRI which was contentious, so we agreed to start with this.

You're correct that there would be multiple ways of viewing the same filesystems, though technically they could be different mount namespaces.

Contributor

Ok, that makes sense.

Contributor

How does this work if there are multiple debug containers in the same pod? Do the debug containers see each other?

Contributor Author

Yes, as long as there's a shared PID namespace, which is enabled by default for 1.7. This feature relies on a shared PID namespace.
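
As a concrete illustration of the `/proc/$PID/root` access described in the quoted excerpt (the PID and path here are placeholders):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// PID of a process in the target container, e.g. found by running ps
	// inside the debug container; 12345 is a placeholder.
	pid := 12345
	// With a shared PID namespace, this path resolves inside the *target*
	// container's root filesystem, not the debug container's own.
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/root/etc/os-release", pid))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(data))
}
```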

One way in which `kubectl debug` differs from `kubectl exec` is the ability to
reattach to Debug Container if you've been disconnected. This is accomplished by
running `kubectl debug -r` with the name of an existing, running Debug
Container. The `-r` option to `kubectl debug` is translated to a `reattach=True`
Contributor

why not implicitly reconnect, instead of making the user specify it? The only case I can think of is if the user wasn't aware that the debug container was running, and somehow mucks with a running debug container accidentally.

Contributor Author

I like the idea of implicit reconnection, but I'm not sure how to handle the container parameters in that case. For example, say one user runs kubectl debug -it pod -m debian -c debug -- bash and, while that's running, another user runs kubectl debug -it pod -m alpine -c debug -- ls /.

The kubelet will receive a v1.Container with an image and args that differ from the existing container. The easiest thing to do would be to discard the new v1.Container and attach to the running container, but that would be confusing to the second user.

We could compare the provided v1.Container to the existing ContainerStatus and only reattach if the runtime arguments are identical, but that might still be confusing for user 2 if she didn't notice user 1, so I just opted for requiring explicit behavior.

I'm a big fan of implicit behavior and reasonable defaults, though, so I'd be happy to change this if we can work out the details.

Contributor

That makes sense.
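
The identical-arguments check floated above could be as simple as this (an illustrative sketch of the idea, not merged behavior):

```go
package reattachsketch

import v1 "k8s.io/api/core/v1"

// canReattach sketches the comparison discussed above: implicitly reattach
// only when the requested debug container matches the one already running.
// A real check would also compare Command and Args.
func canReattach(req *v1.Container, existing v1.ContainerStatus) bool {
	return existing.State.Running != nil &&
		req.Name == existing.Name &&
		req.Image == existing.Image
}
```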

is an error to attempt to replace a Debug Container that is still running.

One way in which `kubectl debug` differs from `kubectl exec` is the ability to
reattach to Debug Container if you've been disconnected. This is accomplished by
Contributor

Does the debug container exit when you disconnect from it? That would seem like generally desirable behavior to me, as it would guarantee that only debug containers that are actively being used stick around.

Contributor Author

If you're disconnected involuntarily (network interruption, pkill -9 kubectl, etc) then the container continues running so that you can reattach where you left off. The only way to disconnect intentionally is to terminate the process (exit a shell, ^C a process, etc) which would cause the container to exit.

We could improve upon this by adding an escape sequence to kubectl, but probably not in time for an alpha in 1.7.

Contributor

just curious: Is there a use case around debugging with state? Basically, is there ever a case where a user would care that they start a new container rather than return to an older debug container?

Contributor Author

The only person that's mentioned it so far is @smarterclayton here: #35584 (comment)

I like the idea of being able to reattach, but it's not a hard requirement for our own use case. I kept it for the alpha because it was so easy to implement.

Contributor

Right. I guess my question is more around whether or not to expose that functionality to users if they don't need it. With regards to @smarterclayton's comment, if a user needs to be aware of the different debug containers that are active in order to use the tool effectively, then we need to expose that to them. The alternative is to make the debug command act the same way regardless of what other debug containers exist, similar to how exec works (I think). Essentially the behavior would be to always start a new debug container, and always clean up the debug container after the session exits.

It isn't necessary to make the decision before the alpha implementation though. But it may be difficult to remove reattach functionality after we add it, even if it is just an alpha.

Contributor Author

No matter what we choose for reattach, active debug containers will be listed with names via kubectl describe pod

My gut feeling is that reattach will be something people want, but I don't want to paint us into a corner. What if we just do implicit reattach for the alpha and live with the surprising behavior? That way we haven't added any parameters to the API that we might have to support.

Contributor

I think that would work well.

Contributor Author

I updated the alpha feature scope section of the doc to reflect this.

@verb (Contributor Author) commented May 17, 2017

@dashpole Awesome, thanks for checking on podSandboxConfig.

@dashpole and I agreed offline that modifying GC for the alpha feature isn't necessary since it would likely change before graduating alpha, anyway. I've updated the doc to reflect this.

@dashpole (Contributor)

This proposal LGTM

@verb (Contributor Author) commented May 22, 2017

Moved PR to kubernetes/community#649 to be merged.

@verb verb closed this May 22, 2017
k8s-github-robot pushed a commit to kubernetes/community that referenced this pull request Sep 27, 2017
Automatic merge from submit-queue.

Propose a feature to troubleshoot running pods

This feature allows troubleshooting of running pods by running a new "Debug Container" in the pod namespaces.

This proposal was originally opened and reviewed in kubernetes/kubernetes#35584.

This proposal needs LGTM by the following SIGs:
- [ ] SIG Node
- [ ] SIG CLI
- [ ] SIG Auth
- [x] API Reviewer

Work in Progress:
- [x] Prototype `kubectl attach` for debug containers
- [x] Talk to sig-api-machinery about `/debug` subresource semantics
k8s-github-robot pushed a commit that referenced this pull request Jan 24, 2018
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Add a container type to the runtime manager's container status

**What this PR does / why we need it**:
This is Step 1 of the "Debug Containers" feature proposed in #35584 and is hidden behind a feature gate. Debug containers exist as container status with no associated spec, so this new runtime label allows the kubelet to treat containers differently without relying on spec.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: cc #27140

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```

**Integrating feedback**:
- [x] Remove Type field in favor of a help method

**Dependencies:**
- [x] #46261 Feature gate for Debug Containers
justaugustus pushed a commit to justaugustus/enhancements that referenced this pull request Sep 3, 2018 (same commit message as above)
MadhavJivrajani pushed commits to kubernetes/design-proposals-archive and MadhavJivrajani/design-proposals that referenced this pull request Nov 30 and Dec 1, 2021 (same commit message as above)
Labels
cncf-cla: yes · do-not-merge (DEPRECATED) · kind/design · needs-ok-to-test · release-note-none · size/XL