
Propose feature for troubleshooting running pods #35584

Closed
wants to merge 9 commits into from

Conversation

@verb
Contributor

commented Oct 26, 2016

What this PR does / why we need it: Initially described in #27140, this PR proposes a feature to better support minimal container images (i.e. images built from scratch) by adding a new kubectl debug to run a container in a pod namespace or create an ad-hoc copy of a pod. This is useful for developers writing native Kubernetes applications that prefer not to ship an entire OS image.
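For illustration, invoking the proposed command might look like the following. The flags shown are hypothetical sketches drawn from examples later in this discussion, not a final interface:

```sh
# Start a debug container with a full userland image (debian) inside the
# namespaces of a running pod built from a scratch image, named "debug":
kubectl debug -it mypod -m debian -c debug -- bash
```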

Special notes for your reviewer: SIG Node asked me to send this PR to facilitate further discussion.

Release note: None, this PR contains only a proposal.



@k8s-ci-robot

Contributor

commented Oct 26, 2016

Can a kubernetes member verify that this patch is reasonable to test? If so, please reply with "@k8s-bot ok to test" on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands will still work. Regular contributors should join the org to skip this step.

If you have questions or suggestions related to this bot's behavior, please file an issue against the kubernetes/test-infra repository.

@bgrant0607

Member

commented Oct 26, 2016

cc @kubernetes/sig-node

@bgrant0607 bgrant0607 assigned dchen1107 and unassigned bgrant0607 Oct 26, 2016

@vishh

Member

commented Oct 31, 2016

@k8s-bot ok to test

* Is this compatible with Admission control plugins that restrict what images
can run on a cluster?
* Would it be possible to Image Exec into pods that are in a terminal state to
determine why they've failed?

@liggitt

liggitt Nov 1, 2016

Member

what security context is used by the new container? how does that interact with PodSecurityPolicy?

@liggitt

liggitt Nov 1, 2016

Member

cc @kubernetes/sig-auth

while the latter would provide binaries for kubectl exec.

Adding a container to a running pod would be a large change to Kubernetes, and
there are several edge cases to consider such as resource limits and monitoring.

@liggitt

liggitt Nov 1, 2016

Member

and admission that locks security context at pod creation time

@dchen1107

Member

commented Nov 1, 2016

Good idea, but we need to iron out many details. Here are several off the top of my head:

  1. How does this container interact with the pod lifecycle?
  2. How is this debug container presented to the end user? Shown as another container, or hidden like today's podInfraContainer?
  3. How is the resource management for this extra consumption handled? Is it charged to the existing pod, or is the pod cgroup updated with extra overhead?
@derekwaynecarr

Member

commented Nov 1, 2016

Per my comments on sig-node meeting:

  1. The proposal as defined allows users to launch new containers in a pod that are not discoverable via consumers of the API. This is problematic for us as we have built security solutions that audit the cluster based on the set of containers/images that are seen running via the API. We should not allow containers to be run that are associated with a pod that are not discoverable when getting the status of the pod.
  2. I would prefer we explore something similar to init containers by allowing some type of modification to the pod spec for these ephemeral containers that are not part of the core pod lifecycle.
  3. I have some concerns about how SELinux and other tools would work @pmorie
@smarterclayton

Contributor

commented Nov 1, 2016

I agree that this requires discoverability by the API. The status of this container is important.

A few things:

  1. Can we start this container in a terminated (success, failed, or error) pod? There are many use cases for debugging that, but I also don't want to have to keep the pod's volumes around forever
  2. Should this container be allowed to change the pod volumes? I don't think so - so it may limit some debugging scenarios.
    1. EXCEPTION: It should be possible to bind mount the other container's filesystems in without changing the pod volumes
  3. If we had this as transient-container+attach, there's less of a need for exec (because you could start bash and attach) in many of the cases where people are abusing it.
  4. Based on statements we have made before, I think it is inappropriate for the resource limits of the pod to change as a result of this, but that makes it very hard to run one of these inside of a pod that is at its maximum resources (main process has used all available pod memory, for instance). Does that require pre-reservation?
@verb

Contributor Author

commented Nov 1, 2016

Summarizing feedback from SIG Node meetup:

@derekwaynecarr added a firm requirement for ability to introspect container images

The following questions were addressed:

  1. Will this provide a view into a container's mount namespace or simply container filesystems? Only container filesystems since there's no way to support mounting the actual mount namespace as a subdirectory (if this becomes possible we should support it).
  2. How should resources be accounted? There was weak consensus that it's reasonable to expect users to include resources consumed by a kubectl exec in the initial pod sizing.
  3. Dangling pod problem with kubectl run --rm? This would be different from kubectl run in that it would be implemented server-side. That is, the kubelet must ensure there are no dangling containers.

The following open questions were added:

  1. How does this affect the pod lifecycle?
  2. What are the edge cases of current kubectl exec. Can you re-attach to a running exec? How would this be implemented for an image exec?
  3. Userns + host bindmounts, depending on how userns is implemented, could be messy (hostmount implies no userns, but userns relabeling might be enabled for the whole pod)
  4. How does this interact with SELinux?

An alternative proposed during the meeting was to allow adding a container to the podspec. This would be a larger change, but perhaps also more worthwhile. It could be simplified by introducing a new type of ephemeral container similar to the implementation of init-containers. We ran out of time before discussing whether this solution could meet the other requirements such as container filesystem access.

It's not clear where to go next, so I'm going to work on addressing some of the questions raised by the feedback and try to estimate how much work would be required to support modifying the podspec of a running pod.

@smarterclayton

Contributor

commented Nov 1, 2016

We've discussed the possibility of new container types before and have been
cautious about them. I think drawing up the use cases and demonstrating
that they are more difficult to reason about if implemented differently is
key. We'd probably want to find supporting use cases, like the folks at
Huawei and Red Hat who wanted to be able to commit containers in pods (so
they could have a build system for images that uses kube to manage the
containers). We'd need to rationalize why exec (which is heavily used
today for pod support tasks like backup, debugging, builds, etc) is
different than these containers.


@timstclair

left a comment

There was weak consensus that it's reasonable to expect users to include resources consumed by a kubectl exec in the initial pod sizing.

This might be acceptable for an initial implementation, but I think it's far from ideal. If you're running a very large cluster, e.g. 50,000 pods, even a small constant overhead will add up very quickly. It also requires a lot of foresight, which can't always be counted on when debugging something that failed.


We could instead modify CRI's `CreateContainer` to include a list of containers
IDs to "link" to the newly created container, but then we'd have to repeat the
logic in each of the container backends.

@timstclair

timstclair Nov 2, 2016

I think this might be unavoidable. If the container runtime uses something other than linux namespaces for isolation (e.g. VM containers), the Kubelet wouldn't be able to manage it.

@verb

verb Nov 3, 2016

Author Contributor

That's an interesting point. How would pods look under VM containers? Would they still have a shared (e.g. network) namespace somehow? Or would they be pods in name only?

@feiskyer

feiskyer Nov 4, 2016

Member

@verb e.g. for hyper, all containers belonging to the same Pod run in the same VM and share the same network namespace, while different Pods run in different VMs.

* Is this compatible with Admission control plugins that restrict what images
can run on a cluster?
* Would it be possible to Image Exec into pods that are in a terminal state to
determine why they've failed?

@timstclair

timstclair Nov 2, 2016

From the sig-node meeting:

  • What happens if the connection is dropped?

@verb

verb Nov 3, 2016

Author Contributor

I looked into this a little today. The current behavior in exec is that the process continues to run but there doesn't seem to be any way to reattach to it. It doesn't get a signal and stdin is left open. My prototype for image exec behaves the same.

A debug container would behave the same, except perhaps that it could be attached with kubectl attach, with the kubelet removing the container when the pod is destroyed or when the process exits (potentially relying on the runtime to support automatically removing the container a la "docker run --rm")

there are several edge cases to consider such as resource limits and monitoring.
Adding a volume to a running container would be a smaller change, but it would
still be larger than the proposed change and we'd have to figure out how to
build and distribute a volume of binaries.

@verb

Contributor Author

commented Nov 3, 2016

@smarterclayton Whom could I contact to ask about Huawei's and Red Hat's use cases?

@verb

Contributor Author

commented Nov 4, 2016

Status update: I was able to prototype adding a container to the spec of a running pod with surprisingly little effort. I need to think through the consequences and edge cases, but I'm optimistic this might be a cleaner solution.

Some thoughts regarding the problem I'm trying to solve:

  • Being able to attach/detach from the debugging shell is very nice
  • The kubectl patch incantation required to do this isn't particularly user-friendly. I'd really want there to be an easier command.
  • This doesn't address the container filesystems, but that could potentially be added as a general option for a Container
  • For troubleshooting pods all I need is support for adding a container (i.e. it doesn't have to be ephemeral) but other use cases may require a way for the container to exit short of deleting the pod.
@liggitt

Member

commented Nov 4, 2016

The security and scheduling implications of making the pod spec mutable are concerning. Much has been built on the write-once nature of the security context, volume, port, and resource aspects of the pod spec.

@smarterclayton

Contributor

commented Nov 4, 2016

Not being able to clean up terminated pods immediately will have lots of
implications for cluster stability (for instance, we'll have to detach
shared volumes almost immediately for use cases like jobs that reuse
those). It's possible that the attached container could hold the pod from
going to terminated state, but that has backwards compatibility concerns
for clients.


@smarterclayton

Contributor

commented Nov 4, 2016

Having to manage exec sessions via the API has been something I've been
dreading (Docker ended up having to do it). I think it's likely we'll need
to have the concept of "running sessions" whether it's exec or these new
type of containers, so it's worth asking how much complexity we're going to
add there (maybe we should consider these containers more like exec
sessions, and just have the ability to define an exec environment that is
either in an existing container or creates a new temporary container). But
that still doesn't remove the need to know that they are there.


runtimes. We could implement the same create/start/attach/remove logic in rkt
and docker_manager, or we could save effort by linking this feature to the CRI
and return "Not Implemented" for the legacy runtimes.

@resouer

resouer Nov 4, 2016

Member

@verb Could you please add a note about hypervisor-based runtimes like https://github.com/hyperhq/hyperd here or elsewhere? Generally we think this will not break hypervisor-based runtimes (though it may require refactoring on their side), but we should definitely keep them in mind.

@verb

verb Nov 9, 2016

Author Contributor

Sure, I will definitely add a note about hypervisor based runtimes since it's come up multiple times. It's my intention, though, that any runtime implementing the CRI will be supported without modification (though we might have to add one or two small things to the CRI to accomplish that). This section is specifically about the 2 non-CRI runtimes that are implemented directly in the kubelet.

@verb

Contributor Author

commented Nov 11, 2016

@smarterclayton's idea of exec sessions would address a lot of the usability concerns I have with modifying the podspec as a troubleshooting solution. I'm going to look a bit more into pod/container lifecycles then update the doc.

@apelisse apelisse removed the do-not-merge label Nov 11, 2016

@k8s-github-robot

Contributor

commented Dec 1, 2016

Adding label:do-not-merge because PR changes docs prohibited to auto merge
See http://kubernetes.io/editdocs/ for information about editing docs

Refine Pod Troubleshooting proposal
* Add additional Use Cases
* Simplify Copy Debug vs Running Debug distinction
* Add requirements analysis

@k8s-github-robot k8s-github-robot added size/XL and removed size/L labels Apr 20, 2017

@verb verb referenced this pull request Apr 25, 2017
2 of 5 tasks complete
@verb

Contributor Author

commented May 5, 2017

/release-note-none

@k8s-bot node e2e test this

(doesn't matter since this will be closed here and merged into the community repo, but it takes the red X off my dashboard)

* Enable troubleshooting of minimal container images
* Allow troubleshooting of containers in `CrashLoopBackoff`
* Improve supportability of native Kubernetes applications
* Enable novel uses of short-lived containers in pods

@yujuhong

yujuhong May 9, 2017

Member

+1 this should not be a goal.


The status of a *debug container* is reported in a new `DebugContainerStatuses`
field of `PodStatus`, which is a read-only list of `ContainerStatus` that
contains an entry for every *debug container* that has ever run in this pod, and

@yujuhong

yujuhong May 9, 2017

Member

If many debug containers were created during the pod's lifetime, this would significantly increase the size of the pod object. Is this a concern?

@verb

verb May 9, 2017

Author Contributor

I don't think it's much of a concern, but we'll want to address this as part of garbage collection. Garbage collecting debug containers will cause this list to shrink. I think we'd want to make garbage collection configurable by cluster admins, but I'm not planning on this for the initial alpha release.

Alternatively, it might be reasonable to create some sort of MaxDebugContainersPerPod tuning parameter. What do you think?
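The suggested cap could be sketched roughly as follows. This is a hypothetical illustration, not a Kubernetes API: `MAX_DEBUG_CONTAINERS_PER_POD`, the dict-based status records, and the pruning policy (keep running containers, keep only the newest terminated ones) are all stand-ins for whatever the real tuning parameter and GC logic would be.

```python
# Hypothetical sketch of a "MaxDebugContainersPerPod" cap on terminated
# debug-container statuses kept in PodStatus. Names are illustrative.

MAX_DEBUG_CONTAINERS_PER_POD = 2  # hypothetical cluster-admin setting

def prune_debug_statuses(statuses):
    """Keep all running debug containers plus the most recently
    finished ones, up to the configured cap."""
    running = [s for s in statuses if s["running"]]
    finished = sorted(
        (s for s in statuses if not s["running"]),
        key=lambda s: s["finished_at"],
        reverse=True,  # newest first
    )
    return running + finished[:MAX_DEBUG_CONTAINERS_PER_POD]
```

Under this sketch, running containers are never dropped, so the cap only bounds how much terminated history survives GC.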

@yujuhong

yujuhong May 9, 2017

Member

I think keeping a maximum number of "terminated" debug containers is an acceptable first step.

By the way, what's the purpose of keeping an ever-growing list of debug container statuses? We will soon have auditing for posterity anyway.

@verb

verb May 12, 2017

Author Contributor

This was attempting to solve a requirement from @derekwaynecarr about the ability to inspect debug containers. I hadn't heard about the auditing effort, but maybe that will better suit his requirements, and then we wouldn't need any changes to GC.


*Running Debug* requires new functionality in the kubelet and a new `kubectl
debug` command that results in a new container being introduced in the running
pod. This *debug container* is not part of the pod spec, which remains

@yujuhong

yujuhong May 9, 2017

Member

Without modifying the pod spec, how does kubelet know to create the new debug container?

@verb

verb May 9, 2017

Author Contributor

It's created by a new API call. I'll clarify this in the doc.

@yujuhong

yujuhong May 9, 2017

Member

I think the doc evolves over time and the result is that many important points are not mentioned early enough. I'd suggest giving the overview of the mechanism before diving into specific details.

@verb

verb May 11, 2017

Author Contributor

You're right, I think a rewrite may be in order.


If the specified container name already exists, `kubectl debug` will kill the
existing container prior to starting a new one. It's legal to reuse container

@yujuhong

yujuhong May 9, 2017

Member

Why not error out and let users decide whether to kill the previous debug container or not? Replacing the container directly may cause confusion.

@verb

verb May 9, 2017

Author Contributor

I thought it would be a convenient way to kill or replace the container, but I agree the implicit container might be confusing. Perhaps it would be preferable to require the user to kill the process before attempting to replace the container.

I do feel strongly that it should be possible to replace containers that have already exited, though.

@yujuhong

yujuhong May 9, 2017

Member

I think asking the user before replacing it is more reasonable.

Another thing is that the container names cannot be reused as long as the container still exists, even if it already exited. You'll need to remove (GC) the container, which may cause the loss of one "DebugContainerStatus".

@verb

verb May 12, 2017

Author Contributor

I never needed to investigate this because it "just worked" when I did the naive implementation. The exited container is replaced and the DebugContainerStatus is lost.

Since this is closely related to GC, let's just remove it as a blocker for the alpha by disallowing container name reuse in all cases (for now).
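The alpha behavior settled on above (no name reuse while any container with that name exists, running or exited) could be sketched like this. The error type, status records, and helper name are hypothetical illustrations, not kubelet code:

```python
# Hypothetical sketch: reject a debug-container name that is already
# taken in the pod, regardless of whether that container is running or
# has exited, per the "no reuse for alpha" decision above.

class DebugContainerExistsError(Exception):
    pass

def check_debug_container_name(existing_statuses, name):
    """Raise if the name is taken; return None if it is free."""
    for status in existing_statuses:
        if status["name"] == name:
            state = "running" if status["running"] else "exited"
            raise DebugContainerExistsError(
                f"debug container {name!r} already exists ({state}); "
                "choose a different name"
            )
```

Erroring out in both states sidesteps the GC interaction described above: no DebugContainerStatus entry is ever silently replaced.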

lack of configuration and restart policy. The kubelet's additional
responsibilities include:

1. Start or restart a debug container in a pod when requested.

@yujuhong

yujuhong May 9, 2017

Member

Without modifying the pod spec, you'd be relying on kubelet to remember this request. If kubelet restarts before creating the container (or if the container crashes for other reasons), it will never be (re-)created without kubelet persisting the request somewhere. This may be okay but it should be stated clearly in this proposal.

##### Step 1: Container Type

Debug containers exist within the kubelet as a `ContainerStatus` without a
container spec. As with other `kubecontainers`, their state is persisted in the

@yujuhong

yujuhong May 9, 2017

Member

s/state/type ?

@verb

verb May 9, 2017

Author Contributor

I meant all state is currently persisted via the runtime.

@yujuhong

yujuhong May 9, 2017

Member

I wouldn't say we persist the state (which changes). We mostly persist the spec.

@verb

verb May 12, 2017

Author Contributor

By state I just meant "the list of containers and their metadata". I'll change this to "metadata" to make it more clear.

This step adds methods for creating debug containers, but doesn't yet modify the
kubelet API. The kubelet will gain a `RunDebugContainer()` method which accepts
a `v1.Container` and creates a debug container. Fields that don't make sense in
the context of a debug container (e.g. probes, lifecycle, resources) will be

@yujuhong

yujuhong May 9, 2017

Member

Please define the valid fields for a debug container.

@verb

verb May 15, 2017

Author Contributor

Done
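The field validation requested above could be sketched as a simple allowlist check. The allowed set and the dict-based container representation are hypothetical illustrations; the proposal's actual field list may differ:

```python
# Hypothetical sketch of rejecting v1.Container fields that don't make
# sense for a debug container (probes, lifecycle, resources, ...).
# The allowlist below is illustrative, not the proposal's final set.

ALLOWED_DEBUG_FIELDS = {
    "name", "image", "command", "args", "workingDir",
    "env", "stdin", "tty", "imagePullPolicy",
}

def validate_debug_container(container):
    """Return the names of set fields that are not allowed, sorted."""
    return sorted(
        field for field, value in container.items()
        if value is not None and field not in ALLOWED_DEBUG_FIELDS
    )
```

A request is valid when the returned list is empty; otherwise the kubelet would reject it and report the offending fields.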


The kubelet will treat `DEBUG` containers differently in the following ways:

1. `SyncPod()` ignores containers of type `DEBUG`, since there is no

@yujuhong

yujuhong May 9, 2017

Member

What happens if the pod restarts (e.g., infra container died and gets recreated)? What would happen to the debug container?

@verb

verb May 9, 2017

Author Contributor

It will be killed and not restarted. When an infra container dies the kuberuntime manager explicitly kills every container in that pod as reported by the runtime.

@yujuhong

yujuhong May 9, 2017

Member

But the proposal said SyncPod would just ignore the debug containers... It can't really ignore AND compute changes for the debug containers. I'm a little concerned about the increased complexity of code that may at times ignore the debug container. I know this is an implementation detail, but please make sure the logic is clear and readable in the code.

@verb

verb May 15, 2017

Author Contributor

You're right, there's an edge case in sandbox restarts where a debug container should be killed prior to pod deletion, but it's not a complex one. I've updated the proposal to reflect this. If you'd like to see the added SyncPod complexity, it's here:

https://github.com/verb/kubernetes/pull/3/files#diff-8da51ab012ecbdf2b013ecbcf1c37033


##### Step 3: kubelet API changes

The kubelet will gain a new streaming endpoint `/debug` similar to `/exec`,

@yujuhong

yujuhong May 9, 2017

Member

kubelet already has a /debug endpoint. Need another name.

@verb

verb May 9, 2017

Author Contributor

whoa, somehow missed that. Ok, will rename to.. umm... "podDebug", which seems similar to existing endpoint "containerLogs"


The kubelet will gain a new streaming endpoint `/debug` similar to `/exec`,
`/attach`, etc. The debug endpoint will create a new debug container and
automatically attach to it.

@yujuhong

yujuhong May 9, 2017

Member

Is it allowed to create a non-interactive debug container?

Also, creating and starting a container may take a long time (e.g., pulling down the image). Wouldn't this affect the user experience?

@verb

verb May 9, 2017

Author Contributor

I'd like to allow for the non-interactive use case, but not necessarily the detached use case.

If it took a very long time container creation might affect user experience, but I don't see any way around it. definitely open to suggestions.

@verb

Contributor Author

commented May 15, 2017

/unassign @smarterclayton
/unassign @thockin
/assign @yujuhong
/assign @dashpole

Rewrite Pod Troubleshooting proposal
* Separate "Copy Mode" to another PR for another time.
* Describe feature in similarity to `kubectl exec`
* Simplify feature description, move non-essential supporting sections
  to Appendices.
@verb

Contributor Author

commented May 15, 2017

Rewrite to implement @yujuhong's suggestions re: format. Simplify and move non-essential info to Appendices. Also addresses all comments from previous commit (afaik).

@dashpole

Contributor

commented May 16, 2017

@verb regenerating the podSandboxConfig when creating a debug container should not be problematic, since we regenerate it when restarting containers as well.

@dashpole
Contributor

left a comment

Implementation looks good. Just a couple of general questions; apologies if they have already been discussed.


This creates an interactive shell in a pod which can examine and signal all
processes in the pod. It has access to the same network and IPC as processes in
the pod. It can access the filesystem of other processes by `/proc/$PID/root`,

@dashpole

dashpole May 17, 2017

Contributor

why organize the filesystem by process instead of container? Does this mean that if I have 2 processes within a container, I'll have two different ways of seeing the same root filesystem? Would it make more sense to place the filesystems at /container_name/?

@verb

verb May 17, 2017

Author Contributor

It's just because we get that for free with Linux. Originally I had intended to cross-mount the filesystems as you suggest, but that requires a change to the CRI which was contentious, so we agreed to start with this.

You're correct that there would be multiple ways of viewing the same filesystems, though technically they could be different mount namespaces.

@dashpole

dashpole May 17, 2017

Contributor

Ok, that makes sense.
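The "for free with Linux" mechanism above amounts to path construction over procfs: with a shared PID namespace, a file in another container is reachable under that container's process via `/proc/$PID/root`. The helper below only builds such paths (it is an illustrative sketch; it doesn't need to run inside a pod):

```python
# Hypothetical sketch: build the procfs path through which a debug
# container can reach another process's root filesystem, given a
# shared PID namespace. Pure string construction, no CRI changes.

def proc_root_path(pid, relpath=""):
    """Path to a file inside another process's root filesystem."""
    base = f"/proc/{int(pid)}/root"
    return f"{base}/{relpath.lstrip('/')}" if relpath else base
```

So from a debug shell, inspecting PID 1's /etc would mean reading under /proc/1/root/etc, which is exactly the "filesystem organized by process" behavior questioned above.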

@dashpole

dashpole May 17, 2017

Contributor

How does this work if there are multiple debug containers in the same pod? Do the debug containers see each other?

@verb

verb May 17, 2017

Author Contributor

Yes, as long as there's a shared PID namespace, which is enabled by default for 1.7. This feature relies on a shared PID namespace.

One way in which `kubectl debug` differs from `kubectl exec` is the ability to
reattach to Debug Container if you've been disconnected. This is accomplished by
running `kubectl debug -r` with the name of an existing, running Debug
Container. The `-r` option to `kubectl debug` is translated to a `reattach=True`

@dashpole

dashpole May 17, 2017

Contributor

why not implicitly reconnect, instead of making the user specify it? The only case I can think of is if the user wasn't aware that the debug container was running, and somehow mucks with a running debug container accidentally.

@verb

verb May 17, 2017

Author Contributor

I like the idea of implicit reconnection, but I'm not sure how to handle the container parameters in that case. For example, say one user runs `kubectl debug -it pod -m debian -c debug -- bash` and, while that's running, another user runs `kubectl debug -it pod -m alpine -c debug -- ls /`.

The kubelet will receive a `v1.Container` with an image and args that differ from the existing container. The easiest thing to do would be to discard the new `v1.Container` and attach to the running container, but that would be confusing to the second user.

We could compare the provided `v1.Container` to the existing `ContainerStatus` and only reattach if the runtime arguments are identical, but that might still be confusing for user 2 if she didn't notice user 1, so I just opted for requiring explicit behavior.

I'm a big fan of implicit behavior and reasonable defaults, though, so I'd be happy to change this if we can work out the details.

@dashpole (Contributor), May 17, 2017:
That makes sense.

> is an error to attempt to replace a Debug Container that is still running.
>
> One way in which `kubectl debug` differs from `kubectl exec` is the ability to
> reattach to Debug Container if you've been disconnected. This is accomplished by

@dashpole (Contributor), May 17, 2017:
Does the debug container exit when you disconnect from it? That would seem like generally desirable behavior to me, as it would guarantee that only debug containers that are actively being used stick around.

@verb (Author, Contributor), May 17, 2017:
If you're disconnected involuntarily (network interruption, pkill -9 kubectl, etc) then the container continues running so that you can reattach where you left off. The only way to disconnect intentionally is to terminate the process (exit a shell, ^C a process, etc) which would cause the container to exit.

We could improve upon this by adding an escape sequence to kubectl, but probably not in time for an alpha in 1.7.

@dashpole (Contributor), May 17, 2017:
Just curious: is there a use case around debugging with state? Basically, is there ever a case where a user would care whether they start a new container or return to an older debug container?

@verb (Author, Contributor), May 17, 2017:
The only person that's mentioned it so far is @smarterclayton here: #35584 (comment)

I like the idea of being able to reattach, but it's not a hard requirement for our own use case. I kept it for the alpha because it was so easy to implement.

@dashpole (Contributor), May 17, 2017:
Right. I guess my question is more around whether or not to expose that functionality to users if they don't need it. With regards to @smarterclayton's comment, if a user needs to be aware of the active debug containers in order to use the tool effectively, then we need to expose that to them. The alternative is to make the debug command act the same way regardless of what other debug containers exist, similar to how exec works (I think). Essentially the behavior would be to always start a new debug container and always clean up the debug container after the session exits.

It isn't necessary to make the decision before the alpha implementation, though. But it may be difficult to remove reattach functionality after we add it, even if it is just an alpha.

@verb (Author, Contributor), May 17, 2017:
No matter what we choose for reattach, active debug containers will be listed with names via `kubectl describe pod`.

My gut feeling is that reattach will be something people want, but I don't want to paint us into a corner. What if we just do implicit reattach for the alpha and live with the surprising behavior? That way we haven't added any parameters to the API that we might have to support.

@dashpole (Contributor), May 17, 2017:
I think that would work well.

@verb (Author, Contributor), May 17, 2017:
I updated the alpha feature scope section of the doc to reflect this.

@verb (Author, Contributor) commented May 17, 2017:

@dashpole Awesome, thanks for checking on podSandboxConfig.

@dashpole and I agreed offline that modifying GC for the alpha feature isn't necessary since it would likely change before graduating alpha, anyway. I've updated the doc to reflect this.

@dashpole (Contributor) commented May 18, 2017:

This proposal LGTM

@verb verb referenced this pull request May 22, 2017
@verb (Author, Contributor) commented May 22, 2017:

Moved PR to kubernetes/community#649 to be merged.

@verb verb closed this May 22, 2017

k8s-github-robot pushed a commit to kubernetes/community that referenced this pull request Sep 27, 2017

Kubernetes Submit Queue
Merge pull request #649 from verb/pod-troubleshooting
Automatic merge from submit-queue.

Propose a feature to troubleshoot running pods

This feature allows troubleshooting of running pods by running a new "Debug Container" in the pod namespaces.

This proposal was originally opened and reviewed in kubernetes/kubernetes#35584.

This proposal needs LGTM by the following SIGs:
- [ ] SIG Node
- [ ] SIG CLI
- [ ] SIG Auth
- [x] API Reviewer

Work in Progress:
- [x] Prototype `kubectl attach` for debug containers
- [x] Talk to sig-api-machinery about `/debug` subresource semantics

k8s-github-robot pushed a commit that referenced this pull request Jan 24, 2018

Kubernetes Submit Queue
Merge pull request #45442 from verb/pod-tshoot-1
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add a container type to the runtime manager's container status

**What this PR does / why we need it**:
This is Step 1 of the "Debug Containers" feature proposed in #35584 and is hidden behind a feature gate. Debug containers exist as container status with no associated spec, so this new runtime label allows the kubelet to treat containers differently without relying on spec.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: cc #27140

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```

**Integrating feedback**:
- [x] Remove Type field in favor of a help method

**Dependencies:**
- [x] #46261 Feature gate for Debug Containers

justaugustus pushed a commit to justaugustus/enhancements that referenced this pull request Sep 3, 2018

Kubernetes Submit Queue
Merge pull request kubernetes#649 from verb/pod-troubleshooting
Automatic merge from submit-queue.

Propose a feature to troubleshoot running pods

This feature allows troubleshooting of running pods by running a new "Debug Container" in the pod namespaces.

This proposal was originally opened and reviewed in kubernetes/kubernetes#35584.

This proposal needs LGTM by the following SIGs:
- [ ] SIG Node
- [ ] SIG CLI
- [ ] SIG Auth
- [x] API Reviewer

Work in Progress:
- [x] Prototype `kubectl attach` for debug containers
- [x] Talk to sig-api-machinery about `/debug` subresource semantics