Unable to get deployment logs for multi container pods #10186

Closed
jhellar opened this issue Aug 3, 2016 · 13 comments · Fixed by #10377
Labels: component/cli, kind/bug, lifecycle/rotten, priority/P2

jhellar commented Aug 3, 2016

When a pod has multiple containers, oc logs fails for the deployment config that owns that pod, and it is not possible to select a container with the -c parameter.

Version

oc v3.2.0.20
kubernetes v1.2.0-36-g4a3f9c5

Steps To Reproduce
  1. oc logs dc/name_of_pod_with_multiple_containers -c name_of_container
Current Result

Error from server: a container name must be specified for pod ..., choose one of: [... ...]

Expected Result

logs...
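As a possible workaround until dc/ log resolution handles this, targeting the pod directly does accept the -c flag. A sketch, reusing the placeholder names from the steps above (look up the real pod name with oc get pods first):

$ oc get pods
$ oc logs name_of_pod_with_multiple_containers -c name_of_container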

rhcarvalho commented Aug 3, 2016

Thanks @jhellar.

Looks like this is fixed in a more recent version; it may still be broken in OSE 3.2.

In any case, I have steps to reproduce the issue:

$ oc version
oc v1.3.0-alpha.2-267-g2fe486f-dirty
kubernetes v1.3.0-alpha.3-599-g2746284
$ cat dc-multi-container.json 
{
  "kind": "DeploymentConfig",
  "apiVersion": "v1",
  "metadata": {
    "name": "test-dc"
  },
  "spec": {
    "strategy": {
      "type": "Rolling"
    },
    "triggers": [
      {
        "type": "ConfigChange"
      }
    ],
    "replicas": 1,
    "selector": {
      "name": "test-dc"
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "test-dc"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "busybox",
            "image": "busybox:latest",
            "command": ["tail", "-f", "/dev/null"]
          },
          {
            "name": "alpine",
            "image": "alpine:latest",
            "command": ["tail", "-f", "/dev/null"]
          }
        ]
      }
    }
  }
}
$ oc create -f dc-multi-container.json 
deploymentconfig "test-dc" created
$ oc logs dc/test-dc
Error from server: a container name must be specified for pod test-dc-1-t63xy, choose one of: [busybox alpine]

@jhellar could you please confirm what output you get when running the steps above?


Update 2016-08-11: updated the template to include a command so that the containers run forever, which makes the issue reproducible.
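(For anyone retracing this, a quick sanity check that the reproduction precondition holds, i.e. the pod is Running with both containers up, might look like the following; the name=test-dc selector comes from the template above. Once the pod shows Running, re-running oc logs dc/test-dc should reproduce the error.)

$ oc get pods -l name=test-dc
$ oc logs dc/test-dc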

rhcarvalho added the kind/bug and component/cli labels Aug 3, 2016
jhellar commented Aug 3, 2016

@rhcarvalho I can confirm that it works.

oc logs dc/test-dc
I0802 14:23:28.411933 1 deployer.go:200] Deploying core/test-dc-1 for the first time (replicas: 1)
I0802 14:23:28.430979 1 recreate.go:126] Scaling core/test-dc-1 to 1 before performing acceptance check
I0802 14:23:30.486809 1 recreate.go:131] Performing acceptance check of core/test-dc-1
I0802 14:23:30.486882 1 lifecycle.go:445] Waiting 600 seconds for pods owned by deployment "core/test-dc-1" to become ready (checking every 1 seconds; 0 pods previously accepted)

rhcarvalho commented:

@jhellar could you find a way to reproduce this?

rhcarvalho commented:

Talking with @jhellar on IRC, he mentioned that oc logs dc/test-dc works before the pod reaches the Running state.

After the deployment finishes, the same command stops working, and the error asks us to choose a container:

$ oc logs dc/test-dc
Error from server: a container name must be specified for pod test-dc-1-t63xy,
choose one of: [busybox alpine]

But even with the container specified, the same error comes back:

$ oc logs -c alpine dc/test-dc
Error from server: a container name must be specified for pod test-dc-1-t63xy, choose one of: [busybox alpine]

And since we want the deployment logs, it shouldn't ask for a container.


I've updated my comment above with steps to reproduce the issue.
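In the meantime, if the goal is the deployer logs themselves, one possible workaround is to read the deployer pod directly. This assumes the default <dcname>-<deployment-number>-deploy pod naming, which matches the deployer output jhellar pasted above:

$ oc get pods
$ oc logs test-dc-1-deploy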

rhcarvalho changed the title from "Unable to specify container for deploymentconfigs" to "Unable to get deployment logs for multi container pods" Aug 12, 2016
rhcarvalho commented:

I've updated the title to better reflect the issue.

mfojtik commented Aug 12, 2016

I'm looking at this

mfojtik commented Aug 12, 2016

And I think I found the problem ;-)

mfojtik commented Aug 24, 2016

Reopening because the change was reverted (it breaks API backward compatibility for the UI).

mfojtik added this to the 1.3.1 milestone Aug 24, 2016
liggitt modified the milestones: 1.3.1, 1.5.0 Nov 30, 2016
smarterclayton modified the milestones: 1.5.0, 1.6.0 Mar 12, 2017
taisph commented Jul 21, 2017

Does this issue affect pod replicas as well? I am only seeing one pod's log output using oc logs -f dc/app.
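(Not answered in this thread, but if oc logs -f dc/app only surfaces a single pod, a rough sketch for reading every replica's logs is to loop over the pods yourself. The name=app label selector below is an assumption about how the dc labels its pods; adjust it to match oc describe dc/app.)

$ for p in $(oc get pods -l name=app -o name); do echo "== $p =="; oc logs "$p"; done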

smarterclayton modified the milestones: 3.6.0, 3.6.x Oct 1, 2017
0xmichalis commented:

@mfojtik fixed now?

openshift-bot commented:

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci-robot added the lifecycle/stale label Apr 25, 2018
openshift-bot commented:

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label May 25, 2018
openshift-bot commented:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
