
Logger can't find logs when using json-file driver #50

Closed
gabrtv opened this issue Apr 9, 2016 · 13 comments · Fixed by deis/fluentd#51

@gabrtv (Member) commented Apr 9, 2016

deis logs returns zero results when using kube-aws, workflow-beta1 and Kubernetes 1.2.

After debugging a bit with @jchauncey, I learned that fluentd looks for log output in /var/log/containers. This clearly works in some environments. However, upon deeper investigation, the json-file log driver appears to write to /var/lib/docker/containers/<id>/*.log, which would explain why logger fails to return results when using kube-aws.

Debug session from worker node provisioned with kube-aws:

# pwd
/var/lib/docker/containers/0c380a5822803f775028d369a8e133564abc99c15d8ba44ebd86d5fecd480f51
# ls *.log
0c380a5822803f775028d369a8e133564abc99c15d8ba44ebd86d5fecd480f51-json.log
# more *.log
{"log":"2016/04/09 20:25:23 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns/config) [15]\n","stream":"stderr","time":"2016-04-09T20:25:23.738565696Z"}
{"log":"2016/04/09 20:25:23 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0]\n","stream":"stderr","time":"2016-04-09T20:25:23.738602805Z"}
{"log":"2016/04/09 20:25:23 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0]\n","stream":"stderr","time":"2016-04-09T20:25:23.738857357Z"}

# docker version
Client:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   8acee1b
 Built:        
 OS/Arch:      linux/amd64

Server:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   8acee1b
 Built:        
 OS/Arch:      linux/amd64
$ kube-aws version
kube-aws version 7ee32080853fa743f697b0ff914cf5fc090da801
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0+5cb86ee", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"not a git tree"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0+coreos.1", GitCommit:"fb71087b07e941d0925d5c25007ff7f0edca868b", GitTreeState:"clean"}

Patch to use json-file log driver in kube-aws (not the default):

  units:
    - name: docker.service
      drop-ins:
        - name: 40-flannel.conf
          content: |
            [Unit]
            Requires=flanneld.service
            After=flanneld.service
        - name: 50-insecure-registry.conf
          content: |
            [Service]
            Environment='DOCKER_OPTS=--insecure-registry="0.0.0.0/0" --log-driver=json-file'
@gabrtv (Member, Author) commented Apr 9, 2016

Evidence of the log path used by the default driver can be found in the integration tests for the Docker daemon. The file path is constructed from the daemon root via:

logPath := filepath.Join(s.d.root, "containers", id, id+"-json.log")
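
A quick cross-check (a minimal sketch, assuming the json-file driver is in use): docker inspect reports the resolved log path per container, which should match the pattern above.

# run on the worker node; the ID is the container from the debug session above
docker inspect --format '{{.LogPath}}' 0c380a5822803f775028d369a8e133564abc99c15d8ba44ebd86d5fecd480f51
# expected shape: /var/lib/docker/containers/<id>/<id>-json.log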

@jchauncey (Member) commented:

This makes me wonder why the official fluentd image for k8s is set to look in /var/log/containers.

@jchauncey (Member) commented:

OK, so I think I've figured this out. The kubelet is responsible for maintaining the container logs: it symlinks all logs in /var/lib/docker/containers into /var/log/containers, and it renames them from container_id-json.log to pod_name-container_id.log.

There must be a case where your kubelet does not do that.
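
A minimal sketch for checking whether the kubelet actually created those symlinks on a node (the entry shown is illustrative; the exact naming pattern can vary by Kubernetes version):

# list whatever the kubelet symlinked into the directory fluentd watches
ls -l /var/log/containers/ | head
# illustrative shape of an entry:
#   some-pod-name_<container-id>.log -> /var/lib/docker/containers/<id>/<id>-json.log
# an empty or missing directory matches the behavior reported in this issue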

jchauncey added the bug label Apr 12, 2016
jchauncey self-assigned this Apr 12, 2016
@chicagozer commented May 2, 2016

I believe I stumbled onto the same issue. I'm running a local Docker instance of Kubernetes as described here: http://kubernetes.io/docs/getting-started-guides/docker/

I also found this, which indicates the intended behavior of the kubelet: https://github.com/kubernetes/kubernetes/blob/dae5ac482861382e18b1e7b2943b1b7f333c6a2a/cluster/addons/fluentd-elasticsearch/fluentd-es-image/td-agent.conf

If I can help at all, let me know.

@jchauncey (Member) commented:

Yeah, our fluentd instances are loosely based on the k8s ones.

@chicagozer commented:

This worked fine for me using kubernetes (1.2.3) in AWS.

KUBERNETES_PROVIDER=aws
kube-up.sh

@jchauncey (Member) commented:

See here for a workaround: coreos/coreos-kubernetes#322 (comment)

@chancez commented May 17, 2016

You'll probably want to go down one more comment and read my response: you'll actually want all of /var/log, since the root of /var/log is where fluentd stores its cursor files.

Ex:

ExecStartPre=/usr/bin/mkdir -p /var/log/containers
Environment="RKT_OPTS=--volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log"
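
A minimal sketch of wiring those two lines into a kubelet.service drop-in on a CoreOS node (the drop-in filename 10-var-log.conf is illustrative; adjust to however your kubelet unit is managed):

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-var-log.conf <<'EOF'
[Service]
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
Environment="RKT_OPTS=--volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log"
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet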

@mboersma (Member) commented:

@jchauncey does this imply changes to deis/logger to fix json-file usage?

@jchauncey (Member) commented:

No. Basically we point people to this issue if they run into this problem, since it shows how to fix it.

@robinmonjo commented:

Versions: Workflow 2.4.2 and Kubernetes 1.3.4

On AWS, deis has the same problem with its logger-fluentd pod as Kubernetes has with its fluentd-elasticsearch pod, which is described here: kubernetes/kubernetes#13313

The pods are configured to find log files in /var/lib/docker/containers, but on AWS kube-up sets the Docker root directory to /mnt/ephemeral/docker. So I fixed this issue this way:

After helmc generate, edit the file ~/.helmc/workspace/charts/workflow-v2.4.2/manifests/deis-logger-fluentd-daemon.yaml and replace it with this:

#helm:generate helm tpl -d $HELM_GENERATE_DIR/tpl/generate_params.toml -o $HELM_GENERATE_DIR/manifests/deis-logger-fluentd-daemon.yaml $HELM_GENERATE_FILE
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: deis-logger-fluentd
  namespace: deis
  labels:
    heritage: deis
  annotations:
    component.deis.io/version: v2.2.0
spec:
  selector:
    matchLabels:
      app: deis-logger-fluentd
      heritage: deis
  template:
    metadata:
      name: deis-logger-fluentd
      labels:
        heritage: deis
        app: deis-logger-fluentd
    spec:
      serviceAccount: deis-logger-fluentd
      containers:
      - name: deis-logger-fluentd
        image: quay.io/deis/fluentd:v2.2.0
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /mnt/ephemeral/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /mnt/ephemeral/docker/containers

I just changed the mount path so it matches the host FS hierarchy.
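
A minimal sketch of rolling the edited manifest out (assuming kubectl access to the cluster; with the extensions/v1beta1 DaemonSet above, existing pods are not updated in place, so they need to be deleted and recreated):

kubectl apply -f ~/.helmc/workspace/charts/workflow-v2.4.2/manifests/deis-logger-fluentd-daemon.yaml
# force the DaemonSet to recreate its pods with the new hostPath mount
kubectl --namespace=deis delete pods -l app=deis-logger-fluentd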

@one000mph commented:

+1 for this solution. I followed @chancez's instructions by updating kubelet.service on all nodes and restarting the fluentd pods. Logging is now working.
