Could not find logs for "app name" #106

Closed
robinmonjo opened this issue Aug 26, 2016 · 31 comments

@robinmonjo

Hello,

I deployed Deis Workflow (v2.4) on a Kubernetes cluster and deployed one app successfully (via buildpacks). Running deis logs outputs:

Error: There are currently no log messages. Please check the following things:
1) Logger and fluentd pods are running: kubectl --namespace=deis get pods.
2) The application is writing logs to the logger component by checking that an entry in the ring buffer was created: kubectl --namespace=deis logs <logger pod>
3) Making sure that the container logs were mounted properly into the fluentd pod: kubectl --namespace=deis exec <fluentd pod> ls /var/log/containers
3a) If the above command returns saying /var/log/containers cannot be found then please see the following github issue for a workaround: https://github.com/deis/logger/issues/50

My logger and fluentd pods are running:

deis-logger-fluentd-ls7fa                1/1       Running   0          3d
deis-logger-fluentd-s6a9z                1/1       Running   1          3d
deis-logger-j6hgu                        1/1       Running   3          3d

However:

$> kubectl --namespace=deis logs deis-logger-j6hgu
2016/08/26 13:17:26 Could not find logs for 'my_app'
2016/08/26 14:36:28 Could not find logs for 'my_app'
2016/08/26 14:41:12 Could not find logs for 'my_app'

Fluentd pods have my app's logs mounted in /var/log/containers:

[...]
my_app-web-2660195288-k56n7_my_app_POD-d1a77aaeb2b8aa6006048dccce7f3f27fb73f74ed04f1d0c27f67374a510c038.log
my_app-web-2660195288-k56n7_my_app_my_app-web-ea29c4018bf861324fb23ffd81c2f15b358951c739f90319079e34d9dee8036f.log
[...]

How can I debug further?

Regards

@jchauncey
Member

What do the pod logs say?

kubectl get pods --all-namespaces
kubectl logs <pod name> --namespace=<app name>

@robinmonjo
Author

They say a lot (it's a Rails app). When I access the logs via kubectl logs, I see the logs I would expect to see in deis logs.

@jchauncey jchauncey self-assigned this Aug 26, 2016
@jchauncey jchauncey added this to the v2.5 milestone Aug 26, 2016
@jchauncey
Member

Ok, I just wanted to make sure we are getting data from Kubernetes.

Could you check out the graphs at grafana.mydomain.com and look at the NSQ Health dashboard? You should see it reporting data on the logs topic. See if it gives you any information regarding the queue depth.
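If grafana.mydomain.com isn't routable, a port-forward sketch like the following should also get you to the dashboard (the grep pattern is an assumption, and 3000 is Grafana's default port; adjust to whatever kubectl get pods and describe show for your cluster):

GRAFANA_POD=$(kubectl --namespace=deis get pods | grep grafana | awk '{print $1}')
kubectl --namespace=deis port-forward "$GRAFANA_POD" 3000:3000
# then browse to http://localhost:3000 and open the NSQ Health dashboard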

@robinmonjo
Author

Got only flat lines:

[two screenshots: NSQ Health dashboard graphs, all flat lines]

@jchauncey
Member

That means that fluentd isn't processing any log entries. Let me see if I can come up with another check.
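One way to look at the queue directly, if it helps (the pod name is a placeholder, 4151 is nsqd's default HTTP port, and "logs" is my assumption for the topic name):

kubectl --namespace=deis port-forward <nsqd pod> 4151:4151
# in another shell:
curl -s http://localhost:4151/stats
# a "logs" topic whose depth and message counts stay at 0 means nothing is arriving from fluentd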

@robinmonjo
Author

I ran a bash prompt in a fluentd container and saw the Dockerfile at the root of the container file system.

There is this line in the Dockerfile: CMD ["/opt/fluentd/sbin/boot"], so I ran it manually in my bash session and got:

[booting logs ...]
2016-08-26 16:20:14 +0000 [info]: using configuration file: <ROOT>
  <source>
    @type tail
    path "/var/log/containers/*.log"
    pos_file "/var/log/containers.log.pos"
    time_format "%Y-%m-%dT%H:%M:%S.%NZ"
    tag "kubernetes.*"
    format json
    read_from_head false
  </source>
  <source>
    @type tail
    format syslog
    path "/var/log/startupscript.log"
    pos_file "/var/log/startupscript.log.pos"
    tag "startupscript"
  </source>
[... more <source> blocks for various other logs ...]

[many warnings]
2016-08-26 16:20:14 +0000 [warn]: /var/log/containers/my_app-web-2660195288-k56n7_my_app_my_app-web-ea29c4018bf861324fb23ffd81c2f15b358951c739f90319079e34d9dee8036f.log unreadable. It is excluded and would be examined next time.
[many warnings]

I isolated the warning for one of my app's processes, but basically I get this error for every pod running on my k8s cluster:

[warn]: /var/log/containers/my_app-web-2660195288-k56n7_my_app_my_app-web-ea29c4018bf861324fb23ffd81c2f15b358951c739f90319079e34d9dee8036f.log unreadable. It is excluded and would be examined next time.
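One way to check whether those "unreadable" files are dangling symlinks from inside the fluentd container (the pod name is a placeholder):

kubectl --namespace=deis exec <fluentd pod> -- sh -c \
  'for f in /var/log/containers/*.log; do [ -e "$f" ] || echo "dangling symlink: $f"; done'
# [ -e ] follows the symlink, so anything listed here points at a path that is not mounted in the pod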

@jchauncey
Member

I have never seen that error before. This is interesting. Is this on AWS? GKE? Bare metal? How did you provision your cluster?

@robinmonjo
Author

This is on AWS, provisioned with kube-up.sh on a multi-zone cluster (not sure if that is relevant information).

@jchauncey
Member

What does stat say if you run it on one of the log files?

@krancour
Contributor

@jchauncey could this be related to #50?

I've been burned by that a few times before, albeit that was with kube-aws and not kube-up.sh.

@jchauncey
Member

Potentially, but he does see the files in the container, unlike #50 where the files don't even show up.

@jchauncey jchauncey reopened this Aug 26, 2016
@robinmonjo
Author

robinmonjo commented Aug 26, 2016

Ok so:

$> stat /var/log/containers/my_app-web-2660195288-k56n7_my_app_my_app-web-ea29c4018bf861324fb23ffd81c2f15b358951c739f90319079e34d9dee8036f.log

File: '/var/log/containers/my_app-web-2660195288-k56n7_my_app_POD-d1a77aaeb2b8aa6006048dccce7f3f27fb73f74ed04f1d0c27f67374a510c038.log' -> '/mnt/ephemeral/docker/containers/d1a77aaeb2b8aa6006048dccce7f3f27fb73f74ed04f1d0c27f67374a510c038/d1a77aaeb2b8aa6006048dccce7f3f27fb73f74ed04f1d0c27f67374a510c038-json.log'
  Size: 171         Blocks: 8          IO Block: 4096   symbolic link
Device: ca01h/51713d    Inode: 394575      Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2016-08-26 14:33:03.301616184 +0000
Modify: 2016-08-26 14:32:40.458300745 +0000
Change: 2016-08-26 14:32:40.458300745 +0000
 Birth: -

So this is a symlink and if I stat the target:

$> stat /mnt/ephemeral/docker/containers/d1a77aaeb2b8aa6006048dccce7f3f27fb73f74ed04f1d0c27f67374a510c038/d1a77aaeb2b8aa6006048dccce7f3f27fb73f74ed04f1d0c27f67374a510c038-json.log

/mnt/ephemeral/docker/containers/d1a77aaeb2b8aa6006048dccce7f3f27fb73f74ed04f1d0c27f67374a510c038/d1a77aaeb2b8aa6006048dccce7f3f27fb73f74ed04f1d0c27f67374a510c038-json.log: No such file or directory

So it doesn't exist; I have nothing in /mnt.

kubectl --namespace=deis describe pod deis-logger-fluentd-ls7fa
Name:       deis-logger-fluentd-ls7fa
Namespace:  deis
Node:       ip-172-20-0-31.eu-west-1.compute.internal/172.20.0.31
Start Time: Mon, 22 Aug 2016 21:44:00 +0200
Labels:     app=deis-logger-fluentd
        heritage=deis
Status:     Running
IP:     10.244.0.22
Controllers:    DaemonSet/deis-logger-fluentd
Containers:
  deis-logger-fluentd:
    Container ID:       docker://bd5cb9cc6f759937d180f872b277eb14352234bd6aa01b7a71ef30284d581a4e
    Image:          quay.io/deis/fluentd:v2.2.0
    Image ID:           docker://sha256:53b2cb0263be300c0a9c37d5eb4a3b429ec9fcb3c856a760053a7e2d72c845dd
    Port:           
    State:          Running
      Started:          Mon, 22 Aug 2016 21:44:01 +0200
    Ready:          True
    Restart Count:      0
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True 
  Ready     True 
  PodScheduled  True 
Volumes:
  varlog:
    Type:   HostPath (bare host directory volume)
    Path:   /var/log
  varlibdockercontainers:
    Type:   HostPath (bare host directory volume)
    Path:   /var/lib/docker/containers
  deis-logger-fluentd-token-ebhjp:
    Type:   Secret (a volume populated by a Secret)
    SecretName: deis-logger-fluentd-token-ebhjp
QoS Tier:   BestEffort
No events.

@jchauncey
Member

Alright, so this does look like it is similar to #50.

@robinmonjo
Author

Ok, I read (quickly) about this "famous" issue, but thought it was not my issue, since the message from deis logs says:

Error: There are currently no log messages. Please check the following things:
1) Logger and fluentd pods are running: kubectl --namespace=deis get pods.
2) The application is writing logs to the logger component by checking that an entry in the ring buffer was created: kubectl --namespace=deis logs <logger pod>
3) Making sure that the container logs were mounted properly into the fluentd pod: kubectl --namespace=deis exec <fluentd pod> ls /var/log/containers
3a) If the above command returns saying /var/log/containers cannot be found then please see the following github issue for a workaround: https://github.com/deis/logger/issues/50

and I had an error at step 2). Anyway, thanks for the help, I will follow the workaround 👍
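For reference, the general shape of that fix (not necessarily the exact steps in #50) is to make the symlink targets resolvable inside the fluentd pod, e.g. by also mounting the real Docker container directory. A rough sketch, assuming the /mnt/ephemeral/docker path from the stat output above (verify the real path with docker info | grep "Docker Root Dir" on a node first):

kubectl --namespace=deis edit daemonset deis-logger-fluentd
# add under spec.template.spec.volumes:
#   - name: ephemeraldockercontainers
#     hostPath:
#       path: /mnt/ephemeral/docker/containers
# and under the container's volumeMounts:
#   - name: ephemeraldockercontainers
#     mountPath: /mnt/ephemeral/docker/containers
#     readOnly: true
# then delete the fluentd pods so the DaemonSet recreates them with the new mount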

@jchauncey
Member

=)

@olalonde

Had a similar problem, but fixed it by simply deleting one of my deis-logger-fluentd pods.
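In case it helps anyone else, that recovery is just (pod name is an example from earlier in this thread):

kubectl --namespace=deis delete pod deis-logger-fluentd-ls7fa
# the deis-logger-fluentd DaemonSet recreates the pod automatically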

@rukor

rukor commented Jan 1, 2017

Hi, I am encountering this issue. I set up my cluster using kubeadm and haven't made any changes to the Kubernetes setup. How do I apply this fix? I see that /var/lib/docker/containers has content in it, but /var/log/containers has nothing in it, and deis-logger continuously outputs 'Could not find logs for <appname>'.

Thanks

@olalonde

olalonde commented Jan 2, 2017

@rukor if you are having this problem only for some apps, maybe check your deis-logger-fluentd pod logs in Kubernetes to see if any of them are having issues.

@rukor

rukor commented Jan 2, 2017 via email

@olalonde

olalonde commented Jan 2, 2017

@rukor any chance this might fix your issue: deis/fluentd#51? Otherwise, I have no idea... You might have better luck asking on the Deis community Slack.

@rukor

rukor commented Jan 7, 2017

@olalonde Thanks. I have no idea how to apply that. I am running Deis 2.9.1, and since the linked item is a merged PR, I assume its contents are already included in what I am currently running.

@bacongobbler
Member

Yep, that fix has been in since v2.6.

I think the first thing we need to do is check the logs of all your fluentd and logger pods. Can you post a gist of your log output?

https://deis.com/docs/workflow/troubleshooting/kubectl/
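A quick sketch for grabbing them all at once (the app=deis-logger-fluentd label is taken from the pod description earlier in this thread; the logger pod name is a placeholder):

for p in $(kubectl --namespace=deis get pods -l app=deis-logger-fluentd \
           -o jsonpath='{.items[*].metadata.name}'); do
  echo "=== $p ==="
  kubectl --namespace=deis logs "$p"
done
kubectl --namespace=deis logs <logger pod>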

@rukor

rukor commented Jan 7, 2017

The logs turn out empty now; I think perhaps Kubernetes has recycled them. I'll delete the pods and post the logs when they restart.

@rukor

rukor commented Jan 7, 2017

@bacongobbler
Member

Check and see what's in /var/log/containers in the fluentd instances

@rukor

rukor commented Jan 7, 2017

Thanks. I checked that and I see nothing.

roland@devops:~$ kd exec deis-logger-fluentd-v666h -- ls -al /var/log/containers
total 4
drwxr-xr-x.  2 root root    6 Dec 31 12:39 .
drwxr-xr-x. 10 root root 4096 Jan  1 03:46 ..

Pretty much the same for all instances. I can see some stuff here though:

roland@devops:~$ kd exec deis-logger-fluentd-v666h -- ls -al /var/lib/docker/containers
total 4
drwx------. 40 root root 4096 Jan  7 21:34 .
drwxr-xr-x.  3 root root   24 Jan  7 21:34 ..
drwx------.  4 root root  149 Dec 31 18:10 03be8b8b2e25ac2237a2148fb117e7078dfeb797a9fc416dfed7daa77ee65a6a
drwx------.  3 root root   66 Dec 31 14:17 06ff485fc93f0a3dae42ab95ecff495989eca804136eac297d69d8fc8e405b5c
drwx------.  3 root root   66 Dec 31 18:30 143e7d6fd854317085fd87f0bafdde6bde7b413c9428243565e91b4338bd9804
drwx------.  3 root root   66 Dec 31 14:37 1b297c4427d32f6b0dfa1ae9c9a78e0f327a92276396694cda61fe740011bf7f
drwx------.  3 root root   66 Dec 31 16:35 1db05fc55426c770219ea70bd7f62b143e513b16453ff70eb61b0fe2fecdcd2c
drwx------.  3 root root   66 Dec 31 14:15 229ccb6bdf900a0c73dcb83a74a9ed14810fd44fce96ce53be7d802fdbcbfc41

@rukor

rukor commented Jan 7, 2017

If it helps, I did set up my Kubernetes cluster using kubeadm.

@jchauncey
Member

jchauncey commented Jan 7, 2017 via email

@rukor

rukor commented Jan 8, 2017

@jchauncey Thanks, it actually did fix it. Somehow I had looked at that page before, veered off into the CoreOS bit, and got lost. Adding --log-driver=json-file fixed it for me. Thanks again.
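For anyone else hitting this on a kubeadm-provisioned node, a sketch of that change (the file path and restart command are assumptions for a systemd-based Docker host): kubelet only creates the /var/log/containers symlinks when Docker uses the json-file log driver.

# if /etc/docker/daemon.json already exists, merge this key into it instead of overwriting
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file"
}
EOF
sudo systemctl restart docker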

@pasanchamikara

Hi team, the checklist says: "The application is writing logs to the logger component by checking that an entry in the ring buffer was created: kubectl --namespace=deis logs <logger pod>". What is the keyring mentioned here, and where can we find the entries? @jchauncey

@krancour
Contributor

@pasanchamikara, it's not a keyring. It's a ring buffer. That is a "circular" in-memory data structure where, once full, the oldest entry is overwritten with the newest entry.
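A tiny bash illustration of the idea (not the logger's actual code):

size=3; buf=(); i=0
for entry in one two three four five; do
  buf[$((i % size))]=$entry   # once the buffer is full, the oldest slot is overwritten
  i=$((i + 1))
done
echo "${buf[@]}"              # prints: four five three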

Also, are you aware that this software was sunset 5 years ago and isn't supported?

The state of the art in deploying and hosting cloud native applications has improved by leaps and bounds since then. As a member of the team that built this -- please, don't use this.
