Can't open pod logs (probably a problem with IDs) #2640
cc @yifan-gu, @woodbor
I don't know why it was opening [...]

Don't know; after I manually removed it, I still can't see logs.
@pskrzyns Does the target of the log link exist? Do you know if rkt was run with `--interactive`?
@iaguis yes, the target existed:

```
lrwxrwxrwx. 1 root systemd-journal 121 May 16 21:38 fc032abf76224f18b732dc73f943c036 -> /var/lib/rkt/pods/run/fc032abf-7622-4f18-b732-dc73f943c036/stage1/rootfs/var/log/journal/fc032abf76224f18b732dc73f943c036
```

99% it wasn't run with `--interactive`, because kubelet was creating the pod; I will check in a few minutes.
It's not using `--interactive`:

```
# /opt/bin/rkt cat-manifest a68e9b84
{"acVersion":"0.7.4+git","acKind":"PodManifest","apps":[{"name":"nginx","image":{"id":"sha512-d1a304bb63e81a43a04e676bb531c9cea239a2dbaa5ec6a2708f87a8e65ab709"},"app":{"exec":["nginx","-g","daemon off;"],"user":"0","group":"0","environment":[{"name":"NGINX_VERSION","value":"1.9.15-1~jessie"},{"name":"KUBERNETES_SERVICE_PORT_HTTPS","value":"443"},{"name":"KUBERNETES_PORT_443_TCP_PROTO","value":"tcp"},{"name":"KUBERNETES_PORT_443_TCP_ADDR","value":"10.2.0.1"},{"name":"KUBERNETES_SERVICE_PORT","value":"443"},{"name":"PATH","value":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"},{"name":"KUBERNETES_PORT","value":"tcp://10.2.0.1:443"},{"name":"KUBERNETES_PORT_443_TCP","value":"tcp://10.2.0.1:443"},{"name":"KUBERNETES_PORT_443_TCP_PORT","value":"443"},{"name":"KUBERNETES_SERVICE_HOST","value":"10.2.0.1"}],"mountPoints":[{"name":"termination-message-675a5b8e-1c46-11e6-a624-da5cfcca6b40","path":"/dev/termination-log"},{"name":"default-token-05w6t","path":"/var/run/secrets/kubernetes.io/serviceaccount","readOnly":true}],"ports":[{"name":"443-tcp","protocol":"tcp","port":443,"count":1,"socketActivated":false},{"name":"80-tcp","protocol":"tcp","port":80,"count":1,"socketActivated":false},{"name":"nginx-tcp-80","protocol":"TCP","port":80,"count":1,"socketActivated":false}]},"annotations":[{"name":"rkt.kubernetes.io/container-hash","value":"3678416530"},{"name":"rkt.kubernetes.io/termination-message-path","value":"/var/lib/kubelet/pods/6a20eb39-1c46-11e6-806f-001e6776a094/containers/nginx/675a5b8e-1c46-11e6-a624-da5cfcca6b40"}]}],"volumes":[{"name":"termination-message-675a5b8e-1c46-11e6-a624-da5cfcca6b40","kind":"host","source":"/var/lib/kubelet/pods/6a20eb39-1c46-11e6-806f-001e6776a094/containers/nginx/675a5b8e-1c46-11e6-a624-da5cfcca6b40"},{"name":"default-token-05w6t","kind":"host","source":"/var/lib/kubelet/pods/6a20eb39-1c46-11e6-806f-001e6776a094/volumes/kubernetes.io~secret/default-token-05w6t"}],"isolators":null,"annotations":[{"name":"rkt.kubernetes.io/managed-by-kubelet","value":"true"},{"name":"rkt.kubernetes.io/uid","value":"6a20eb39-1c46-11e6-806f-001e6776a094"},{"name":"rkt.kubernetes.io/name","value":"nginx"},{"name":"rkt.kubernetes.io/namespace","value":"default"},{"name":"rkt.kubernetes.io/restart-count","value":"0"}],"ports":null}
```

```
# /opt/bin/rkt status a68e9b84
state=running
created=2016-05-17 17:45:40 +0200 CEST
started=2016-05-17 17:45:40 +0200 CEST
networks=rkt.kubernetes.io:ip4=10.1.38.3, default-restricted:ip4=172.16.28.3
pid=44029
exited=false
```

```
# machinectl
MACHINE                                  CLASS     SERVICE
rkt-a68e9b84-70a4-4ffb-8f9a-9bed58ed833a container rkt

1 machines listed.
```

```
# ls -l /var/log/journal/
total 24
drwxr-sr-x. 2 root systemd-journal 4096 May 17 17:51 98bd53d93a0443a5b7c5b9cba2737482
lrwxrwxrwx. 1 root systemd-journal 121 May 17 17:45 a68e9b8470a44ffb8f9a9bed58ed833a -> /var/lib/rkt/pods/run/a68e9b84-70a4-4ffb-8f9a-9bed58ed833a/stage1/rootfs/var/log/journal/a68e9b8470a44ffb8f9a9bed58ed833a
drwxr-sr-x. 2 systemd-journal-remote systemd-journal-remote 4096 May 5 21:56 remote
```

```
# ls -l /var/lib/rkt/pods/run/a68e9b84-70a4-4ffb-8f9a-9bed58ed833a/stage1/rootfs/var/log/journal/a68e9b8470a44ffb8f9a9bed58ed833a/
total 8196
-rw-r-----. 1 root root 8388608 May 17 17:50 system.journal
```
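As an aside (a sketch of my own, not from the thread): in the listings above the journal directory name is the pod UUID with the dashes stripped, since journald machine IDs are 32 hex characters with no dashes. A minimal Python illustration of that mapping, using the UUID shown above:

```python
# Illustration only: derive the journald directory name used under
# /var/log/journal from a rkt pod UUID by stripping the dashes.
pod_uuid = "a68e9b84-70a4-4ffb-8f9a-9bed58ed833a"  # from the listing above
machine_id = pod_uuid.replace("-", "")

print(machine_id)  # a68e9b8470a44ffb8f9a9bed58ed833a
assert len(machine_id) == 32

# The symlink target in the listing follows the same pattern:
journal_dir = "/var/lib/rkt/pods/run/%s/stage1/rootfs/var/log/journal/%s" % (
    pod_uuid, machine_id)
print(journal_dir)
```

If the IDs line up like this, pointing `journalctl -D` at that directory should be able to read the pod's journal directly.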
When I run rkt in a systemd unit file, my logs look like: [...]
In the logs you pasted before, you don't have the container process after the [...]

@iaguis could you show me your unit file?
In my case it looks like:

```
Run command:[prepare --quiet --pod-manifest /tmp/manifest-nginx-825906943 --stage1-path /opt/rkt/stage1-coreos.aci]
```

```
# /run/systemd/system/k8s_ec54daa6-379f-4756-9b86-bae152de64c8.service
[Service]
ExecStart=/opt/bin/rkt --insecure-options=image,ondisk --local-config=/etc/rkt --system-config=/usr/lib/rkt --dir=/var/lib/rkt run-prepared --net=rkt.kubernetes.io --dns=10.2.0.10 --dns-search=default.svc.cluster.local --dns-search=svc.cluster.local --dns-search=cluster.local --dns-opt=ndots:5 --hostname=nginx ec54daa6-379f-4756-9b86-bae152de64c8
ExecStopPost=/usr/bin/touch /var/lib/kubelet/pods/1cf4cbbb-1daf-11e6-a069-001e6776a094/finished-ec54daa6-379f-4756-9b86-bae152de64c8
KillMode=mixed
```
For etcd I have logs similar to yours:

```
# journalctl -u k8s_3cf80863-e690-4b78-b06e-27b67196db09.service
-- Logs begin at Tue 2016-05-17 04:50:06 CEST, end at Thu 2016-05-19 13:04:50 CEST. --
May 19 13:02:27 localhost systemd[1]: Stopped k8s_3cf80863-e690-4b78-b06e-27b67196db09.service.
May 19 13:02:27 localhost systemd[1]: Started k8s_3cf80863-e690-4b78-b06e-27b67196db09.service.
May 19 13:02:27 localhost rkt[39834]: networking: loading networks from /etc/rkt/net.d
May 19 13:02:27 localhost rkt[39834]: networking: loading network rkt.kubernetes.io with type flannel
May 19 13:02:27 localhost rkt[39834]: networking: loading network default-restricted with type ptp
May 19 13:02:27 localhost rkt[39834]: [259175.378021] etcd[5]: 2016/05/19 11:02:27 etcd: no data-dir provided, using default
May 19 13:02:27 localhost rkt[39834]: [259175.378262] etcd[5]: 2016/05/19 11:02:27 etcd: listening for peers on http://loc
```

and I can see the etcd logs:

```
# kubectl logs etcd
2016/05/19 11:02:27 etcd: no data-dir provided, using default data-dir ./default.etcd
2016/05/19 11:02:27 etcd: listening for peers on http://localhost:2380
2016/05/19 11:02:27 etcd: listening for peers on http://localhost:7001
2016/05/19 11:02:27 etcd: listening for client requests on http://localhost:2379
2016/05/19 11:02:27 etcd: listening for client requests on http://localhost:4001
2016/05/19 11:02:27 etcdserver: datadir is valid for the 2.0.1 format
```
The issue is basically https://bugzilla.redhat.com/show_bug.cgi?id=1212756. The nginx image writes its logs to `/dev/stdout` and `/dev/stderr`. However, apps that write directly to [...]. Since Docker images expect being able to write to [...]. Unfortunately, that also means the output doesn't go to the journal, which is the behavior we're seeing here. Related: #1617
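To make the failure mode from that bug report concrete (my own sketch, not from the thread): under systemd, a service's stdout is a journald stream socket, and Linux refuses to `open(2)` a socket inode, returning ENXIO. That is the `open() "/dev/stdout" failed (6: No such device or address)` error nginx reports in the linked bug. A small Python reproduction (Linux-specific, since it goes through `/proc`):

```python
import errno
import os
import socket

# Simulate what happens when a process's stdout is an AF_UNIX stream
# socket (as journald provides) and an app like nginx tries to re-open
# it via /dev/stdout -> /proc/self/fd/1: open(2) on a socket fails
# with ENXIO ("No such device or address").
a, b = socket.socketpair()
try:
    os.open("/proc/self/fd/%d" % a.fileno(), os.O_WRONLY)
    print("unexpectedly succeeded")
except OSError as e:
    print(errno.errorcode[e.errno])  # ENXIO
```

Apps that inherit the already-open fd (like etcd writing straight to stderr) are unaffected, which matches the etcd logs working above.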
Setting these values in [...]
Implementing an `access_log stdout` option in nginx and getting the Docker image to use it would be my preferred option (#1617 (comment)). I don't know how to fix this otherwise. Alternatively, if we ever add an additional process for managing stdout/stderr as per #1799 and systemd/systemd#2069, that could fix this at the same time.
@yifan-gu: do we have another method for log gathering in kubernetes other than the one currently implemented?
@pskrzyns I don't think so, unless users redirect the log to a volume by themselves.
@iaguis can't we fix up the nginx base image to do this? And we should write a doc explaining this behavior.
Let's try to figure out this for the next release. |
@iaguis: I can update the docs, but what is the direction according to this: [...]
@woodbor I think documenting logging behavior is a good start. Same for #2417. We have some not-a-bug behaviors which work as intended but could surprise users. As they are related, can you please tackle the two tickets together?
I don't think there's a way to fix the image to keep the current Docker behavior and make it work on rkt unless an `access_log stdout` option is implemented in upstream nginx.
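For completeness, one workaround worth noting (my suggestion, not something confirmed in this thread): nginx has supported logging via the syslog protocol since 1.7.1, which avoids re-opening `/dev/stdout` entirely. Whether `/dev/log` inside the pod actually reaches the journal depends on the stage1 setup, so treat this as an unverified sketch:

```nginx
# Hypothetical nginx.conf fragment: send logs to syslog instead of
# writing to /dev/stdout. Only helps if /dev/log inside the container
# is connected to something (e.g. journald) -- not verified here.
error_log  syslog:server=unix:/dev/log warn;
access_log syslog:server=unix:/dev/log;
```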
@woodbor what's the status of this PR? I'm bumping the milestone.

@tmrts please assign it to me.

Moving to v1+, please reassess.
When running kubernetes with rkt I can't get logs via `kubectl logs`.
After some debugging I found:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/rkt/log.go
Environment
What did you do?
What did you expect to see?
logs from the pod, including: [...]
What did you see instead?
```
An error was encountered while opening journal file /var/log/journal/271b753befe94448bdd8f18d393acc45, ignoring file.
-- No entries --
```
Additional info:
It looks like the container ID gets mangled somewhere.
Additionally, `rkt gc` doesn't clean up the logs of these nonexistent containers.