Logger can't find logs when using json-file driver #50
Comments
Evidence of the log path used by the default driver can be found in the integration tests for the Docker daemon: the file path is constructed from the daemon root.
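A quick way to confirm that on a node is to ask Docker directly. This is a sketch; the container ID is a placeholder and the exact root depends on how the daemon was started:

    # print the daemon root (often /var/lib/docker, but it can be relocated)
    docker info | grep 'Docker Root Dir'
    # print the exact json-file log path for one container
    docker inspect --format '{{.LogPath}}' <container-id>
    # typically resolves to <docker-root>/containers/<id>/<id>-json.log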
This makes me wonder why the official fluentd image for k8s is set to read from /var/log/containers.
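On a stock Kubernetes node that path works because the kubelet symlinks each container's json-file log into /var/log/containers, which is what the upstream fluentd addon tails. A sketch of what that looks like when the symlinks are being created (names are placeholders):

    ls -l /var/log/containers/
    # mypod_default_app-<container-id>.log -> /var/lib/docker/containers/<container-id>/<container-id>-json.log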
Ok, so I've figured this out, I think. There must be a case where your …
I believe I stumbled onto the same issue. I'm running a local Docker instance of Kubernetes as described here: http://kubernetes.io/docs/getting-started-guides/docker/. I also found this, indicating the intended behavior of the kubelet: https://github.com/kubernetes/kubernetes/blob/dae5ac482861382e18b1e7b2943b1b7f333c6a2a/cluster/addons/fluentd-elasticsearch/fluentd-es-image/td-agent.conf. If I can help at all, let me know.
Yeah, our fluentd instances are loosely based on the k8s ones.
This worked fine for me using Kubernetes (1.2.3) in AWS with KUBERNETES_PROVIDER=aws.
See here for a workaround: coreos/coreos-kubernetes#322 (comment)
You'll probably want to go down one more comment and read my response. You actually want all of /var/log, since the root of /var/log is where fluentd stores its cursor files. Ex:
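A sketch of why the root of /var/log matters, assuming a config modeled on the upstream td-agent.conf linked above (the file names are illustrative):

    ls /var/log/
    # containers/  es-containers.log.pos  ...
    # fluentd's *.pos cursor files sit at the root of /var/log, next to the containers/ symlink dir,
    # so the DaemonSet should mount all of /var/log from the host, not just /var/log/containers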
@jchauncey are there changes this implies to fix json-file usage for deis/logger?
No; basically we point people to this issue if they run into this problem, since it shows how to fix it.
Versions: Workflow 2.4.2 and Kubernetes 1.3.4. On AWS, Deis has the same problem with its fluentd DaemonSet: the pods are configured to find log files in a path that doesn't match where Docker actually writes them on the host. Here is deis-logger-fluentd-daemon.yaml after the fix:

#helm:generate helm tpl -d $HELM_GENERATE_DIR/tpl/generate_params.toml -o $HELM_GENERATE_DIR/manifests/deis-logger-fluentd-daemon.yaml $HELM_GENERATE_FILE
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: deis-logger-fluentd
  namespace: deis
  labels:
    heritage: deis
  annotations:
    component.deis.io/version: v2.2.0
spec:
  selector:
    matchLabels:
      app: deis-logger-fluentd
      heritage: deis
  template:
    metadata:
      name: deis-logger-fluentd
      labels:
        heritage: deis
        app: deis-logger-fluentd
    spec:
      serviceAccount: deis-logger-fluentd
      containers:
      - name: deis-logger-fluentd
        image: quay.io/deis/fluentd:v2.2.0
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /mnt/ephemeral/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /mnt/ephemeral/docker/containers

I just changed the mount path so it matches the host filesystem hierarchy.
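A quick sanity check for that kind of change, using the paths and labels from the manifest above (the exact output will differ per node):

    # on an AWS worker node: confirm Docker really writes its json logs under the new path
    ls /mnt/ephemeral/docker/containers/*/*-json.log
    # recreate the fluentd pods so the DaemonSet picks up the new mounts, then watch them come back
    kubectl --namespace=deis delete pods -l app=deis-logger-fluentd
    kubectl --namespace=deis get pods -l app=deis-logger-fluentd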
+1 for this solution. I followed @chancez's instructions by updating kubelet.service on all nodes and restarting the fluentd pods. Logging is now working.
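For context, the linked workaround boils down to letting the kubelet see the host's /var/log so the /var/log/containers symlinks it creates actually land on the host. On CoreOS with the kubelet-wrapper, that is usually an extra volume/mount pair in kubelet.service; a sketch only, since the environment variable name varies by kubelet-wrapper version (RKT_OPTS vs. RKT_RUN_ARGS):

    # illustrative excerpt from a kubelet.service drop-in, not the exact patch
    Environment="RKT_OPTS=--volume var-log,kind=host,source=/var/log \
      --mount volume=var-log,target=/var/log"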
deis logs returns zero results when using kube-aws, workflow-beta1 and Kubernetes 1.2. After debugging a bit with @jchauncey, I learned fluentd looks for log output in /var/log/containers. This clearly works in some environments. However, upon deeper investigation, the json-file log driver appears to write to /var/lib/docker/containers/<id>/*.log. This would explain why logger fails to return results when using kube-aws.

Debug session from worker node provisioned with kube-aws:
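As an illustrative sketch of what such a session shows (container IDs are placeholders, paths as described above):

    # where fluentd is looking: empty or missing on this node
    ls /var/log/containers/
    # where the json-file driver is actually writing
    ls /var/lib/docker/containers/*/*-json.log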
Patch to use the json-file log driver in kube-aws (not the default):
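As a rough sketch of what such a patch amounts to, assuming the daemon flag is set directly (how kube-aws wires it into its cloud-config is not covered here, and Docker versions of that era invoke the daemon as docker daemon rather than dockerd):

    # make the Docker daemon use the json-file driver explicitly
    dockerd --log-driver=json-file --log-opt max-size=10m
    # or override per container
    docker run --log-driver=json-file <image>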