Echoing to /dev/stdout does not appear in 'docker logs' #19616
Comments
If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead.

If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information. For more information about reporting issues, see CONTRIBUTING.md. You don't have to include this information if this is a feature request.

(This is an automated, informational response.)

BUG REPORT INFORMATION

Use the commands below to provide key information from your environment:
Provide additional environment details (AWS, VirtualBox, physical, etc.):
List the steps to reproduce the issue:
Describe the results you received:
Describe the results you expected:
Provide additional info you think is important:

----------END REPORT ---------

#ENEEDMOREINFO
Can you provide an example?
Just added the required info :)
This is because |
I moved the symlink creation to daemon.sh and it still fails to produce anything in 'docker logs'. However, if I remove the symlink so it just writes to the /var/log/test.log file, it writes just fine. Why does this work in the nginx Dockerfile? https://github.com/nginxinc/docker-nginx/blob/a8b6da8425c4a41a5dedb1fb52e429232a55ad41/Dockerfile
FYI: I also just tried to use FROM nginx (including installing cron in the image) and still no logs show from the cron job. Can I suggest this is re-opened, please?
I think the main difference with the nginx image is that it only has one main process in the foreground, while in your case you have daemon.sh but also cron jobs running in the background. For communication between them and daemon.sh, a named pipe might be better suited, e.g. something like this at the beginning of daemon.sh:
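A minimal runnable sketch of that idea (the pipe path and message are assumptions; in a real daemon.sh the reader would stay open for the container's lifetime):

```shell
#!/bin/sh
# Hypothetical sketch: a named pipe at the top of daemon.sh that background
# jobs (e.g. cron) can write to; daemon.sh forwards it to its own stdout,
# which is what `docker logs` records when daemon.sh is PID 1.
LOGPIPE=/tmp/logpipe            # path is an assumption
rm -f "$LOGPIPE"
mkfifo "$LOGPIPE"

# Forward the pipe to this script's stdout in the background. In a real
# daemon.sh you would keep the pipe open forever (e.g. `cat <> "$LOGPIPE" &`);
# a single read is enough for this demonstration.
cat "$LOGPIPE" &
READER=$!

# A background job writing to the pipe now shows up in the container log:
echo "hello from a background job" > "$LOGPIPE"
wait "$READER"
```

A cron job would then log with e.g. `echo done > /tmp/logpipe` instead of writing to /dev/stdout.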
The reason this doesn't work is that /dev/stdout refers to the writing process's own stdout, not to the container's log stream. What you want is to redirect output to the STDOUT of PID 1. PID 1 is the process launched by Docker, and its STDOUT is what Docker picks up.
you'll see it works just fine. Note, however, that this will only work if you don't launch your container with
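For illustration, a cron entry (the job itself is made up here) can bypass /dev/stdout and write straight to PID 1's streams:

```
# Hypothetical /etc/cron.d entry: write to the stdout/stderr of PID 1 —
# the streams `docker logs` actually records — instead of /dev/stdout,
# which resolves to the cron process's own stdout.
* * * * * root date > /proc/1/fd/1 2>/proc/1/fd/2
```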
Ok, I must be missing something, as my /dev/stdout is a symlink to /proc/1/fd/1 by default... That's what comes vanilla with the Ubuntu and Debian containers. If it didn't already have the symlink I'd get what you mean, but it does :) I believe I have already tried this, but I will confirm when I have access later.
Ok... agreed, I'm seeing it now. However, I'm now getting permission errors when writing to /proc/1/fd/1.
The modified Dockerfile to reflect the changes (aka what I have been taught! Appreciate it, btw):

Dockerfile
This might have something to do with AppArmor; I'm getting these logs:
I guess there's no need to call
Edit: Only if a TTY is available (-t); see also: #6880
@elifarley: thanks for the tip, works for me too! For the record, in my case

After looking at the source code here, I thought the reason it fails is that stdout is not seekable, so I ended up using a fifo (as there is an exception for

Just to get to the bottom of this... Does it mean that
Wouldn't a PHP-FPM process's stdout be where it writes the output from whatever script it runs? Therefore there is no stdout as you expect there to be, only stderr; that's why you can write to /2 but not to /1. Isn't that how it works?
Hi,
quite a nice topic, thx
This change starts Horovod worker processes with their output streams redirected to the output streams of the main process of the container they are in, so that logs do not go through agent zero and are available through standard Docker logging.

There were several ways to do the redirection: redirect in `worker_process.py` (either with contextlib or by dup'ing the file descriptors), do the redirection in a shell script, or do it in a Python script (again by dup'ing file descriptors). A Python script was determined to be the simplest: it avoids `contextlib.redirect_stdout` not being respected, the packaging woes of a shell script, and the trickiness of dup'ing file descriptors in `worker_process.py` (anything logging before the swap, such as imports, could produce misleading logs). See moby/moby#19616 for more on what informed the choices made here.
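One of the alternatives mentioned above — doing the redirection in a shell launcher before starting the worker — can be sketched as follows. This is a hedged illustration, not the actual change: a plain file stands in for /proc/1/fd/1 so the sketch runs outside a container, and the worker command is made up.

```shell
#!/bin/sh
# Hypothetical launcher sketch: re-point this process's streams at the
# container main process's streams, then run the real worker so all of
# its output flows to `docker logs` directly.
TARGET=/tmp/pid1-stdout   # inside a container this would be /proc/1/fd/1
: > "$TARGET"

# `exec >file` dup2's this process's fd 1 onto the target; everything this
# process (and its children) prints from now on lands on that stream.
exec > "$TARGET" 2>&1

echo "worker output now lands in the container log"
```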
Did you manage to sort out the permissions issue?
When symlinking a log file in /var/log/ to /dev/stdout, the logs do not appear in 'docker logs'.
I have also tried simply echoing into /dev/stdout, but it is just echoed back to the terminal.
Hosted with DigitalOcean, pretty vanilla ubuntu image.
Reproduction steps:
Dockerfile:
daemon.sh
Run with: docker build -t test . ; docker run -d --name=test1 test
Results received: Only logs from the daemon.sh 'keepalive' script are shown in 'docker logs'.
Results expected: Logs from the cron job should also show in 'docker logs'.
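The symptom described above can be demonstrated outside Docker. This sketch (file paths are assumptions) shows that /dev/stdout resolves per-process, so a background process writing to it writes to its own stdout, not to the stream `docker logs` watches:

```shell
#!/bin/sh
# /dev/stdout is a per-process alias (it points at /proc/self/fd/1), so a
# subshell whose stdout has been redirected writes to ITS redirection
# target — here a file — not to the parent's terminal or container stream.
( echo "from a background process" > /dev/stdout ) > /tmp/bg-out

# The message ended up in the subshell's own stdout target:
cat /tmp/bg-out
```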
For ref, I originally opened a stackexchange question here: http://stackoverflow.com/questions/34950465/logging-from-multiprocess-docker-containers