When processing with high memory load is performed, it is forcibly terminated with "shim reaped". #2202
Comments
I tried to reproduce this behavior on my local machine, but couldn't. A full docker daemon log may help.
The above container will allocate 4 GB of memory and wait for the Enter key to release it. Also, for your docker container, you may try
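(Editorial note: the reproduction image itself was not preserved in this thread. As a rough sketch of what such a container could run, assuming the 4 GB figure from the comment above — the function name `allocate` and the argv handling are illustrative, not from the original:)

```python
# Hypothetical reproduction script: allocate a chunk of memory and hold it
# until Enter is pressed, so the container's memory usage stays high.
import sys


def allocate(num_bytes: int) -> bytearray:
    """Allocate num_bytes and touch one byte per 4 KiB page so the
    kernel actually commits the pages (not just reserves address space)."""
    buf = bytearray(num_bytes)
    for i in range(0, num_bytes, 4096):
        buf[i] = 1
    return buf


if __name__ == "__main__":
    # Run as e.g. `python alloc.py 4` to allocate 4 GiB; does nothing without an argument.
    gib = float(sys.argv[1]) if len(sys.argv) > 1 else 0
    if gib:
        buf = allocate(int(gib * 1024 ** 3))
        print("allocated %d bytes; press Enter to release" % len(buf))
        input()
```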
@yohiram In general, I would recommend troubleshooting this in https://github.com/moby/moby. There may be problems in docker contributing to the killing of the process. In general, a "shim reaped" message simply means that the shim process was collected properly after an exit. This doesn't necessarily indicate a defect. Other things you can try include updating to a newer kernel, if possible. If that helps, let us know and we'll see what we can do.
@kunalkushwaha Thank you for your reply. I ran your docker image several times, but it did not seem to reproduce my problem. Thank you for making the tool to debug! Using the "journalctl -u docker.service" log, I also tried the --oom-kill-disable option, but it doesn't work for some reason. However, when the --kernel-memory option is specified, it seems to take longer for the error to appear than in other cases.
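(Editorial note: for readers following along, the memory-related `docker run` flags discussed in this comment can be combined as sketched below; the image name and limit values are placeholders, not from the thread. Note that `--oom-kill-disable` is generally only meaningful together with a `--memory` limit.)

```python
# Sketch: assemble a `docker run` command line with the memory flags
# discussed above. Image name and limits are placeholder values.
from typing import Optional


def build_docker_run(image: str, memory: str,
                     kernel_memory: Optional[str] = None,
                     oom_kill_disable: bool = False) -> list:
    cmd = ["docker", "run", "--rm", "-it", "--memory", memory]
    if kernel_memory:
        cmd += ["--kernel-memory", kernel_memory]
    if oom_kill_disable:
        # Disabling the OOM killer without a --memory limit can hang the host,
        # so this helper only emits it alongside a memory limit.
        cmd.append("--oom-kill-disable")
    cmd.append(image)
    return cmd


print(" ".join(build_docker_run("yourimage", "8g",
                                kernel_memory="1g",
                                oom_kill_disable=True)))
```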
@stevvooe Thank you for your reply.
I'll try to investigate further via moby/moby. Thanks for the pointer!
Even if this doesn't get resolved, it won't cause fatal problems for my work, so the investigation may take some time.
You should be able to use
The logs you are seeing from docker/containerd are standard log messages for when a task is killed, nothing important here. Also, could this be something with GPU memory/resources and nothing to do with system RAM?
Thanks for your advice. I tried it immediately.
In particular, no abnormal messages are displayed. This phenomenon happens even with CPU only...
Have you enabled debug mode in the docker daemon?
Yes, I have enabled the debug mode. |
Sorry for the unilateral decision; I'm closing this issue.
I have to admit, I wouldn't mind seeing this re-opened, as I am having the same problem, with exactly the same output from journalctl. This problem seems intermittent: sometimes the software inside the docker container runs for hours before being forcefully restarted, other times less than half an hour. @yohiram - did you ever find an answer, or any better understanding of what is going on?
I have the same output in journalctl. My JNLP slave for Jenkins dies spontaneously when many builds are created and aborted in a short period of time. And I cannot reproduce this if I disable the network-related config in docker-compose.yml.
I am facing the same issue in a Google Cloud Compute docker environment with high-memory nodejs tasks. Have you guys found a solution?
For those of you who are still encountering this issue, can you please open a new issue with information for us to debug and reproduce what you are seeing?
I am facing the same issue on centos 4.1.0-28.el7, docker version 18.03-ce.
We also encountered a similar issue with 19.03.8-ce on Fedora 31. For fellow subscribers, some issues at moby have the same error message: https://github.com/moby/moby/issues?q=is%3Aissue+is%3Aopen+shim+reaped . Maybe it's worth a look.
In my case, this was caused by a stack overflow in my code.
Hi, all:
Had this same problem. You can try (as jwongz suggested above) disabling transparent hugepages: https://docs.mongodb.com/manual/tutorial/transparent-huge-pages/
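(Editorial note: on Linux the active transparent-hugepage mode is the bracketed entry in `/sys/kernel/mm/transparent_hugepage/enabled`, e.g. `always madvise [never]`. A small sketch for checking it — the helper name `current_thp_mode` is illustrative, not from the thread:)

```python
# Sketch: parse the active transparent-hugepage mode from the sysfs file.
# The kernel marks the active value with brackets, e.g. "always madvise [never]".

def current_thp_mode(sysfs_text: str) -> str:
    for token in sysfs_text.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    raise ValueError("no active mode marked in: %r" % sysfs_text)


if __name__ == "__main__":
    with open("/sys/kernel/mm/transparent_hugepage/enabled") as f:
        print(current_thp_mode(f.read()))
```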
Hello. I'm sorry for my poor English.
I run a Tensorflow program in Docker, and the program is forcibly terminated with "shim reaped".
Processing with a high memory load is executed, but as far as the output of the top command shows, there is considerable memory headroom. My server has 256 GB of memory, and the program uses at most 600 MB - 4 GB during execution. The program runs without problems when not using Docker. I tried almost all of the memory-related docker run options, and I tried the storage drivers devicemapper, overlay, and overlay2.
Could you tell me whether this problem comes from how I am using Docker, or whether it is an already recognized problem that is planned to be fixed?
BUG REPORT INFORMATION
It occurs when executing high-load Tensorflow processing (train()) in Docker.
Describe the results you received:
In console: Error 137
In syslog (using overlay2)
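(Editorial note: exit status 137 follows the shell's 128 + N convention, i.e. signal 9, SIGKILL — what the kernel OOM killer or `docker kill` sends. A quick sketch of decoding it:)

```python
# Sketch: decode a shell-style exit status into either a plain exit code
# or the terminating signal, per the 128 + signal-number convention.
import signal


def decode_exit_status(status: int):
    """Return (exit_code, None) for a normal exit, or (None, signal)
    when the status encodes a fatal signal as 128 + N."""
    if status > 128:
        return None, signal.Signals(status - 128)
    return status, None


code, sig = decode_exit_status(137)
print(sig.name)  # SIGKILL: the process was killed, typically by the OOM killer
```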
Describe the results you expected:
Output of containerd --version:

Output of sudo docker info: