
Can't run/start any docker container after update #597

Closed
2 of 3 tasks
ChsRmb opened this issue Feb 18, 2019 · 58 comments

Comments

@ChsRmb

ChsRmb commented Feb 18, 2019

  • This is a bug report
  • This is a feature request
  • I searched existing issues before opening this one

Expected behavior

Any Docker Container should start normally

Actual behavior

No Docker container starts after the system update

Steps to reproduce the behavior

Any of these commands create the same error:

sudo docker run --rm  -it hello-world
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:297: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown.

LOG:

Feb 18 09:42:26 REDACTED systemd-udevd[25568]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 18 09:42:26 REDACTED systemd-udevd[25569]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 18 09:42:26 REDACTED systemd-udevd[25569]: Could not generate persistent MAC address for veth9c1a796: No such file or directory
Feb 18 09:42:26 REDACTED systemd-udevd[25568]: Could not generate persistent MAC address for vethde93f21: No such file or directory
Feb 18 09:42:26 REDACTED containerd[447]: time="2019-02-18T09:42:26.256445966+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2f890ab51cb6473379880ffc47b5740fe0ac91d07af86439ca57f55d273befcc/shim.sock" debug=false pid=25581
Feb 18 09:42:26 REDACTED containerd[447]: time="2019-02-18T09:42:26.627371190+01:00" level=info msg="shim reaped" id=2f890ab51cb6473379880ffc47b5740fe0ac91d07af86439ca57f55d273befcc
Feb 18 09:42:26 REDACTED dockerd[16966]: time="2019-02-18T09:42:26.630886837+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb 18 09:42:26 REDACTED dockerd[16966]: time="2019-02-18T09:42:26.864764053+01:00" level=error msg="2f890ab51cb6473379880ffc47b5740fe0ac91d07af86439ca57f55d273befcc cleanup: failed to delete container from containerd: no such container"
Feb 18 09:42:26 REDACTED dockerd[16966]: time="2019-02-18T09:42:26.873946938+01:00" level=error msg="Handler for POST /v1.39/containers/2f890ab51cb6473379880ffc47b5740fe0ac91d07af86439ca57f55d273befcc/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"

This is an existing container (MongoDB)

sudo docker start 565ba386c40c
Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:297: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown
Error: failed to start containers: 565ba386c40c

LOG:

Feb 18 09:44:01 REDACTED systemd-udevd[25652]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 18 09:44:01 REDACTED systemd-udevd[25652]: Could not generate persistent MAC address for vethd29d69f: No such file or directory
Feb 18 09:44:01 REDACTED systemd-udevd[25651]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 18 09:44:01 REDACTED systemd-udevd[25651]: Could not generate persistent MAC address for vethd000744: No such file or directory
Feb 18 09:44:01 REDACTED containerd[447]: time="2019-02-18T09:44:01.441461185+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/565ba386c40cec5e0c52f72e80ad421ec30f878870311b1557f6e29f46374ad9/shim.sock" debug=false pid=25664
Feb 18 09:44:01 REDACTED containerd[447]: time="2019-02-18T09:44:01.921811124+01:00" level=info msg="shim reaped" id=565ba386c40cec5e0c52f72e80ad421ec30f878870311b1557f6e29f46374ad9
Feb 18 09:44:02 REDACTED dockerd[16966]: time="2019-02-18T09:44:01.949986107+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb 18 09:44:02 REDACTED dockerd[16966]: time="2019-02-18T09:44:01.959013697+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb 18 09:44:02 REDACTED dockerd[16966]: time="2019-02-18T09:44:02.298058055+01:00" level=error msg="565ba386c40cec5e0c52f72e80ad421ec30f878870311b1557f6e29f46374ad9 cleanup: failed to delete container from containerd: no such container"
Feb 18 09:44:02 REDACTED dockerd[16966]: time="2019-02-18T09:44:02.298134909+01:00" level=error msg="Handler for POST /v1.39/containers/565ba386c40c/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"

Output of docker version:

Client:
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        6247962
 Built:             Sun Feb 10 04:13:47 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 03:42:13 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Output of docker info:

Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 7
Server Version: 18.09.2
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 09c8266bf2fcf9519a651b04ae54c967b9ab86ec
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.15.0
Operating System: Ubuntu 18.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 16GiB
Name: REDACTED
ID: SHWZ:SI3O:RZ65:V3BD:3FRV:WEBM:2UG6:CZME:6T2D:XFBG:TAVN:LYI7
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Additional environment details (AWS, VirtualBox, physical, etc.)

@ghandim

ghandim commented Feb 18, 2019

I can confirm this behaviour for

  • Kernel Version: 4.4.0-142-generic
  • Operating System: Ubuntu 16.04.5 LTS

I am not able to start any container after the update. Downgrading to 18.09.1 also does not work :-(
Is this a general Ubuntu LTS problem?

@ghandim

ghandim commented Feb 18, 2019

Workaround that works for me:

  • Downgrade docker-ce: sudo apt install docker-ce=5:18.09.1~3-0~ubuntu-xenial
  • Downgrade containerd.io: sudo apt install containerd.io=1.2.2-1

With that, it seems to work for me on Ubuntu 16.04 LTS...
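To keep apt from pulling the broken versions back in on the next upgrade, holding the packages is one option (a sketch, assuming the standard Ubuntu apt tooling; adjust the package names to whatever is actually installed):

# optional: pin the downgraded packages until a fixed release is out
sudo apt-mark hold docker-ce docker-ce-cli containerd.io

# release the hold later
sudo apt-mark unhold docker-ce docker-ce-cli containerd.io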

@ChsRmb
Author

ChsRmb commented Feb 18, 2019

That sounds great; can you test it on Ubuntu 18.04?

@thaJeztah
Member

Do you have a custom MountFlags option set in your systemd unit? See #485 (comment)
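For reference, a quick way to check for such an override (a sketch, assuming a systemd-based installation):

# show the effective unit file, including any drop-ins under /etc/systemd/system/docker.service.d/
systemctl cat docker.service
# query the property directly
systemctl show docker --property=MountFlags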

@ghandim

ghandim commented Feb 18, 2019

No, only a proxy configuration in /etc/systemd/system/docker.service.d/http-proxy.conf.

@thaJeztah
Member

It's definitely not a general problem with Ubuntu 16.04 or 18.04 packages (I've been able to run the latest packages on those versions without problems)

No direct clue though what could cause this (at a glance, the error looks to originate from runc)

@kevvok

kevvok commented Feb 18, 2019

I'm also seeing this behavior on latest Debian Buster (kernel 4.19.16-1) after updating docker-ce and containerd.io. Rolling back the updates also solved my problem.

@epsilon-jpage

Same here on a brand new instance of RHEL 7.6 in AWS. I installed Docker CE and its requirements and cannot start any container. Rolling back to 18.09.1 fixes it.

@stmaute

stmaute commented Feb 19, 2019

I can confirm what @ghandim suggested (worked for Ubuntu 16.04 LTS.) 👍

@thaJeztah
Member

Could someone post the output of runc --version after rolling back? Trying to narrow down where the problem is (in dockerd, containerd, or runc); if runc itself was rolled back, that means you'd no longer have the patch for CVE-2019-5736.

For RHEL 7.5/7.6, I'm wondering if that relates to opencontainers/runc#1988 (which looks to be a kernel bug in the RHEL kernels).

@thaJeztah
Member

The error

read init-p: connection reset by peer "": unknown

was mentioned in moby/moby#34776 ("Can´t specify memory limit in docker run for docker version 17.07.0-ce, build 8784753") and in two runc tickets: opencontainers/runc#1547 and opencontainers/runc#1914.

It's known that the CVE fix requires more memory when starting a container (see opencontainers/runc#1980). The top comment in this ticket (#597 (comment)) shows that just starting a hello-world container, without memory limits set, fails, so that doesn't seem to be related at a glance.

However, both runc issues (opencontainers/runc#1547 and the last link, "Tight container limits may cause 'read init-p: connection reset by peer'") describe that pids/pidmax limits can cause this situation. Possibly the runc CVE fix results in more processes (or threads) being started, so I'm wondering if that's the underlying cause.
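For anyone wanting to rule that out locally, a few hedged checks of the usual process/thread limits (paths and values vary by distro and cgroup layout):

cat /proc/sys/kernel/pid_max                  # system-wide pid limit
ulimit -u                                     # max user processes in the current shell
systemctl show docker --property=TasksMax     # systemd task limit on the docker service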

ping @justincormack @kolyshkin @cyphar PTAL

@cyphar

cyphar commented Feb 19, 2019

Possibly the runc CVE fix results in more processes (or threads) being started, so I'm wondering if that's the underlying cause.

It doesn't. The mitigation is all done in C code and only uses execve.

@ghandim

ghandim commented Feb 19, 2019

Could someone post the output of runc --version after rolling back? Trying to narrow down where the problem is (in dockerd, containerd, or runc); if runc itself was rolled back, that means you'd no longer have the patch for CVE-2019-5736.

For RHEL 7.5/7.6, I'm wondering if that relates to opencontainers/runc#1988 (which looks to be a kernel bug in the RHEL kernels).

~# runc --version
runc version 1.0.0-rc6+dev
commit: 96ec2177ae841256168fcf76954f7177af9446eb
spec: 1.0.1-dev

on

  • Kernel Version: 4.4.0-142-generic
  • Operating System: Ubuntu 16.04.5 LTS

@thaJeztah
Member

thaJeztah commented Feb 19, 2019

Thanks; so runc 96ec2177ae841256168fcf76954f7177af9446eb (docker-archive/runc@96ec217) does not have the fix for the CVE (docker-archive/runc@96ec217...09c8266)

@ghandim if, in your case, you upgrade just the containerd.io package (which bundles runc), does the problem reappear?

# downgrade the docker engine and cli to 18.09.1
apt-get -y --allow-downgrades install \
  docker-ce=5:18.09.1~3-0~ubuntu-xenial \
  docker-ce-cli=5:18.09.1~3-0~ubuntu-xenial

# but make sure the containerd.io package is at the latest version
apt-get -y install containerd.io=1.2.2-3

@ghandim

ghandim commented Feb 19, 2019

After upgrading only containerd.io I cannot start any container anymore :-(

@thaJeztah
Member

Trying to reproduce on a DigitalOcean machine, which looks to have exactly the same kernel, but I don't see the problem there 😞:

Kernel Version: 4.4.0-142-generic
Operating System: Ubuntu 16.04.5 LTS

No clue at this moment what the difference would be.

For those on CentOS: #595 (comment) mentions that the problem occurred on an outdated CentOS kernel, but did not reproduce on 3.10.0-957.5.1.el7.centos.plus.x86_64

@ChsRmb
Author

ChsRmb commented Feb 19, 2019

@ghandim Your workaround doesn't fix this problem on Ubuntu 18.04 :/

@thaJeztah
Member

Your workaround doesn't fix this problem on Ubuntu 18.04 :/

@ChaosRambo did downgrading also downgrade the containerd.io package? Or just the docker-ce / docker-ce-cli package? What version were you running before updating (before this issue occurred for you)? Were you already on 18.09.x, or on an older version of docker?
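For reporting the exact versions involved, something like the following works on Debian/Ubuntu (a sketch; adjust package names as needed):

dpkg -l | grep -E 'docker-ce|docker-ce-cli|containerd.io'
apt-cache policy docker-ce docker-ce-cli containerd.io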

@goddib

goddib commented Feb 19, 2019

I had the same issue/error message with the following setup:

Docker version:
Client:
Version: 18.09.2
API version: 1.39
Go version: go1.10.6
Git commit: 6247962
Built: Sun Feb 10 04:13:47 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 18.09.2
API version: 1.39 (minimum version 1.12)
Go version: go1.10.6
Git commit: 6247962
Built: Sun Feb 10 03:42:13 2019
OS/Arch: linux/amd64
Experimental: false

Kernel:
4.15.0 #1 SMP Thu Aug 23 19:33:51 MSK 2018 x86_64 x86_64 x86_64 GNU/Linux

On a week-old vServer with Ubuntu 18.04.2 I "fixed" it by downgrading docker-ce.

My now working setup:
runc version 1.0.0-rc6+dev
commit: 09c8266bf2fcf9519a651b04ae54c967b9ab86ec
spec: 1.0.1-dev

containerd github.com/containerd/containerd 1.2.2 9754871865f7fe2f4e74d43e2fc7ccd237edcbce

Docker version 18.06.1-ce, build e68fc7a

I would also be curious about a real fix. Or what even went wrong...

Thanks and cheers
goddib

@leeningli

Update your kernel to >= 3.10.0.927.

@goddib

goddib commented Feb 22, 2019

@leeningli as posted above, this is not a helpful suggestion.

Kernel:
4.15.0 #1 SMP Thu Aug 23 19:33:51 MSK 2018 x86_64 x86_64 x86_64 GNU/Linux

@leeningli

@leeningli as posted above, this is not a helpful suggestion.

Kernel:
4.15.0 #1 SMP Thu Aug 23 19:33:51 MSK 2018 x86_64 x86_64 x86_64 GNU/Linux

I am so sorry.
My error is:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:293: copying bootstrap data to pipe caused "write init-p: broken pipe"": unknown.

@trapier

trapier commented Feb 22, 2019

As far as I can tell, 4.15.0 is not an Ubuntu kernel. Even Ubuntu's mainline builds have some sort of suffix on the kernel version. If that assertion is true, then in terms of repeatability, 4.4.0-142-generic #597 (comment) seems like a more approachable reproduction target.

@goddib

goddib commented Feb 22, 2019

Thanks for pointing it out - maybe that's where my problem arises? I'm not sure where this kernel comes from, it came with my vServer when I installed Ubuntu.

root@XXX:~# uname -r
4.15.0

How else could I find out what kernel is running?
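A few generic checks (hedged: on a Virtuozzo/OpenVZ-style vServer these may still report a spoofed version, as discussed later in this thread):

uname -a
cat /proc/version
hostnamectl      # prints a "Kernel:" line on systemd systems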

@flowsworld

flowsworld commented Feb 23, 2019

I'm not sure where this kernel comes from, it came with my vServer when I installed Ubuntu.

Just for curiosity, are you using a Strato vServer? Because I also get this error with the exact same kernel (4.15.0) on my vServer there...

Edit: OK, so it's their problem. I just sent them a mail, "Probleme mit Kernel 4.15.0 und Docker" ("Problems with kernel 4.15.0 and Docker"), and urged them to fix the problem with the kernel they use...

@goddib

goddib commented Feb 23, 2019

I'm not sure where this kernel comes from, it came with my vServer when I installed Ubuntu.

Just for curiosity, are you using a Strato vServer? Because I also get this error with the exact same kernel (4.15.0) on my vServer there...

Yes I do use Strato. So maybe it's something in the configuration there? I would have thought they install a standard Ubuntu but apparently they modify some things. How can we get to the root of this?

@flowsworld

flowsworld commented Feb 23, 2019

Yes I do use Strato. So maybe it's something in the configuration there? I would have thought they install a standard Ubuntu but apparently they modify some things. How can we get to the root of this?

I just got an answer from them that a required kernel module is missing on their vServers and there are no plans to add it in the foreseeable future. The original answer, translated from German, is as follows:

You would like to use Docker on your Virtual Server.
Unfortunately, a kernel module required for running it is not available in the Virtual Server kernels and, according to current planning, it will not be added in the future either. Running Docker therefore requires a Dedicated Root Server.
I regret that I cannot give you a more positive answer, but I remain at your disposal for any further questions.

Edit: I just got another answer from Strato, because I had run Docker there before doing a fresh server install. So they are working on it, but have no ETA [they use Virtuozzo for their vServers]. Translated:

Your Docker is not usable because our V-Server systems are set up as a container solution. The kernel of the host server that the containers run in dictates which installations are possible. On this kernel, i.e. within the container solution, no Docker setups can be realised with the current version.
We are currently working on a solution to make these containers Docker-capable, but at the moment this is not yet possible. We do not yet have an exact release date.

Sorry if this doesn't help others with the problem, but for Strato customers this could serve as a warning...

@thaJeztah
Member

@flownex could you run the check-config.sh script on that server? I'd like to see if that script picks up what's missing (if not, we should update the script so that it can detect such situations). The latest version can be found here: https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh
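For anyone else who wants to run it, a minimal sketch for fetching and running the script:

curl -fsSL https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh -o check-config.sh
chmod +x check-config.sh
sudo ./check-config.sh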

@flowsworld

First try:

root@h2813492:~# ./check-config.sh
warning: /proc/config.gz does not exist, searching other paths for kernel config ...
error: cannot find kernel config
  try running this script again, specifying the kernel config:
    CONFIG=/path/to/kernel/.config ./check-config.sh or ./check-config.sh /path/to/kernel/.config
root@h2813492:~# modprobe configs
modprobe: ERROR: ../libkmod/libkmod.c:514 lookup_builtin_file() could not open builtin file '/lib/modules/4.15.0/modules.builtin.bin'
modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0

@thaJeztah Any hint on how to get you what you need?
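One thing that might be worth trying (a sketch; on this kind of vServer the config file may simply not exist, which is itself useful information):

# look for a kernel config the script can use
ls /proc/config.gz /boot/config-$(uname -r) 2>/dev/null
# if one exists, point the script at it
CONFIG=/boot/config-$(uname -r) ./check-config.sh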

@kolyshkin

OK, I have provided details to my ex-colleagues at VZ (although from my perspective Strato should have contacted VZ support); I vaguely remember that, as of some time ago, running Docker inside a container was a supported configuration.

@kolyshkin

@flownex @ogrady thanks for the info! Preliminary reply from the VZ team: the latest VZ7 kernel is tested and works with Docker CE 18.09.2. Perhaps Strato just needs to update their kernel; it should be trivial if they use readykernel. I will update this as soon as I have more info.

@kolyshkin

@flownex @ogrady @goddib @ChaosRambo looks like you guys are all using a container under Virtuozzo or OpenVZ kernel:

4.15.0 #1 SMP Thu Aug 23 19:33:51 MSK 2018 x86_64 x86_64 x86_64 GNU/Linux

Now, this appears to be the latest full build of the VZ7 Virtuozzo kernel, version 3.10.0-862.11.6.vz7.64.7 (the actual version can be guessed from the reported compilation date; the in-container version is spoofed so that userspace keeps working).

For this kernel, there is a readykernel update v72.1, released 15 Feb. This update, as well as any later one (the list is at https://readykernel.com/?distro=Virtuozzo-7&kernel=3.10.0-862.11.6.vz7.64.7), should work fine with Docker CE 18.09.2.

As far as I understand there's no way to see what kernel is running from inside a container, but you can definitely ask your hosting service provider to upgrade to the latest Virtuozzo readykernel, pointing to this comment if needed.

@cyphar

cyphar commented Feb 26, 2019

As far as I understand there's no way to see what kernel is running from inside a container, but you can definitely ask your hosting service provider to upgrade to the latest Virtuozzo readykernel, pointing to this comment if needed.

If you can do something like docker cp or -v, you could do something like docker cp /proc/version container:foo or similar, which would take the host's kernel version and put it in the container so you can see its contents.

@kolyshkin

If you can do something like docker cp

There's a confusion between Virtuozzo containers and Docker containers here. What I was talking about is a Virtuozzo host (which provides "OS" containers a la LXC/LXD but with a custom kernel, so slightly more VM-like, yet they are still containers), and dockerd running inside such an (OS) container. Those users reporting kernel 4.15 above are renting such OS containers from a hoster, and they don't have access to a (Virtuozzo or OpenVZ) host system.

With that, there might be a /proc/vz/version inside a Virtuozzo container, but it might not reflect the live patch (readykernel) kernel version. On the Virtuozzo host, one can run readykernel info to check which (live-patched) kernel is running, and if it is 72.1 or later, latest dockerd should work fine inside (OS) containers.

Hope that clears things up
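Summarising the checks mentioned above as commands (hedged; /proc/vz/version may be absent or may not reflect live patches):

cat /proc/vz/version    # inside the Virtuozzo/OpenVZ OS container, if present
readykernel info        # on the Virtuozzo host, shows the live-patched kernel version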

@ryujisnote

I faced the same error on CentOS 7.6 (basic install, yum updated).
SELINUX was set to disabled (customized by the service provider); I changed it to permissive and rebooted... fixed. I can run hello-world.
Mention me if there is any info (commands) you want from me.
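For reference, the SELinux state can be inspected and changed like this (a sketch; setenforce only works when SELinux is enabled and lasts until reboot, whereas the config-file change persists):

getenforce                # Enforcing / Permissive / Disabled
sudo setenforce 0         # switch to permissive until the next boot (only if not Disabled)
# to persist, set SELINUX=permissive in /etc/selinux/config and reboot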

# docker version
Client:
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        1ac774d
 Built:             Sun Feb 10 03:49:56 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Enterprise
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       1ac774d
  Built:            Sun Feb 10 03:43:47 2019
  OS/Arch:          linux/amd64
  Experimental:     false

# docker info
Containers: 7                                                                                                                                            
 Running: 0                                                                                                                                              
 Paused: 0                                                                                                                                               
 Stopped: 7
Images: 1
Server Version: 18.09.2
Storage Driver: devicemapper
 Pool Name: docker-253:0-947656-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 50.46MB
 Data Space Total: 107.4GB
 Data Space Available: 15.34GB
 Metadata Space Used: 17.42MB
 Metadata Space Total: 2.147GB
 Metadata Space Available: 2.13GB
 Thin Pool Minimum Free Space: 10.74GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.149-RHEL7 (2018-07-20)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 09c8266bf2fcf9519a651b04ae54c967b9ab86ec
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-957.5.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.638GiB
Name: HOSTNAME
ID: <<<concern>>>
Docker Root Dir: /var/lib/docker 
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
 com.docker.security.seccomp=enabled
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Unlicensed Enterprise Engine

WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.

@kathibeepboop

kathibeepboop commented Mar 2, 2019

Got the same problem on Ubuntu 18.04.
Downgrading to the old version (mentioned by @ghandim) worked as a temporary fix:

 apt install docker-ce=5:18.09.1~3-0~ubuntu-bionic
 apt install containerd.io=1.2.2-1

@jblaine

jblaine commented Mar 6, 2019

In case it helps anyone else... the following packages now work for me on Red Hat Enterprise Linux Server release 7.6 (Maipo) with 3.10.0-957.5.1.el7.x86_64:

  • docker-ce-18.09.3-3.el7.x86_64 (NEW as of 2/28)
  • docker-ce-cli-18.09.3-3.el7.x86_64 (NEW as of 2/28)
  • containerd.io-1.2.4-3.1.el7.x86_64 (NEW as of 2/28)
  • ... and even the rhel-7-server-extras-rpms version, docker-1.13.1-91.git07f3374.el7

and none of those work for me on CentOS Linux release 7.6.1810 (Core) with the same kernel as above (3.10.0-957.5.1.el7.x86_64). docker run hello-world never says a thing and never returns.

@thaJeztah
Member

@jblaine could you also check the version of container-selinux that is installed in both cases (RHEL/CentOS)?
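A sketch of how to check that on both systems:

rpm -q container-selinux
# or
yum list installed container-selinux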

@jblaine

jblaine commented Mar 7, 2019

@thaJeztah 2.77-1 on RHEL 7 and 2.74-1 on CentOS 7

@thaJeztah
Member

thaJeztah commented Mar 7, 2019

I think that may be the problem; see containers/container-selinux#63

We contributed a fix upstream (containers/container-selinux#64), but that may not have found its way into a new version of the package yet.
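Once a fixed build reaches the distro repositories, updating it should be as simple as (hedged; on RHEL/CentOS 7 the package normally comes from the extras repo):

sudo yum update container-selinux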

@pblayo

pblayo commented Mar 13, 2019

I had the same error (write init-p: broken pipe "": unknown) after updating my Ubuntu 14.04 LTS. It's quite a nightmare to realize that Docker is so fragile.

Docker version 18.06.3-ce, build d7080c1

I had to downgrade manually following https://docs.docker.com/cs-engine/1.13/
down to Docker version 1.13.1-cs9, build 1bc62a2

@thaJeztah
Member

@pblayo what kernel version are you running on? (what does uname -a show?)

It's quite a nightmare to realize that Docker is so fragile.

Docker (and containers in general) uses features provided by the kernel; in the case of the 18.06.3 and 18.09.3 patch releases, there were no actual changes in Docker itself, but an updated version of the runc runtime was included to address a critical vulnerability (CVE-2019-5736) that allowed container escapes.

The fix for that vulnerability required kernel features that are not available in older kernel versions, so if you're using the original 3.13 kernel, you need to update to a later Ubuntu kernel through the LTS Enablement stack: https://wiki.ubuntu.com/Kernel/LTSEnablementStack

I had to downgrade manually following https://docs.docker.com/cs-engine/1.13/ down to Docker version 1.13.1-cs9, build 1bc62a2

I highly recommend not running that version; it's a very old version of Docker that is no longer maintained and may have unpatched vulnerabilities. (Actually, I'm not sure why those pages are still listed in the documentation; I'll open a pull request to have them removed.)

If you cannot upgrade your kernel, the previous version of Docker 18.06 (18.06.2) should work (but it won't have the updated version of runc, so it is not patched against CVE-2019-5736).

Note that Ubuntu 14.04 reaches EOL next month (April 2019), so it's worth considering an upgrade to the current LTS version (18.04).
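A hedged sketch of the usual upgrade path (back up first; 14.04 upgrades to 16.04 and from there to 18.04):

sudo apt-get update && sudo apt-get dist-upgrade
sudo do-release-upgrade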

@pblayo

pblayo commented Mar 13, 2019

@thaJeztah : thanks a lot, downgrading Docker to 18.06.2-ce worked

$ uname -a
Linux 4.4.0-142-generic #168~14.04.1-Ubuntu x86_64

@thaJeztah
Member

@pblayo you're welcome! At least that would get you going, but of course, it's a workaround because you won't have the fix for the CVE 😕

The kernel version you mentioned: is that the version on which the problem occurred?

$ uname -a
Linux 4.4.0-142-generic #168~14.04.1-Ubuntu x86_64

Trying to reproduce, I started an Ubuntu 14.04 machine on DigitalOcean.

Upgrade the kernel and reboot (never hurts):

apt-get update && apt-get install --install-recommends linux-generic-lts-xenial -y

reboot


uname -a
Linux ubuntu-s-1vcpu-1gb-ams3-01 4.4.0-142-generic #168~14.04.1-Ubuntu SMP Sat Jan 19 11:26:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

(same version as you reported 👍)

I installed Docker:

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

docker version

Client:
 Version:           18.06.3-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        d7080c1
 Built:             Wed Feb 20 02:27:13 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.3-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       d7080c1
  Built:            Wed Feb 20 02:25:38 2019
  OS/Arch:          linux/amd64
  Experimental:     false


docker run --rm hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
...

Unfortunately, I'm not able to reproduce the issue

@pblayo

pblayo commented Mar 13, 2019

@thaJeztah: no, sorry, 4.4.0 is the kernel version after the upgrade (and reboot) you suggested in your first answer. The write init-p: broken pipe error occurred with the original 3.13 kernel, as you had already guessed correctly.

@pblayo

pblayo commented Mar 13, 2019

it's a workaround because you won't have the fix for the CVE

@thaJeztah: I'm not sure I understand the next step: do I have to wait for an updated packaged version of Docker, or of the kernel? (Or both?)

@thaJeztah
Member

I'm not sure I understand the next step: do I have to wait for an updated packaged version of Docker, or of the kernel? (Or both?)

Sorry for the confusion; if you upgraded your kernel to 4.x, you should be able to install version 18.06.3-ce, which has the fix for the CVE

@pblayo

pblayo commented Mar 13, 2019

OK, thanks. So my final working configuration is:

$ docker --version
Docker version 18.06.3-ce, build d7080c1
$ uname -a
Linux 4.4.0-142-generic

@ghandim

ghandim commented Mar 28, 2019

Can confirm that with

containerd.io = 1.2.5-1
docker-ce-cli = 5:18.09.4~3-0~ubuntu-xenial
docker-ce = 5:18.09.4~3-0~ubuntu-xenial

the bug is fixed in my Ubuntu environment.

@thaJeztah
Member

Thanks! I'll tentatively close this issue, but feel free to comment if you're still running into this after installing the versions mentioned above #597 (comment)

@m4rr

m4rr commented Apr 22, 2019

Same error here:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:293: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown.

 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        6247962
 Built:             Tue Feb 26 23:52:23 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       6247962
  Built:            Wed Feb 13 00:24:14 2019
  OS/Arch:          linux/amd64
  Experimental:     false

@gaozhidf

I met the same problem when I updated to docker-18.09.5, and fixed it by restarting my PC.

$ docker version
Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        e8ff056
 Built:             Thu May  9 23:11:19 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.5
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       e8ff056
  Built:            Thu May  9 22:59:19 2019
  OS/Arch:          linux/amd64
  Experimental:     false
$ uname -a
Linux linux 4.15.0-50-generic #54-Ubuntu SMP Mon May 6 18:46:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.2 LTS
Release:	18.04
Codename:	bionic

@zxlin

zxlin commented Aug 20, 2019

@ghandim @thaJeztah I just ran into this issue on the following versions:

containerd.io = 1.2.6-3
docker-ce = 5:18.09.7~3-0~ubuntu-xenial
docker-ce-cli = 5:18.09.7~3-0~ubuntu-xenial

kernel = 4.4.0-1090-aws

As with everyone else, a reboot fixes the issue. The server was up for 52 days at the time of the issue if that helps at all.
