
Unable to recover corrupt image after unexpected host reboot #3671

Closed
mcginne opened this issue Sep 23, 2019 · 8 comments

@mcginne

mcginne commented Sep 23, 2019

We occasionally see hosts rebooting unexpectedly, and sometimes when this occurs certain containers are unable to start with the following errors:

Sep 17 17:18:34 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 kubelet.service[2092]: I0917 17:18:34.123476    2092 kuberuntime_manager.go:409] No sandbox for pod "ibm-master-proxy-static-10.143.255.15_kube-system(f51dbf3439a39cd1567f6b8e5c99dc94)" can be found. Need to start a new one
Sep 17 17:18:34 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 kubelet.service[2092]: E0917 17:18:34.140251    2092 kubelet.go:2244] node "10.143.255.15" not found
Sep 17 17:18:34 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 kubelet.service[2092]: E0917 17:18:34.162667    2092 remote_runtime.go:109] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create containerd container: failed to create snapshot: missing parent "k8s.io/9/sha256:e17133b79956ad6f69ae7f775badd1c11bad2fc64f0529cab863b9d12fbaa5c4" bucket: not found
Sep 17 17:18:34 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 kubelet.service[2092]: E0917 17:18:34.162767    2092 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "ibm-master-proxy-static-10.143.255.15_kube-system(f51dbf3439a39cd1567f6b8e5c99dc94)" failed: rpc error: code = Unknown desc = failed to create containerd container: failed to create snapshot: missing parent "k8s.io/9/sha256:e17133b79956ad6f69ae7f775badd1c11bad2fc64f0529cab863b9d12fbaa5c4" bucket: not found
Sep 17 17:18:34 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 kubelet.service[2092]: E0917 17:18:34.162802    2092 kuberuntime_manager.go:697] createPodSandbox for pod "ibm-master-proxy-static-10.143.255.15_kube-system(f51dbf3439a39cd1567f6b8e5c99dc94)" failed: rpc error: code = Unknown desc = failed to create containerd container: failed to create snapshot: missing parent "k8s.io/9/sha256:e17133b79956ad6f69ae7f775badd1c11bad2fc64f0529cab863b9d12fbaa5c4" bucket: not found
Sep 17 17:18:34 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 kubelet.service[2092]: E0917 17:18:34.162867    2092 pod_workers.go:190] Error syncing pod f51dbf3439a39cd1567f6b8e5c99dc94 ("ibm-master-proxy-static-10.143.255.15_kube-system(f51dbf3439a39cd1567f6b8e5c99dc94)"), skipping: failed to "CreatePodSandbox" for "ibm-master-proxy-static-10.143.255.15_kube-system(f51dbf3439a39cd1567f6b8e5c99dc94)" with CreatePodSandboxError: "CreatePodSandbox for pod \"ibm-master-proxy-static-10.143.255.15_kube-system(f51dbf3439a39cd1567f6b8e5c99dc94)\" failed: rpc error: code = Unknown desc = failed to create containerd container: failed to create snapshot: missing parent \"k8s.io/9/sha256:e17133b79956ad6f69ae7f775badd1c11bad2fc64f0529cab863b9d12fbaa5c4\" bucket: not found"

Containerd logs report:

Sep 17 17:29:51 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 containerd[1988]: time="2019-09-17T17:29:51.125588170Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:ibm-master-proxy-static-10.143.255.15,Uid:f51dbf3439a39cd1567f6b8e5c99dc94,Namespace:kube-system,Attempt:0,}"
Sep 17 17:29:51 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 containerd[1988]: time="2019-09-17T17:29:51.170208023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:ibm-master-proxy-static-10.143.255.15,Uid:f51dbf3439a39cd1567f6b8e5c99dc94,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd container: failed to create snapshot: missing parent "k8s.io/9/sha256:e17133b79956ad6f69ae7f775badd1c11bad2fc64f0529cab863b9d12fbaa5c4" bucket: not found"
Sep 17 17:30:03 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 containerd[1988]: time="2019-09-17T17:30:03.124078256Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:ibm-master-proxy-static-10.143.255.15,Uid:f51dbf3439a39cd1567f6b8e5c99dc94,Namespace:kube-system,Attempt:0,}"
Sep 17 17:30:03 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 containerd[1988]: time="2019-09-17T17:30:03.166402807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:ibm-master-proxy-static-10.143.255.15,Uid:f51dbf3439a39cd1567f6b8e5c99dc94,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd container: failed to create snapshot: missing parent "k8s.io/9/sha256:e17133b79956ad6f69ae7f775badd1c11bad2fc64f0529cab863b9d12fbaa5c4" bucket: not found"
Sep 17 17:30:14 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 containerd[1988]: time="2019-09-17T17:30:14.124150574Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:ibm-master-proxy-static-10.143.255.15,Uid:f51dbf3439a39cd1567f6b8e5c99dc94,Namespace:kube-system,Attempt:0,}"
Sep 17 17:30:14 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 containerd[1988]: time="2019-09-17T17:30:14.190567400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:ibm-master-proxy-static-10.143.255.15,Uid:f51dbf3439a39cd1567f6b8e5c99dc94,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd container: failed to create snapshot: missing parent "k8s.io/9/sha256:e17133b79956ad6f69ae7f775badd1c11bad2fc64f0529cab863b9d12fbaa5c4" bucket: not found"
Sep 17 17:30:29 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 containerd[1988]: time="2019-09-17T17:30:29.123812110Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:ibm-master-proxy-static-10.143.255.15,Uid:f51dbf3439a39cd1567f6b8e5c99dc94,Namespace:kube-system,Attempt:0,}"
Sep 17 17:30:29 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 containerd[1988]: time="2019-09-17T17:30:29.190065496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:ibm-master-proxy-static-10.143.255.15,Uid:f51dbf3439a39cd1567f6b8e5c99dc94,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd container: failed to create snapshot: missing parent "k8s.io/9/sha256:e17133b79956ad6f69ae7f775badd1c11bad2fc64f0529cab863b9d12fbaa5c4" bucket: not found"
Sep 17 17:30:44 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 containerd[1988]: time="2019-09-17T17:30:44.124012565Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:ibm-master-proxy-static-10.143.255.15,Uid:f51dbf3439a39cd1567f6b8e5c99dc94,Namespace:kube-system,Attempt:0,}"
Sep 17 17:30:44 test-bm0glfe20a9a2salavcg-dmnetperfa1-default-00000372 containerd[1988]: time="2019-09-17T17:30:44.174658740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:ibm-master-proxy-static-10.143.255.15,Uid:f51dbf3439a39cd1567f6b8e5c99dc94,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd container: failed to create snapshot: missing parent "k8s.io/9/sha256:e17133b79956ad6f69ae7f775badd1c11bad2fc64f0529cab863b9d12fbaa5c4" bucket: not found"

I have tried to delete and pull the image manually on the host using:

crictl pull --creds xxx:yyy registry.ng.bluemix.net/armada-master/haproxy:967a34e6512d2d318e796959d962b59fbfa616fb

But this fails with a similar error:

[screenshot omitted: crictl pull fails with a similar "missing parent ... bucket: not found" error]

Version info:

crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.2.9
RuntimeApiVersion:  v1alpha2

I can understand things being left in a bad state when a host has crashed unexpectedly, but I would like a way of being able to recover a node - currently I am having to reload the node when this issue occurs.

@crosbymichael
Member

Thanks for the report, we will look into it

@fuweid
Member

fuweid commented Sep 24, 2019

I have hit this issue on v1.0.3 and v1.2.5 before. The case is that when the machine crashes unexpectedly, the metadata boltdb seems to be missing data, so the gc module removes the snapshot boltdb data. But when you restart containerd, the missing data comes back...

The timeline is like:

  • 1 machine crashed and reboot
  • 2 containerd restart - missing metadata boltdb data

I added code to trace the boltdb id (this tracing is not available in upstream containerd):

time="2019-03-01T11:57:26+08:00" level=info msg="in snapshotter prepare, snapshotter key is default/6491/7e57eee40fdae222eba895cd2af4de2061b805c8262ff21f0af42efd13241bfb" module="containerd/snapshot"
time="2019-03-01T17:49:41+08:00" level=info msg="in snapshotter prepare, snapshotter key is default/1/extract-52037048-Hj63 sha256:89ec547e2c8be9d0f292dc7fa42ef49906141ddb89e97fc0776441f6da8dc290" module="containerd/snapshot"

You can see the boltdb sequence id has been reset, which is why I think the existing data was lost.

  • 3 containerd gc module cleanup snapshot boltdb data

As step 2 mentioned, the metadata boltdb data doesn't match the snapshot boltdb data, so all unmatched data in the snapshot boltdb will be removed.

  • 4 running for a while and restart
time="2019-03-08T10:41:15+08:00" level=info msg="in snapshotter prepare, snapshotter key is default/1161/ed1e535711985c14820d8eff8aea1fde8a28d2acdd80d26803a1be8401e1ae85" module="containerd/snapshot"
time="2019-03-08T12:22:56+08:00" level=info msg="in snapshotter prepare, snapshotter key is default/6492/extract-767436137-uqok sha256:7c5281251db713afc8d899746fd12a1ebb9aa83b1e0942f8e5b217818fc55f2b" module="containerd/snapshot"

After you restart containerd, the missing sequence ids in the metadata boltdb come back, starting at 6492. The gc module will then clean up the snapshot boltdb again. Therefore, it raises the missing parent issue.

It is hard to reproduce the issue. It might be related to boltdb. Just for information; I hope it helps.

@fuweid
Member

fuweid commented Oct 9, 2019

ping @mcginne: does the issue appear right after the unexpected reboot, or does the node run normally for a while before the issue shows up?

Could you provide more information about this? Thanks!

@fuweid
Member

fuweid commented Oct 9, 2019

I think we found the root cause. In my testing environment, the rootfs is on the device /dev/sda3 and the data dir for containerd is on the device /dev/sda5.

$df -h | egrep 'sda3|sda5'
/dev/sda3        30G   17G   11G  61% /
/dev/sda5       3.6T  1.1T  2.4T  31% /home

But containerd doesn't wait for the /dev/sda5 mount to be ready.

// from log
time="2019-10-06T01:52:43.242086646+08:00" level=info msg="starting containerd" revision=603ed5c5c3967352ae327a39a68b1319fd05097e version=1.2.5


$systemctl status home.mount
● home.mount - /home
   Loaded: loaded (/etc/fstab)
   Active: active (mounted) since Sun 2019-10-06 01:53:01 CST; 3 days ago
    Where: /home
     What: /dev/sda5
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)
   Memory: 49.4M

containerd will create meta.db for the metadata plugin on the device /dev/sda3, because it starts before the /dev/sda5 mount covers the data dir path. That is why the sequence seems to reset to zero. Let's use lsof to check.

You can see that io.containerd.metadata.v1.bolt/meta.db is on /dev/sda3 but io.containerd.snapshotter.v1.overlayfs/metadata.db is on /dev/sda5.

Why are they different? Because containerd always opens io.containerd.metadata.v1.bolt/meta.db at startup to check for migrations, but opens io.containerd.snapshotter.v1.overlayfs/metadata.db lazily. If there is no request that uses io.containerd.snapshotter.v1.overlayfs/metadata.db, containerd will not open it.

In this case, the gc will remove all the records from io.containerd.snapshotter.v1.overlayfs/metadata.db because they have no match in io.containerd.metadata.v1.bolt/meta.db.

After you restart containerd, since the device /dev/sda5 is already mounted, the old data in io.containerd.metadata.v1.bolt/meta.db is back. And gc will be mad again, because the data in io.containerd.snapshotter.v1.overlayfs/metadata.db doesn't match io.containerd.metadata.v1.bolt/meta.db again... Since the data is in the metadata db but not in the snapshotter db, the error shows parent xxx bucket: not found.

$sudo lsof -p 3154
COMMAND    PID USER   FD      TYPE             DEVICE  SIZE/OFF      NODE NAME
container 3154 root  cwd       DIR                8,3      4096         2 /
container 3154 root  rtd       DIR                8,3      4096         2 /
container 3154 root  txt       REG                8,3  56094232   1847902 /usr/local/bin/containerd
container 3154 root  mem-W     REG                8,3              265824 /home/t4/pouch/containerd/root/io.containerd.metadata.v1.bolt/meta.db (path dev=8,5, inode=163712610)
container 3154 root  mem-W     REG                8,5   4194304 163712670 /home/t4/pouch/containerd/root/io.containerd.snapshotter.v1.overlayfs/metadata.db
container 3154 root  mem       REG                8,3   2112416   1712345 /usr/lib64/libc-2.17.so
container 3154 root  mem       REG                8,3    142304   1711263 /usr/lib64/libpthread-2.17.so
container 3154 root  mem       REG                8,3     19520   1712298 /usr/lib64/libdl-2.17.so
container 3154 root  mem       REG                8,3    164440   1712618 /usr/lib64/ld-2.17.so
container 3154 root    0r      CHR                1,3       0t0      1028 /dev/null
container 3154 root    1w      REG                8,3 158744958    791690 /var/log/pouch
container 3154 root    2w      REG                8,3 158744958    791690 /var/log/pouch
container 3154 root    3r      REG                0,4         0     42069 /proc/3154/mountinfo
container 3154 root    4u  a_inode               0,11         0      8671 [eventpoll]
container 3154 root    5uW     REG                8,3   2097152    265824 /home/t4/pouch/containerd/root/io.containerd.metadata.v1.bolt/meta.db
container 3154 root    6u  a_inode               0,11         0      8671 [eventpoll]
container 3154 root    7u  a_inode               0,11         0      8671 [eventpoll]
container 3154 root    8u     unix 0xffff881fea513600       0t0     13735 /run/containerd/debug.sock
container 3154 root    9u     unix 0xffff881fea513f00       0t0     13737 /var/run/containerd.sock
container 3154 root   10u     unix 0xffff881feb383a80       0t0     33340 /var/run/containerd.sock
container 3154 root   11u     unix 0xffff881feb384380       0t0     12939 /var/run/containerd.sock
container 3154 root   12u     unix 0xffff881feb394380       0t0     28087 /var/run/containerd.sock
container 3154 root   13u     unix 0xffff881feb394c80       0t0     12941 /var/run/containerd.sock
container 3154 root   14u     unix 0xffff881feb395580       0t0     17381 /var/run/containerd.sock
container 3154 root   15uW     REG                8,5   4194304 163712670 /home/t4/pouch/containerd/root/io.containerd.snapshotter.v1.overlayfs/metadata.db

ping @mcginne: is your env like this, using a different device for the containerd persisted data?

If so, I think the containerd systemd service should require the mount point.
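As a minimal sketch of that requirement (the drop-in path is an assumption, and the data dir is the example path from the lsof output above; adjust both to your setup), a systemd drop-in with RequiresMountsFor= both requires and orders the service after the mount units needed for the given path:

```ini
# /etc/systemd/system/containerd.service.d/10-require-mounts.conf
# Hypothetical drop-in: point RequiresMountsFor= at your containerd root dir.
[Unit]
RequiresMountsFor=/home/t4/pouch/containerd/root
```

Run `systemctl daemon-reload` after adding the drop-in, before restarting containerd.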

cc @crosbymichael @estesp @dmcgowan

@mcginne
Author

mcginne commented Oct 9, 2019

@fuweid We only see the issue after the host unexpectedly reboots. However, it doesn't always occur; sometimes we get the unexpected reboots and don't see the corruption.

And yes, we have the same setup with containerd data using a different device to the rootfs, so we could well fit your theory.

@fuweid
Member

fuweid commented Oct 9, 2019

And yes, we have the same setup with containerd data using a different device to the rootfs, so we could well fit your theory.

@mcginne OK. If you can find io.containerd.metadata.v1.bolt/meta.db on the system default device, the theory mentioned above applies. For example, in my case, I can find the boltdb data on /dev/sda3 like this:

$ mkdir -p /tmp/issue
$ mount -o bind / /tmp/issue # / is in /dev/sda3

Mount the original device into a dir and check /tmp/issue/${containerd-root-dir}/io.containerd.metadata.v1.bolt/meta.db. If it exists, you are hitting the same case.
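The device comparison behind this check can be wrapped in a small shell helper (the function name and the containerd paths in the comments are illustrative, not anything containerd ships):

```shell
# same_device: succeed (exit 0) when two paths live on the same
# underlying filesystem device (same st_dev), fail otherwise.
same_device() {
    dev1=$(stat -c '%d' "$1") || return 2
    dev2=$(stat -c '%d' "$2") || return 2
    [ "$dev1" = "$dev2" ]
}

# Illustrative usage for this issue (the root dir is an example path):
#   mount -o bind / /tmp/issue
#   ls /tmp/issue/var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db
#   same_device /tmp/issue/var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db \
#               /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db
# If a meta.db exists on the bind-mounted root device while the live path
# resolves to a different device, a stale copy is being shadowed by the mount.
```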

To recover, I would remove all the containers/images and clean up the containerd data on the node, because the node is messed up and we can't recover it otherwise. It seems there is nothing we can do on the containerd side right now; we just need to make sure that the containerd service requires the device mount.

@fuweid
Member

fuweid commented Oct 10, 2019

Hi @mcginne

For this case, I suggest adding an After= dependency in containerd.service if you use systemd to manage containerd.

You can use systemctl list-units --type=mount to list your mount target. For example, in my testing env, I got the result like:

$systemctl list-units --type=mount
UNIT                                                                    LOAD   ACTIVE SUB     DESCRIPTION
-.mount                                                                 loaded active mounted /
boot-grub2.mount                                                        loaded active mounted /boot/grub2
dev-hugepages.mount                                                     loaded active mounted Huge Pages File System
dev-mqueue.mount                                                        loaded active mounted POSIX Message Queue File System
disk10.mount                                                            loaded active mounted /disk10
disk11.mount                                                            loaded active mounted /disk11
disk12.mount                                                            loaded active mounted /disk12
disk2.mount                                                             loaded active mounted /disk2
disk3.mount                                                             loaded active mounted /disk3
disk4.mount                                                             loaded active mounted /disk4
disk5.mount                                                             loaded active mounted /disk5
disk6.mount                                                             loaded active mounted /disk6
disk7.mount                                                             loaded active mounted /disk7
disk8.mount                                                             loaded active mounted /disk8
disk9.mount                                                             loaded active mounted /disk9
home.mount                                                              loaded active mounted /home
proc-sys-fs-binfmt_misc.mount                                           loaded active mounted Arbitrary Executable File Formats File System
sys-kernel-config.mount                                                 loaded active mounted Configuration File System
sys-kernel-debug.mount                                                  loaded active mounted Debug File System

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

19 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

My service needs home.mount, so I can add After=home.mount to the service. Hope it helps.
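As a concrete sketch (the drop-in file name is an assumption; home.mount comes from the listing above), the dependency can live in a drop-in rather than in an edited copy of the shipped unit:

```ini
# /etc/systemd/system/containerd.service.d/10-after-home.conf
[Unit]
# After= only orders containerd after /home is mounted; adding
# Requires= also stops containerd from starting if the mount fails.
After=home.mount
Requires=home.mount
```

Run `systemctl daemon-reload` afterwards. The fix that later landed in containerd's own service file took the broader route of ordering after local-fs.target.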

And thanks for reporting this. I am going to close this issue. If you have any problems with this, please reopen it or file a new one. Thanks!

@fuweid fuweid closed this as completed Oct 10, 2019
@mcginne
Author

mcginne commented Oct 10, 2019

@fuweid Many thanks for your work on this! We will add in the After= dependency.

dmcgowan pushed a commit to thaJeztah/containerd that referenced this issue Nov 19, 2019
* Update the runc vendor to v1.0.0-rc9 which includes an additional mitigation for [CVE-2019-16884](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16884).
    - More details on the runc CVE in [opencontainers/runc#2128](opencontainers/runc#2128), and the additional mitigations in [opencontainers/runc#2130](opencontainers/runc#2130).
* Add local-fs.target to service file to fix corrupt image after unexpected host reboot. Reported in [containerd#3671](containerd#3671), and fixed by [containerd#3745](containerd#3745).
* Fix large output of processes with TTY getting occasionally truncated. Reported in [containerd#3738](containerd#3738) and fixed by [containerd#3754](containerd#3754).
* Fix direct unpack when running in user namespace. Reported in [containerd#3762](containerd#3762), and fixed by [containerd#3779](containerd#3779).
* Update Golang runtime to 1.12.13, which includes security fixes to the `crypto/dsa` package made in Go 1.12.11 ([CVE-2019-17596](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17596)), and fixes to the go command, `runtime`, `syscall` and `net` packages (Go 1.12.12).
* Add Windows process shim installer [containerd#3792](containerd#3792)

* CRI fixes:
    - Fix shim delete error code to avoid unnecessary retries in the CRI plugin. Discovered in [containerd/cri#1309](containerd/cri#1309), and fixed by [containerd#3733](containerd#3733) and [containerd#3740](containerd#3740).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
dmcgowan pushed a commit to thaJeztah/containerd that referenced this issue Nov 20, 2019
* Update the runc vendor to v1.0.0-rc9 which includes an additional mitigation for
  [CVE-2019-16884](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16884).
    - More details on the runc CVE in [opencontainers/runc#2128](opencontainers/runc#2128),
      and the additional mitigations in [opencontainers/runc#2130](opencontainers/runc#2130).
* Add local-fs.target to service file to fix corrupt image after unexpected host reboot.
  Reported in [containerd#3671](containerd#3671),
  and fixed by [containerd#3746](containerd#3746).
* Update Golang runtime to 1.12.13, which includes security fixes to the `crypto/dsa`
  package made in Go 1.12.11 ([CVE-2019-17596](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17596)),
  and fixes to the go command, `runtime`, `syscall` and `net` packages (Go 1.12.12).

* CRI fixes:
    - Fix shim delete error code to avoid unnecessary retries in the CRI plugin.
      Discovered in [containerd/cri#1309](containerd/cri#1309),
      and fixed by [containerd#3732](containerd#3732)
      and [containerd#3739](containerd#3739).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
thaJeztah added a commit to thaJeztah/docker that referenced this issue Nov 28, 2019
full diff: containerd/containerd@v1.2.10...v1.2.11

The eleventh patch release for containerd 1.2 includes an updated runc with
an additional fix for CVE-2019-16884 and a Golang update.

Notable Updates
-----------------------

- Update the runc vendor to v1.0.0-rc9 which includes an additional mitigation
  for CVE-2019-16884.
  More details on the runc CVE in opencontainers/runc#2128, and the additional
  mitigations in opencontainers/runc#2130.
- Add local-fs.target to service file to fix corrupt image after unexpected host
  reboot. Reported in containerd/containerd#3671, and fixed by containerd/containerd#3746.
- Update Golang runtime to 1.12.13, which includes security fixes to the crypto/dsa
  package made in Go 1.12.11 (CVE-2019-17596), and fixes to the go command, runtime,
  syscall and net packages (Go 1.12.12).

CRI fixes:
-----------------------

- Fix shim delete error code to avoid unnecessary retries in the CRI plugin. Discovered
  in containerd/cri#1309, and fixed by containerd/containerd#3732 and containerd/containerd#3739.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
docker-jenkins pushed a commit to docker/docker-ce that referenced this issue Jan 16, 2020
Upstream-commit: cfcf25bb5409eb0c3a9c257b225f2b8890142030
Component: engine