
Support runtime level snapshotter for issue 6657 #6899

Merged

Conversation

shuaichang
Contributor

@shuaichang shuaichang commented May 5, 2022

What is this change

This is the implementation for #6657. The core ideas:

  1. Added a snapshotter option to the CRI per-runtime config.
  2. For container-related operations (e.g. creating containers), respect runtime.snapshotter if set; otherwise use the global snapshotter.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.my-runtime]
  snapshotter = "devmapper"
  3. For image pulls, since the runtime info is not passed along in the PullImageRequest, we use a configurable annotation key io.kubernetes.cri/runtimehandler to specify the runtime at pull time so that the appropriate snapshotter can be used.

A pod sandbox config with an annotation specifying the runtime; the image pull will then use the correct snapshotter for unpacking:

{
  "metadata": {
    "name": "busybox-sandbox-devmapper",
    "namespace": "default",
    "attempt": 1,
    "uid": "hdishd83djaidwnduwk28bcsb"
  },
  "log_directory": "/tmp",
  "linux": {
  },
  "annotations": {
    "io.kubernetes.cri/runtimehandler": "my-runtime"
  }
}
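The fallback logic is simple to sketch. Below is a minimal stand-alone illustration (the `Runtime` struct and function names here are stand-ins for containerd's actual per-runtime config types, not the real API):

```go
package main

import "fmt"

// Runtime is an illustrative stand-in for the CRI per-runtime config entry.
type Runtime struct {
	Type        string
	Snapshotter string // per-runtime override; empty means "use the default"
}

// runtimeSnapshotter returns the snapshotter for a runtime, falling back
// to the global default when no per-runtime override is configured.
func runtimeSnapshotter(defaultSnapshotter string, r Runtime) string {
	if r.Snapshotter == "" {
		return defaultSnapshotter
	}
	return r.Snapshotter
}

func main() {
	myRuntime := Runtime{Type: "io.containerd.my-runtime.v2", Snapshotter: "devmapper"}
	runc := Runtime{Type: "io.containerd.runc.v2"}
	fmt.Println(runtimeSnapshotter("overlayfs", myRuntime)) // devmapper
	fmt.Println(runtimeSnapshotter("overlayfs", runc))      // overlayfs
}
```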

Testing done

  • Pull an image with annotation "io.kubernetes.cri/runtimehandler": "my-runtime", image is unpacked with devmapper
  • Pull the same image without annotation, image is unpacked with overlayfs
  • Create a pod with runtime my-runtime, sandbox is created with devmapper snapshotter
  • Create a pod without runtime, sandbox is created with overlayfs snapshotter
  • Create a container within the my-runtime pod, container uses devmapper snapshotter
  • Create a container within the default pod, container uses overlayfs snapshotter
  • Snapshots were removed cleanly after containers/pods are deleted
  • No snapshotter override for runtime, used the default snapshotter
  • No snapshotter override for runtime, with "io.kubernetes.cri/runtimehandler": "my-runtime" still used default snapshotter
  • No snapshotter override for runtime, pull with "io.kubernetes.cri/runtimehandler": "my-runtime", using default snapshotter
  • No snapshotter override for runtime, pull without "io.kubernetes.cri/runtimehandler": "my-runtime", using default snapshotter
  • List images
  • Remove an image with both overlayfs and devmapper snapshotters, both snapshotters are cleaned up
  • imagefsinfo: expecting only the default snapshotter's fs info; full support is on the TODO list
  • Inspect images
  • crictl info, all runtimes and their snapshotters are of the expected values
  • stats: behavior unchanged as expected

Test logs

  1. Pull images with annotations "io.kubernetes.cri/runtimehandler": "my-runtime"
crictl pull --pod-config ./podsandbox-config-devmapper.json docker.io/library/busybox:latest

cat ./podsandbox-config-devmapper.json 
{
  "metadata": {
    "name": "busybox-sandbox-devmapper",
    "namespace": "default",
    "attempt": 1,
    "uid": "hdishd83djaidwnduwk28bcsb"
  },
  "log_directory": "/tmp",
  "linux": {
  },
  "annotations": {
    "io.kubernetes.cri/runtimehandler": "my-runtime"
  }
}

May 17 22:07:45 shuaichang-containerd-patch containerd[27091]: time="2022-05-17T22:07:45.693190447Z" level=debug msg="Set snapshotter for runtime io.containerd.my-runtimev2 to devmapper"
May 17 22:07:45 shuaichang-containerd-patch containerd[27091]: time="2022-05-17T22:07:45.693230747Z" level=info msg="experimental: PullImage \"docker.io/library/busybox:latest\" for runtime my-runtime, using snapshotter devmapper"
  2. Pull the same image without annotation, image is unpacked with overlayfs
crictl pull --pod-config ./podsandbox-config-default.json docker.io/library/busybox:latest

{
  "metadata": {
    "name": "busybox-sandbox-default",
    "namespace": "default",
    "attempt": 1,
    "uid": "hdishd83djaidwnduwk28bcsb"
  },
  "log_directory": "/tmp",
  "linux": {
  }
}

May 17 22:08:30 shuaichang-containerd-patch containerd[27091]: time="2022-05-17T22:08:30.888721442Z" level=info msg="PullImage \"docker.io/library/busybox:latest\""
May 17 22:08:30 shuaichang-containerd-patch containerd[27091]: time="2022-05-17T22:08:30.888831043Z" level=debug msg="PullImage \"docker.io/library/busybox:latest\" with snapshotter overlayfs"
  3. Create a pod with runtime my-runtime, sandbox is created with devmapper snapshotter
crictl runp --runtime my-runtime ./podsandbox-config-devmapper.json

# Inspect pod
  "info": {
    "pid": 12251,
    "processStatus": "running",
    "netNamespaceClosed": false,
    "image": "mcr.microsoft.com/oss/kubernetes/pause:3.6",
    "snapshotKey": "ad0458f9cb8f9584c59c14320d7c3fa898d19ae5f6c2a42abd6de2eda6f69b73",
    "snapshotter": "devmapper",
    "runtimeHandler": "my-runtime",
  4. Create a pod without runtime, sandbox is created with overlayfs snapshotter
crictl runp ./podsandbox-config-default.json

# Inspect pod
  "info": {
    "pid": 13512,
    "processStatus": "running",
    "netNamespaceClosed": false,
    "image": "mcr.microsoft.com/oss/kubernetes/pause:3.6",
    "snapshotKey": "06cf19be56bab58536f1a44ab67af96da8d6a3867624b307b66c3ef70d00362d",
    "snapshotter": "overlayfs",
    "runtimeHandler": "",
  5. Create a container within the my-runtime pod, container uses devmapper snapshotter
root@shuaichang-containerd-patch:~/crictl-exp# crictl create ad0458f9cb8f9 container-config.json podsandbox-config-default.json 

# Inspect container
  "info": {
    "sandboxID": "ad0458f9cb8f9584c59c14320d7c3fa898d19ae5f6c2a42abd6de2eda6f69b73",
    "pid": 0,
    "removing": false,
    "snapshotKey": "0b6abe6d4582e959ea38665b25e5a2d6fb5d5cd0e22cea1ea918487069a93d9d",
    "snapshotter": "devmapper",
    "runtimeType": "io.containerd.my-runtime.v2",
  6. Create a container within the default pod, container uses overlayfs snapshotter
crictl create 06cf19be56bab container-config.json podsandbox-config-devmapper.json 

# Inspect container

  "info": {
    "sandboxID": "06cf19be56bab58536f1a44ab67af96da8d6a3867624b307b66c3ef70d00362d",
    "pid": 0,
    "removing": false,
    "snapshotKey": "4259cabaeaa9dbda6d25252ceba6161983c307b960fcc206c9bce9c8de42a060",
    "snapshotter": "overlayfs",
    "runtimeType": "io.containerd.runc.v2",
    "runtimeOptions": {
      "binary_name": "/usr/bin/runc"
    },
  7. Snapshots were removed cleanly after containers/pods are deleted
# Note: all the mounts are gone, both devmapper and overlayfs
crictl stopp 06cf19be56bab
crictl stopp ad0458f9cb8f9
crictl rmp ad0458f9cb8f9
crictl rmp 06cf19be56bab
mount|grep runc
  8. No snapshotter override for runtime, used the default snapshotter
  "info": {
    "pid": 9818,
    "processStatus": "running",
    "netNamespaceClosed": false,
    "image": "mcr.microsoft.com/oss/kubernetes/pause:3.6",
    "snapshotKey": "1f7561825971d3b02e7280ae80fdabba3646c158bf1e2c83dc2d2afdb2681d7b",
    "snapshotter": "overlayfs",
    "runtimeHandler": "my-runtime",
  9. No snapshotter override for runtime, with "io.kubernetes.cri/runtimehandler": "my-runtime", still used the default snapshotter
crictl runp --runtime my-runtime ./podsandbox-config-default.json

  "info": {
    "pid": 10946,
    "processStatus": "running",
    "netNamespaceClosed": false,
    "image": "mcr.microsoft.com/oss/kubernetes/pause:3.6",
    "snapshotKey": "898a1775c10bda85ac107759d6611d19a561028452cd5a104fe120c1ed016e28",
    "snapshotter": "overlayfs",
    "runtimeHandler": "my-runtime",
  10. No snapshotter override for runtime, pull with "io.kubernetes.cri/runtimehandler": "my-runtime", using default snapshotter
crictl pull --pod-config ./podsandbox-config-devmapper.json docker.io/library/busybox:latest

May 18 05:09:27 shuaichang-containerd-patch containerd[4422]: time="2022-05-18T05:09:27.311754823Z" level=debug msg="PullImage \"docker.io/library/busybox:latest\" with snapshotter overlayfs"
  11. No snapshotter override for runtime, pull without "io.kubernetes.cri/runtimehandler": "my-runtime", using default snapshotter
crictl pull --pod-config ./podsandbox-config-default.json docker.io/library/busybox:latest
May 18 05:10:30 shuaichang-containerd-patch containerd[4422]: time="2022-05-18T05:10:30.361737931Z" level=debug msg="PullImage \"docker.io/busybox:latest/busybox:latest\" with snapshotter overlayfs"
  12. List images
crictl images
IMAGE                        TAG       IMAGE ID        SIZE
docker.io/library/busybox    latest    1a80408de790c   777kB
docker.io/library/ubuntu     latest    825d55fb63400   28.6MB
  13. Remove an image with both overlayfs and devmapper snapshotters, both snapshotters are cleaned up
crictl rmi docker.io/library/busybox:latest

sha256:eb6b01329ebe73e209e44a616a0e16c2b8e91de6f719df9c35e6cdadadbe5965" snapshotter=overlayfs
May 19 06:30:50 shuaichang-containerd-patch containerd[4001]: time="2022-05-19T06:30:50.037608744Z" level=debug msg="snapshot garbage collected" d=10.517849ms snapshotter=overlayfs

sha256:eb6b01329ebe73e209e44a616a0e16c2b8e91de6f719df9c35e6cdadadbe5965" snapshotter=devmapper
May 19 06:30:50 shuaichang-containerd-patch containerd[4001]: time="2022-05-19T06:30:50.055348796Z" level=debug msg="snapshot garbage collected" d=28.284601ms snapshotter=devmapper
  14. imagefsinfo: expecting only the default snapshotter's fs info; full support is on the TODO list
{
  "status": {
    "timestamp": "1652944758835233108",
    "fsId": {
      "mountpoint": "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
    },
    "usedBytes": {
      "value": "28228698112"
    },
    "inodesUsed": {
      "value": "731121"
    }
  }
}
  15. Inspect images
crictl inspecti docker.io/library/busybox:latest
{
  "status": {
    "id": "sha256:1a80408de790c0b1075d0a7e23ff7da78b311f85f36ea10098e4a6184c200964",
    "repoTags": [
      "docker.io/library/busybox:latest"
    ],
    "repoDigests": [
      "docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8"
    ],
    "size": "777091",
    "uid": null,
    "username": "",
    "spec": null
  },
  "info": {
    "chainID": "sha256:eb6b01329ebe73e209e44a616a0e16c2b8e91de6f719df9c35e6cdadadbe5965",
    "imageSpec": {
      "created": "2022-04-14T02:29:36.517566461Z",
      "architecture": "amd64",
      "os": "linux",
      "config": {
        "Env": [
          "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
        ],
        "Cmd": [
          "sh"
        ]
      },
      "rootfs": {
        "type": "layers",
        "diff_ids": [
          "sha256:eb6b01329ebe73e209e44a616a0e16c2b8e91de6f719df9c35e6cdadadbe5965"
        ]
      },
      "history": [
        {
          "created": "2022-04-14T02:29:36.368193089Z",
          "created_by": "/bin/sh -c #(nop) ADD file:1c8dd4a97e690506e2c94f7dee8e24c0612c0f227736d6259e5045f9d1efce02 in / "
        },
        {
          "created": "2022-04-14T02:29:36.517566461Z",
          "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",
          "empty_layer": true
        }
      ]
    }
  }
}
  16. crictl info, all runtimes and their snapshotters are of the expected values
crictl info

  "config": {
    "containerd": {
      "snapshotter": "overlayfs",

        "my-runtime": {
          "runtimeType": "io.containerd.my-runtime.v2",
         .......
          "snapshotter": "devmapper"
        },
  17. stats: behavior unchanged as expected
crictl stats
CONTAINER           CPU %               MEM                 DISK                INODES
1826178f1d02f       0.00                798.7kB             16.38kB             7

@k8s-ci-robot

Hi @shuaichang. Thanks for your PR.

I'm waiting for a containerd member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@shuaichang shuaichang closed this May 5, 2022
@shuaichang shuaichang reopened this May 5, 2022
@shuaichang shuaichang force-pushed the ISSUE6657-support-runtime-snapshotter branch 2 times, most recently from a42c17b to 6aaea06 on May 5, 2022 23:41
.gitignore (review thread outdated; resolved)
@shuaichang shuaichang force-pushed the ISSUE6657-support-runtime-snapshotter branch from 6aaea06 to 366d89a on May 6, 2022 00:01
Comment on lines 189 to 190
containerd.WithSnapshotter(c.config.ContainerdConfig.Snapshotter),
containerd.WithSnapshotter(c.runtimeSnapshotter(ociRuntime)),
Member

How does it work with criService.snapshotStore? In follow-up PRs, changes might be needed for some other APIs that use snapshotStore (e.g. ImageFsInfo, ContainerStats, etc.)

Contributor Author

Good points.

  • For ContainerStats, this should be no change of behavior; the snapshot info should still be accurate for the different snapshotters.
  • For ImageFsInfo, we probably want to do something like the following. BTW, the devmapper snapshotter seems to use /var/lib/containerd/io.containerd.snapshotter.v1.devmapper/ as the mount point, which does not really mean anything, but it seems reasonable to return snapshotter usages separately.
    [screenshot omitted]

I can address ImageFsInfo in a follow-up PR.

@shuaichang shuaichang force-pushed the ISSUE6657-support-runtime-snapshotter branch from 9430e3f to afe2ec3 on May 6, 2022 06:42
@AkihiroSuda AkihiroSuda added kind/enhancement area/cri Container Runtime Interface (CRI) labels May 6, 2022
@AkihiroSuda AkihiroSuda requested a review from Random-Liu May 6, 2022 08:01
@shuaichang
Contributor Author

@AkihiroSuda @ktock @dmcgowan @Random-Liu kindly pinging to check whether you have more comments. I would appreciate a list of action items needed to get to ok-to-test.

@mikebrow
Member

mikebrow commented May 9, 2022

/ok-to-test

Member

@mikebrow mikebrow left a comment

Thanks for the PR! A couple of comments:

pkg/cri/config/config.go (review thread outdated; resolved)
pkg/cri/config/config.go (review thread outdated; resolved)
pkg/cri/server/image_pull.go (review thread outdated; resolved)
pkg/cri/server/image_pull.go (review thread outdated; resolved)
@kevpar
Member

kevpar commented May 10, 2022

I'm going to take a more thorough look at this, but my main concern is that there could be cases that arise in CRI where a snapshotter is implied (or one is needed in a context where we don't have a runtime handler), and those parts may not play nicely with this.

For instance:

  • What happens if you have images pulled with multiple snapshotters and you try to calculate usage for those images. Will the various code paths for this have access to the snapshotter that should be used?
  • What if the same image is pulled twice, but with two different snapshotters? Will the correct bookkeeping happen?
  • Presumably Kubelet just tracks images by name, and doesn't know what snapshotter was used. So if image A is pulled by snapshotter 1, then Kubelet will just know that the image is present, and potentially schedule a pod that wants to use A with snapshotter 2.

@mikebrow
Member

I'm going to take a more thorough look at this, but my main concern is that there could be cases that arise in CRI where a snapshotter is implied (or one is needed in a context where we don't have a runtime handler), and those parts may not play nicely with this.

For instance:

  • What happens if you have images pulled with multiple snapshotters and you try to calculate usage for those images. Will the various code paths for this have access to the snapshotter that should be used?
  • What if the same image is pulled twice, but with two different snapshotters? Will the correct bookkeeping happen?
  • Presumably Kubelet just tracks images by name, and doesn't know what snapshotter was used. So if image A is pulled by snapshotter 1, then Kubelet will just know that the image is present, and potentially schedule a pod that wants to use A with snapshotter 2.

nod.. and a "KEP adding the runtime handler to the image services part of CRI" is a must-have both for this (n snapshotters per runtime class) and for the in-VM guest-pulled-images work (confidential containers / kata / ...). We'll also have to cover the extra aggregation and filtering work for image presence, info, and stats required in the sync/metrics/gc parts of kubelet due to the per-runtime expansion. I'm looking at this as experimental work toward the larger advancement of the CRI image service.

@fuweid
Member

fuweid commented May 10, 2022

What happens if you have images pulled with multiple snapshotters and you try to calculate usage for those images. Will the various code paths for this have access to the snapshotter that should be used?
What if the same image is pulled twice, but with two different snapshotters? Will the correct bookkeeping happen?

IMO, it is easy to handle the bookkeeping on the containerd side. But the question is the kubelet side. Different snapshotters may use different storage as the backing volume. In the current CRI-API design, the CRI uses just one storage location for image data and the container writable layer; that storage path is returned by the ImageService.ImageFsInfo API.

The eviction manager relies on this FilesystemIdentifier to handle image GC and to evict low-priority pods on the node to release disk space.

If the main storage path is used for overlayfs but most of the snapshots use devmapper, eviction will not work.

// https://github.com/kubernetes/cri-api/blob/master/pkg/apis/runtime/v1/api.proto#L1400

// FilesystemUsage provides the filesystem usage information.
message FilesystemUsage {
    // Timestamp in nanoseconds at which the information were collected. Must be > 0.
    int64 timestamp = 1;
    // The unique identifier of the filesystem.
    FilesystemIdentifier fs_id = 2;
    // UsedBytes represents the bytes used for images on the filesystem.
    // This may differ from the total bytes used on the filesystem and may not
    // equal CapacityBytes - AvailableBytes.
    UInt64Value used_bytes = 3;
    // InodesUsed represents the inodes used by the images.
    // This may not equal InodesCapacity - InodesAvailable because the underlying
    // filesystem may also be used for purposes other than storing images.
    UInt64Value inodes_used = 4;
}

message ImageFsInfoResponse {
    // Information of image filesystem(s).
    repeated FilesystemUsage image_filesystems = 1;
}

I just bring up one case here, but I think it will be a big change between kubelet and the CRI container runtime (moving these concerns down to the CRI runtime). :)
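Since the CRI ImageFsInfoResponse already declares image_filesystems as a repeated field, a per-snapshotter response is structurally possible today. A minimal sketch of how containerd could populate one entry per snapshotter (this is not containerd's implementation; the function and the usage numbers are illustrative, and only the mountpoint layout follows containerd's snapshotter root convention):

```go
package main

import (
	"fmt"
	"sort"
)

// FilesystemUsage mirrors, in simplified form, the CRI message quoted above.
type FilesystemUsage struct {
	Mountpoint string
	UsedBytes  uint64
	InodesUsed uint64
}

// imageFsInfo builds one FilesystemUsage entry per snapshotter, the shape
// a per-snapshotter ImageFsInfoResponse could take via the repeated
// image_filesystems field.
func imageFsInfo(usedBySnapshotter map[string]uint64) []FilesystemUsage {
	names := make([]string, 0, len(usedBySnapshotter))
	for name := range usedBySnapshotter {
		names = append(names, name)
	}
	sort.Strings(names) // deterministic order for callers

	out := make([]FilesystemUsage, 0, len(names))
	for _, name := range names {
		out = append(out, FilesystemUsage{
			// containerd snapshotter roots live under this path layout.
			Mountpoint: "/var/lib/containerd/io.containerd.snapshotter.v1." + name,
			UsedBytes:  usedBySnapshotter[name],
		})
	}
	return out
}

func main() {
	// Placeholder usage values, not real measurements.
	for _, fs := range imageFsInfo(map[string]uint64{"overlayfs": 28228698112, "devmapper": 1048576}) {
		fmt.Printf("%s used=%d\n", fs.Mountpoint, fs.UsedBytes)
	}
}
```

As fuweid notes, the harder half of the problem is kubelet interpreting multiple entries for eviction and image GC, which is outside containerd's control.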

@shuaichang
Contributor Author

shuaichang commented May 10, 2022

@kevpar these are definitely great questions; as @mikebrow and @fuweid noted, the comprehensive picture seems to be a bigger project and should be driven by a KEP. For now, providing an experimental feature should unblock us and potentially some other use cases.

What happens if you have images pulled with multiple snapshotters and you try to calculate usage for those images. Will the various code paths for this have access to the snapshotter that should be used?
I think from the CRI point of view, separating the usage per snapshotter should be sufficient. However, it's up to kubelet to interpret the result and make decisions.

A side note: I think for many snapshotters other than overlayfs, kubelet usage accounting is already unreliable. For example, for devmapper it counts total used bytes correctly but uses the snapshotter meta dir for the total fs bytes, so the usage percentage is not accurate. This is probably true for many other snapshotters.

I think all of this deserves a KEP to re-think how kubelet and CRI interact with different snapshotters.

[screenshot omitted]

What if the same image is pulled twice, but with two different snapshotters? Will the correct bookkeeping happen?
Reading the code, the CRI behavior seems clear: it will fetch and unpack for the given snapshotter. But if kubelet decides the image already exists and does not pull again, then the unpack (by a different snapshotter) will be delayed until container creation time. This seems harmless, though kubelet probably needs to be aware of the image-runtime association to avoid such behavior.

Presumably Kubelet just tracks images by name, and doesn't know what snapshotter was used. So if image A is pulled by snapshotter 1, then Kubelet will just know that the image is present, and potentially schedule a pod that wants to use A with snapshotter 2.

When a pod using snapshotter2 is scheduled on a node where the image was unpacked with snapshotter1, containerd has logic to identify whether the image is unpacked, so it will do the unpack at container creation time. This makes creation slower but should be harmless IMO (similar to PullImage without the unpack option).
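The lazy-unpack behavior described above can be modeled with a small stand-alone sketch (the `Image` type and `EnsureUnpacked` method are illustrative stand-ins; containerd's real client tracks unpack state per snapshotter through its metadata store, not an in-memory map):

```go
package main

import "fmt"

// Image models, minimally, per-snapshotter unpack bookkeeping.
type Image struct {
	name     string
	unpacked map[string]bool // snapshotter name -> already unpacked?
}

// EnsureUnpacked returns true when it had to perform a lazy unpack for the
// requested snapshotter, i.e. the image was previously unpacked only for
// a different snapshotter (or not at all).
func (i *Image) EnsureUnpacked(snapshotter string) (didUnpack bool) {
	if i.unpacked[snapshotter] {
		return false // fast path: already unpacked for this snapshotter
	}
	// ... apply the image's layers through the requested snapshotter ...
	i.unpacked[snapshotter] = true
	return true
}

func main() {
	img := &Image{
		name:     "docker.io/library/busybox:latest",
		unpacked: map[string]bool{"overlayfs": true}, // pulled with overlayfs
	}
	fmt.Println(img.EnsureUnpacked("devmapper")) // true: lazy unpack at container create
	fmt.Println(img.EnsureUnpacked("devmapper")) // false: already unpacked now
}
```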

@shuaichang
Contributor Author

@mikebrow addressed the following comments:

suggest just make the key a const string

Done

pls mark the const as needing to be removed.. upon a KEP adding the runtime handler to the image services

Done

@shuaichang
Contributor Author

@fuweid totally agree that CRI is a better place for the bookkeeping. I think kubelet has difficulty getting total usage for most snapshotters other than overlayfs. It seems to me the ImageService.ImageFsInfo API should return per-runtime usage and per-runtime totals, for pod scheduling and image GC. But this requires quite some changes on the kubelet end.

@dmcgowan
Member

dmcgowan commented Jun 1, 2022

"io.kubernetes.cri/runtimehandler" is not a namespace owned by this project. Is there existing discussions with sig-node about defining this experimental annotation? If this experiment is only related to containerd we should use our own namespace.

@shuaichang
Contributor Author

"io.kubernetes.cri/runtimehandler" is not a namespace owned by this project. Is there existing discussions with sig-node about defining this experimental annotation? If this experiment is only related to containerd we should use our own namespace.

@dmcgowan I think the feature is only related to containerd; could you point me to the namespace owned by containerd?

@shuaichang shuaichang force-pushed the ISSUE6657-support-runtime-snapshotter branch from a46e4a8 to 6b9307a on June 2, 2022 21:18
@shuaichang
Contributor Author

shuaichang commented Jun 2, 2022

As @dmcgowan suggested, I've updated the PR to use an annotation name within the containerd namespace: io.containerd.cri.runtime-handler

RuntimeHandler = "io.containerd.cri.runtime-handler"

@mikebrow @fuweid @kevpar @AkihiroSuda please feel free to comment if there are objections or concerns about the naming, thanks!
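For illustration, the earlier pod sandbox config would then use the renamed key (same example values as before; only the annotation key changes):

```json
{
  "metadata": {
    "name": "busybox-sandbox-devmapper",
    "namespace": "default",
    "attempt": 1,
    "uid": "hdishd83djaidwnduwk28bcsb"
  },
  "log_directory": "/tmp",
  "linux": {},
  "annotations": {
    "io.containerd.cri.runtime-handler": "my-runtime"
  }
}
```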

@shuaichang shuaichang force-pushed the ISSUE6657-support-runtime-snapshotter branch from 6b9307a to 74adc65 on June 2, 2022 22:59
Signed-off-by: shuaichang <shuai.chang@databricks.com>

Updated annotation name
@shuaichang shuaichang force-pushed the ISSUE6657-support-runtime-snapshotter branch from 74adc65 to 7b9f1d4 on June 2, 2022 23:30
Member

@fuweid fuweid left a comment

LGTM

Member

@estesp estesp left a comment

LGTM

@estesp estesp merged commit 2b661b8 into containerd:main Jun 3, 2022
@anakrish

anakrish commented Oct 4, 2022

@shuaichang Will this feature be available in release 1.6.9? We are interested in the ability to specify a different snapshotter per runtime, but would like to avail of this feature without having to build containerd ourselves.

@anakrish anakrish mentioned this pull request Oct 4, 2022
imeoer added a commit to nydusaccelerator/containerd that referenced this pull request Aug 14, 2023
Related with containerd#6899
The patch fixes the handling of sandbox run and container create.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>