
Create Pod Sandbox #6

Open
mrunalp opened this issue Jul 15, 2016 · 14 comments

Comments

mrunalp (Owner) commented Jul 15, 2016

This is a rough flow that I have in mind.

  1. Check if a configured image for the sandbox exists in the local repo, e.g. /var/lib/oci/images/sandboximage.
  2. If not, pull it using the containers/image library.
  3. Use containers/storage to create the rootfs /var/lib/oci/containers/container-id/storage-type/sandboximage. For cases such as sandboximage, we need not even use a container id, as this rootfs could be shared by all pods. The storage API should take parameters to allow such use cases.
  4. Use the ocitools generate library to create a template from the parameters specified in the Request object and merge in the config from the image.
  5. Launch runc using the rootfs and config.json.
  6. Monitor the sandbox container (there are various subtasks here that we can drill into later, like managing logs, handling cgroups, etc.).

This would require ocid to take the name of the sandbox image type as a flag. Another flag will be needed to pick the default storage, along with a way to override it to share a read-only rootfs as described above.
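Roughly, in Go (a minimal sketch; every helper here - imageExists, pullImage, createRootfs, generateSpec, launchRunc, monitorSandbox - is a hypothetical placeholder for the containers/image, containers/storage, ocitools generate, and runc integration, not a real library call):

```go
package ocid

// SandboxRequest stands in for the parameters of the kubelet Request object.
type SandboxRequest struct {
	Name string
}

// CreatePodSandbox walks steps 1-6 of the flow above.
func CreatePodSandbox(req *SandboxRequest, sandboxImage string) error {
	// 1. Check if the image exists in the local repo,
	//    e.g. /var/lib/oci/images/sandboximage.
	if !imageExists(sandboxImage) {
		// 2. If not, pull it with containers/image.
		if err := pullImage(sandboxImage); err != nil {
			return err
		}
	}
	// 3. Create the rootfs with containers/storage; the sandbox image
	//    can be shared read-only across pods, so no container id is
	//    needed in its path.
	rootfs, err := createRootfs(sandboxImage, true /* sharedReadOnly */)
	if err != nil {
		return err
	}
	// 4. Generate config.json from the request parameters, merging in
	//    the config from the image (ocitools generate).
	configPath, err := generateSpec(req, sandboxImage)
	if err != nil {
		return err
	}
	// 5. Launch runc with the rootfs and config.json.
	id, err := launchRunc(rootfs, configPath)
	if err != nil {
		return err
	}
	// 6. Monitor the sandbox container (logs, cgroups, etc.).
	return monitorSandbox(id)
}

// Stubs so the sketch compiles; the real implementations would call
// into the libraries named above.
func imageExists(name string) bool                                   { return false }
func pullImage(name string) error                                    { return nil }
func createRootfs(image string, sharedReadOnly bool) (string, error) { return "", nil }
func generateSpec(req *SandboxRequest, image string) (string, error) { return "", nil }
func launchRunc(rootfs, configPath string) (string, error)           { return "", nil }
func monitorSandbox(id string) error                                 { return nil }
```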

mrunalp (Owner) commented Jul 15, 2016

@rhatdan @runcom @nalind PTAL.

runcom (Contributor) commented Jul 15, 2016

4.1 merge config from image

The rough flow seems fine to me. What is really missing, though it is still part of this flow, is the CAS storage where images are pulled, indexed, cached, etc., and where libcow kicks in. @nalind does that make sense?

mrunalp (Owner) commented Jul 15, 2016

@runcom Updated to add your suggestion of merging config. We do need to figure out how much of the image logic will be in the daemon and how much in the library.

runcom (Contributor) commented Jul 15, 2016

re: the flag for image type - we would leverage the abstraction made in containers/image, where an image reference has a prefix defining the technology/transport (for example, to run a container based on the Docker busybox image, containers/image first downloads it from the Docker registry and then stores it in the image storage).
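For example (a sketch against containers/image's current API; exact import paths and helpers may shift as the library evolves):

```go
package main

import (
	"fmt"

	"github.com/containers/image/transports/alltransports"
)

func main() {
	// The prefix selects the transport: "docker://" means a Docker
	// registry, "oci:" a local OCI layout directory, and so on.
	ref, err := alltransports.ParseImageName("docker://busybox:latest")
	if err != nil {
		panic(err)
	}
	fmt.Println(ref.Transport().Name()) // the selected transport, e.g. "docker"
}
```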

rhatdan commented Jul 16, 2016

Yes, the question of storage is key here. In storage we want to be able to support the "networked storage case": if I go to run the "foobar" container and I have the "foobar" rootfs available via NFS, I want to use that rather than pull the image to the host. So we need the storage layer to be smart enough to understand the configuration of the image store(s).

I believe we have four different components interacting to make this happen.

  • storage (Currently cowman)
  • image (skopeo)
  • runtime (runc)
  • management /API (ocid)

In the quick design you defined above, I think it would be helpful if we broke down which component is responsible for each action.

  1. Check if a configured image for the sandbox exists in the local repo, e.g. /var/lib/oci/images/sandboximage. (storage)
  2. If not, pull it using the containers/image library. (image)
  3. Use containers/storage to create the rootfs /var/lib/oci/containers/container-id/storage-type/sandboximage. For cases such as sandboximage, we need not even use a container id, as this rootfs could be shared by all pods. The storage API should take parameters to allow such use cases. (storage, management/API)
  4. Use the ocitools generate library to create a template from the parameters specified in the Request object and merge in the config from the image. (management/API, ocitools)
  5. Launch runc using the rootfs and config.json.
  6. Monitor the sandbox container (there are various subtasks here that we can drill into later, like managing logs and handling cgroups, etc.). (runc)

runcom (Contributor) commented Jul 17, 2016

As part of downloading docker images to be run as OCI runc containers, I've come across registry credential handling and opened containers/image#41.

@mrunalp I'm not familiar with k8s at all, but I have a question.

In a normal Pod creation workflow, how does one pass credentials to authenticate against a registry when pulling an image (cli or yaml)?
I guess the question is: in our new docker-less scenario, how do we ask the user for, or retrieve, credentials to authenticate against registries, given we don't have access to the ~/.docker/config.json file? Should ocid provide a way of handling credentials like the docker daemon does, one that the kubelet can interact with?

mrunalp (Owner) commented Jul 17, 2016

@runcom Yes, I think we should define our own config for accessing docker and other registries. Also, I think this code in kubernetes may be relevant: https://github.com/kubernetes/kubernetes/blob/f2ddd60eb9e7e9e29f7a105a9a8fa020042e8e52/pkg/credentialprovider
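A minimal sketch of reading such a docker-style credentials file (the format is the one credentialprovider and skopeo already understand; ocid would point this at its own configured location rather than ~/.docker/config.json):

```go
package ocid

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// authFile mirrors the docker config.json layout: the "auth" field is
// base64("username:password"), keyed by registry.
type authFile struct {
	Auths map[string]struct {
		Auth string `json:"auth"`
	} `json:"auths"`
}

// credentials looks up the username/password for a registry in a
// docker-style credentials file.
func credentials(path, registry string) (user, pass string, err error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", "", err
	}
	var f authFile
	if err := json.Unmarshal(data, &f); err != nil {
		return "", "", err
	}
	entry, ok := f.Auths[registry]
	if !ok {
		return "", "", fmt.Errorf("no credentials for %s", registry)
	}
	decoded, err := base64.StdEncoding.DecodeString(entry.Auth)
	if err != nil {
		return "", "", err
	}
	parts := strings.SplitN(string(decoded), ":", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("malformed auth entry for %s", registry)
	}
	return parts[0], parts[1], nil
}
```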

runcom (Contributor) commented Jul 17, 2016

Right, that code is relevant, but it assumes .docker/config.json (or the old format) is present on the host (skopeo does the same in a similar manner).

BTW, I believe this is already possible when creating the yaml pod specification (based on this reply: http://stackoverflow.com/a/36280670).
This way the CRI can receive the AuthConfig (https://github.com/kubernetes/kubernetes/pull/25899/files#diff-b99b84f6471ccf2077dedc93530a51a2R401) populated, and containers/image can use that struct in OCID to retrieve the username/password to authenticate.
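On the OCID side that could look roughly like this (a sketch: the AuthConfig fields follow the proposed runtime API linked above, and types.SystemContext / types.DockerAuthConfig are assumed from containers/image's current API):

```go
package ocid

import "github.com/containers/image/types"

// AuthConfig mirrors the message the kubelet would send over the CRI.
type AuthConfig struct {
	Username string
	Password string
}

// systemContext turns the CRI-provided credentials into the
// SystemContext that containers/image operations accept.
func systemContext(auth *AuthConfig) *types.SystemContext {
	sc := &types.SystemContext{}
	if auth != nil {
		sc.DockerAuthConfig = &types.DockerAuthConfig{
			Username: auth.Username,
			Password: auth.Password,
		}
	}
	return sc
}
```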

mrunalp (Owner) commented Jul 18, 2016

@runcom Yep, we should be able to use that.

mrunalp (Owner) commented Jul 18, 2016

@rhatdan I would imagine that we need some per-node config, as well as per-pod decorations, to configure storage. Flags like preferNFS could live in the config, and shareReadOnly could be passed down through the API. WDYT?
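Roughly this shape (a sketch; names like PreferNFS and ShareReadOnly are illustrative, nothing here is settled):

```go
package ocid

// NodeStorageConfig is per-node configuration read from the ocid
// config file.
type NodeStorageConfig struct {
	DefaultStorage string // default storage driver, e.g. "overlay"
	PreferNFS      bool   // use a networked rootfs when one is available
}

// PodStorageOptions are per-pod decorations passed down through the API.
type PodStorageOptions struct {
	ShareReadOnly bool // share a read-only rootfs across pods
}
```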

rhatdan commented Jul 19, 2016

SGTM

runcom (Contributor) commented Jul 19, 2016

  2. If not, pull it using the containers/image library.

@mrunalp does containers/image kick in as part of the RuntimeService or the ImageService?
What I mean is: by the time we reach that step, won't the answer always be YES, because the kubelet will have already pulled the image as part of the ImageService pull operation, so the image is already available on the node?

at least based on https://github.com/kubernetes/kubernetes/pull/17048/files#diff-822f0e081c10d8b83d7c2ad1391d55f7R85

mrunalp (Owner) commented Jul 19, 2016

@runcom Yes, we could write a test wrapper that does the image pull before creating the sandbox or starting a container. This could work to simulate kubelet integration until the kubelet client changes are done.

runcom (Contributor) commented Jul 19, 2016

I'd love to split points 1) and 2) out into the ImageService, which, as I understand it, is a totally different service that the kubelet queries to work on images. The other points belong to the RuntimeService instead. I'll generate a stub for the ImageService to begin with, roughly the shape sketched below.
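A first cut (method names loosely follow the interface proposed in the kubelet PR linked earlier; the signatures are illustrative):

```go
package ocid

// Image is a minimal description of an image present on the node.
type Image struct {
	ID   string
	Tags []string
}

// AuthConfig carries registry credentials, as discussed earlier in
// this thread.
type AuthConfig struct {
	Username string
	Password string
}

// ImageService covers points 1) and 2): everything the kubelet needs
// to work on images, split away from the RuntimeService.
type ImageService interface {
	// ListImages lists the images already present on the node.
	ListImages(filter string) ([]Image, error)
	// ImageStatus returns the status of a single image.
	ImageStatus(image string) (*Image, error)
	// PullImage pulls an image, authenticating if credentials are given.
	PullImage(image string, auth *AuthConfig) error
	// RemoveImage removes an image from the node.
	RemoveImage(image string) error
}
```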
