"System Containers" mechanism design #298

Closed
cgwalters opened this Issue Feb 6, 2016 · 55 comments


@cgwalters
Member

I was talking with Giuseppe, who is working on https://github.com/giuseppe/flannel-etcd-containerized,
and there were a few things here that could be done better. We were thinking of having atomic install-spc docker.io/projectatomic/flannel, which would do a docker fetch | ostree commit, then use ostree to check the image out into /var/spc/$app.[01] (basically a versioned directory).

We also know how to generate systemd unit files (possibly from a template upstream?). The unit files would use either runc or systemd-nspawn or whatever.

Then atomic upgrade-spc would know how to do the re-fetch, unpack a new fs root, and restart the systemd unit file pointing to it.

The advantages of this over atomic install/run are that we're not tied to the Docker daemon, which causes all sorts of pain for SPCs that sometimes (like flannel) want to configure Docker itself. And we don't need an UNINSTALL label because we're properly tracking installed files.
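
Roughly, and only as an illustrative sketch in Python (not code from atomic; the repo path, the unit contents, and the flattening of the docker-save output into a single tree are all assumptions here), the install flow could look like:

import os
import subprocess

REPO = "/var/spc/repo"      # assumed location of the OSTree repo
BASE = "/var/spc"           # versioned checkouts live here

def install_spc(image, version=0):
    """Fetch IMAGE, commit it to OSTree, check it out, start a systemd unit."""
    app = os.path.basename(image)
    branch = "spc/" + app
    tarball = "/var/tmp/%s.tar" % app

    # "docker fetch | ostree commit": export the image and commit the tarball.
    subprocess.check_call(["docker", "pull", image])
    subprocess.check_call(["docker", "save", "-o", tarball, image])
    subprocess.check_call(["ostree", "--repo=" + REPO, "commit",
                           "--branch=" + branch, "--tree=tar=" + tarball])

    # Versioned checkout: /var/spc/$app.0, /var/spc/$app.1, ...
    dest = os.path.join(BASE, "%s.%d" % (app, version))
    subprocess.check_call(["ostree", "--repo=" + REPO, "checkout", branch, dest])

    # Generate a unit file pointing at the checkout (runc, nspawn, whatever).
    unit_path = "/etc/systemd/system/%s.service" % app
    with open(unit_path, "w") as f:
        f.write("[Unit]\nDescription=%s system container\n\n"
                "[Service]\nExecStart=/bin/runc --id %s\n"
                "WorkingDirectory=%s\nRestart=on-failure\n\n"
                "[Install]\nWantedBy=multi-user.target\n" % (image, app, dest))
    subprocess.check_call(["systemctl", "daemon-reload"])
    subprocess.check_call(["systemctl", "enable", app + ".service"])
    subprocess.check_call(["systemctl", "start", app + ".service"])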

@cgwalters
Member

Or maybe reuse the existing atomic install verb but add new labels like:

LABEL INSTALLUNIT /usr/lib/systemd/system/flannel.service

Should we try to template the systemd unit file to support multiple parallel installs? Not sure...likely not needed for flannel.

Then we just reuse the existing atomic install/uninstall verbs.

@rhatdan
Member
rhatdan commented Feb 8, 2016

Let's set up a design meeting for this. These are interesting ideas and they might work with some of the other containers-in-production ideas we have been discussing, i.e. running lots of containers with runc/rocker under systemd.

@giuseppe
Member

I have done some experimenting on how to add this to atomic.

I have a work-in-progress branch here:

https://github.com/giuseppe/atomic/commits/giuseppe/install-spc

It is still incomplete; it lacks integration.

One of the design problems I had is how to embed the systemd service file and the OCI spec file.

In my WIP, I have used two new labels: OCI and INSTALLUNIT.

Then I have changed the etcd container Dockerfile to look like:

FROM fedora

MAINTAINER Giuseppe Scrivano <gscrivan@redhat.com>

ENV container=docker

RUN dnf -y install etcd hostname && \
    dnf clean all

ADD etcd_container_template.service /etc/systemd/system/etcd_container_template.service
ADD etcd-env.sh /usr/bin/etcd-env.sh
ADD install.sh  /usr/bin/install.sh
ADD uninstall.sh /usr/bin/uninstall.sh

EXPOSE 4001 7001 2379 2380

LABEL INSTALLUNIT W1VuaXRdCkRlc2NyaXB0aW9uPUV0Y2QgU2VydmVyCkFmdGVyPW5ldHdvcmsudGFyZ2V0CgpbU2VydmljZV0KRXhlY1N0YXJ0PS9iaW4vcnVuYyAtLWlkIGV0Y2QKUmVzdGFydD1vbi1mYWlsdXJlCldvcmtpbmdEaXJlY3Rvcnk9JERFU1QKCltJbnN0YWxsXQpXYW50ZWRCeT1tdWx0aS11c2VyLnRhcmdldAoK

LABEL OCI ewogICAgInZlcnNpb24iOiAicHJlLWRyYWZ0IiwKICAgICJwbGF0Zm9ybSI6IHsKCSJvcyI6ICJsaW51eCIsCgkiYXJjaCI6ICJhbWQ2NCIKICAgIH0sCiAgICAicHJvY2VzcyI6IHsKCSJ0ZXJtaW5hbCI6IGZhbHNlLAoJInVzZXIiOiB7CgkgICAgInVpZCI6IDAsCgkgICAgImdpZCI6IDAsCgkgICAgImFkZGl0aW9uYWxHaWRzIjogbnVsbAoJfSwKCSJhcmdzIjogWwoJICAgICIvdXNyL2Jpbi9ldGNkLWVudi5zaCIsCiAgICAgICAgICAgICIvdXNyL2Jpbi9ldGNkIgoJXSwKCSJlbnYiOiBbCgkgICAgIlBBVEg9L3Vzci9sb2NhbC9zYmluOi91c3IvbG9jYWwvYmluOi91c3Ivc2JpbjovdXNyL2Jpbjovc2JpbjovYmluIiwKCSAgICAiVEVSTT14dGVybSIKCV0sCgkiY3dkIjogIiIKICAgIH0sCiAgICAicm9vdCI6IHsKCSJwYXRoIjogInJvb3RmcyIsCgkicmVhZG9ubHkiOiB0cnVlCiAgICB9LAogICAgImhvc3RuYW1lIjogImV0Y2QiLAogICAgIm1vdW50cyI6IFsKCXsKCSAgICAidHlwZSI6ICJwcm9jIiwKCSAgICAic291cmNlIjogInByb2MiLAoJICAgICJkZXN0aW5hdGlvbiI6ICIvcHJvYyIsCgkgICAgIm9wdGlvbnMiOiAiIgoJfSwKCXsKCSAgICAidHlwZSI6ICJ0bXBmcyIsCgkgICAgInNvdXJjZSI6ICJ0bXBmcyIsCgkgICAgImRlc3RpbmF0aW9uIjogIi9kZXYiLAoJICAgICJvcHRpb25zIjogIm5vc3VpZCxzdHJpY3RhdGltZSxtb2RlPTc1NSxzaXplPTY1NTM2ayIKCX0sCgl7CgkgICAgInR5cGUiOiAiZGV2cHRzIiwKCSAgICAic291cmNlIjogImRldnB0cyIsCgkgICAgImRlc3RpbmF0aW9uIjogIi9kZXYvcHRzIiwKCSAgICAib3B0aW9ucyI6ICJub3N1aWQsbm9leGVjLG5ld2luc3RhbmNlLHB0bXhtb2RlPTA2NjYsbW9kZT0wNjIwLGdpZD01IgoJfSwKCXsKCSAgICAidHlwZSI6ICJ0bXBmcyIsCgkgICAgInNvdXJjZSI6ICJzaG0iLAoJICAgICJkZXN0aW5hdGlvbiI6ICIvZGV2L3NobSIsCgkgICAgIm9wdGlvbnMiOiAibm9zdWlkLG5vZXhlYyxub2Rldixtb2RlPTE3Nzcsc2l6ZT02NTUzNmsiCgl9LAoJewoJICAgICJ0eXBlIjogIm1xdWV1ZSIsCgkgICAgInNvdXJjZSI6ICJtcXVldWUiLAoJICAgICJkZXN0aW5hdGlvbiI6ICIvZGV2L21xdWV1ZSIsCgkgICAgIm9wdGlvbnMiOiAibm9zdWlkLG5vZXhlYyxub2RldiIKCX0sCgl7CgkgICAgInR5cGUiOiAic3lzZnMiLAoJICAgICJzb3VyY2UiOiAic3lzZnMiLAoJICAgICJkZXN0aW5hdGlvbiI6ICIvc3lzIiwKCSAgICAib3B0aW9ucyI6ICJub3N1aWQsbm9leGVjLG5vZGV2IgoJfSwKCXsKCSAgICAidHlwZSI6ICJjZ3JvdXAiLAoJICAgICJzb3VyY2UiOiAiY2dyb3VwIiwKCSAgICAiZGVzdGluYXRpb24iOiAiL3N5cy9mcy9jZ3JvdXAiLAoJICAgICJvcHRpb25zIjogIm5vc3VpZCxub2V4ZWMsbm9kZXYscmVsYXRpbWUscm8iCgl9LAoJewoJICAgICJ0eXBlIjogImJpbmQiLAoJICAgICJzb3VyY2UiOiAiL3Zhci9saWIiLAoJICAgICJkZXN0aW5hdGlvbiI6ICIvdmFyL2xpYiIsCgkgICAgIm9wdGlvbnMiOiAicmJpbmQscncsbW9kZT03NTUiCgl9CiAgICBdLAogICAgImhvb2tzIjogewoJInByZXN0YXJ0IjogbnVsbCwKCSJwb3N0c3RvcCI6IG51bGwKICAgIH0sCiAgICAibGludXgiOiB7CgkidWlkTWFwcGluZ3MiOiBudWxsLAoJImdpZE1hcHBpbmdzIjogbnVsbCwKCSJybGltaXRzIjogbnVsbCwKCSJzeXNjdGwiOiBudWxsLAoJIm5hbWVzcGFjZXMiOiBbCgkgICAgewoJCSJ0eXBlIjogInBpZCIsCgkJInBhdGgiOiAiIgoJICAgIH0sCgkgICAgewoJCSJ0eXBlIjogImlwYyIsCgkJInBhdGgiOiAiIgoJICAgIH0sCgkgICAgewoJCSJ0eXBlIjogInV0cyIsCgkJInBhdGgiOiAiIgoJICAgIH0sCgkgICAgewoJCSJ0eXBlIjogIm1vdW50IiwKCQkicGF0aCI6ICIiCgkgICAgfQoJXSwKCSJjYXBhYmlsaXRpZXMiOiBbCgkgICAgIkFVRElUX1dSSVRFIiwKCSAgICAiS0lMTCIsCgkgICAgIk5FVF9CSU5EX1NFUlZJQ0UiCgldLAoJImRldmljZXMiOiBbCgkgICAgewoJCSJwYXRoIjogIi9kZXYvbnVsbCIsCgkJInR5cGUiOiA5OSwKCQkibWFqb3IiOiAxLAoJCSJtaW5vciI6IDMsCgkJInBlcm1pc3Npb25zIjogInJ3bSIsCgkJImZpbGVNb2RlIjogNDM4LAoJCSJ1aWQiOiAwLAoJCSJnaWQiOiAwCgkgICAgfSwKCSAgICB7CgkJInBhdGgiOiAiL2Rldi9yYW5kb20iLAoJCSJ0eXBlIjogOTksCgkJIm1ham9yIjogMSwKCQkibWlub3IiOiA4LAoJCSJwZXJtaXNzaW9ucyI6ICJyd20iLAoJCSJmaWxlTW9kZSI6IDQzOCwKCQkidWlkIjogMCwKCQkiZ2lkIjogMAoJICAgIH0sCgkgICAgewoJCSJwYXRoIjogIi9kZXYvZnVsbCIsCgkJInR5cGUiOiA5OSwKCQkibWFqb3IiOiAxLAoJCSJtaW5vciI6IDcsCgkJInBlcm1pc3Npb25zIjogInJ3bSIsCgkJImZpbGVNb2RlIjogNDM4LAoJCSJ1aWQiOiAwLAoJCSJnaWQiOiAwCgkgICAgfSwKCSAgICB7CgkJInBhdGgiOiAiL2Rldi90dHkiLAoJCSJ0eXBlIjogOTksCgkJIm1ham9yIjogNSwKCQkibWlub3IiOiAwLAoJCSJwZXJtaXNzaW9ucyI6ICJyd20iLAoJCSJmaWxlTW9kZSI6IDQzOCwKCQkidWlkIjogMCwKCQkiZ2lkIjogMAoJICAgIH0sCgkgICAgewoJCSJwYXRoIjogIi9kZXYvemVybyIsCgkJInR5cGUiOiA5OSwKCQkibWFqb3IiOiAxLAoJCSJtaW5vciI6IDUsCgkJInBlcm1pc3Npb25zIjogInJ3bSIsCgkJImZpbGVNb2RlIjogNDM4LAoJCSJ1aWQiOiAwLAoJCSJnaWQiOiAwCgkgICAgfSwKCSAgICB7CgkJInBhdGgiOiAiL2Rldi91cmFuZG9tIiwKCQkidHlwZSI6IDk5LAoJCSJtYWpvciI6IDEsCgkJIm1pbm9yIjogOSwKCQkicGVybWlzc2lvbnMiOiAicndtIiwKCQkiZmlsZU1vZGUiOiA0MzgsCgkJInVpZCI6IDAsCgkJImdpZCI6IDAKCSAgICB9CgldLAoJImFwcGFybW9yUHJvZmlsZSI6ICIiLAoJInNlbGludXhQcm9jZXNzTGFiZWwiOiAiIiwKCSJzZWNjb21wIjogewoJICAgICJkZWZhdWx0QWN0aW9uIjogIiIsCgkgICAgInN5c2NhbGxzIjogbnVsbAoJfSwKCSJyb290ZnNQcm9wYWdhdGlvbiI6ICIiCiAgICB9Cn0K

CMD ["/usr/bin/etcd-env.sh", "/usr/bin/etcd"]

The OCI and INSTALLUNIT labels are base64 encoded.

The etcd.service file looks like:

[Unit]
Description=Etcd Server
After=network.target

[Service]
ExecStart=/bin/runc --id etcd
Restart=on-failure
WorkingDirectory=$DEST

[Install]
WantedBy=multi-user.target

Where $DEST is replaced by atomic to point to the destination directory.
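
As an aside, here is a hypothetical Python sketch of how atomic might consume the two base64 labels (the get_label helper and the paths are assumptions for illustration, not the actual atomic API; where config.json lands follows the later discussion below):

import base64
import json
import subprocess

def get_label(image, name):
    """Read one label from `docker inspect` output."""
    out = subprocess.check_output(["docker", "inspect", image])
    labels = json.loads(out.decode("utf-8"))[0]["Config"]["Labels"] or {}
    return labels[name]

def install_from_labels(image, name, dest):
    # INSTALLUNIT: base64-encoded unit template; substitute $DEST before writing.
    unit = base64.b64decode(get_label(image, "INSTALLUNIT")).decode("utf-8")
    with open("/etc/systemd/system/%s.service" % name, "w") as f:
        f.write(unit.replace("$DEST", dest))
    # OCI: base64-encoded runc spec, written next to the checked-out rootfs.
    oci = base64.b64decode(get_label(image, "OCI")).decode("utf-8")
    with open("%s/config.json" % dest, "w") as f:
        f.write(oci)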

If we decide to template the .service file, we will need a way to specify dependencies too; in the flannel case, for example, it depends on etcd.

With OSTree we could move the container data to /rootfs/ and leave these two files under /.

Any comments?

@cgwalters
Member

Ah...rather than base64, why not stick this in the image in a well-known place? xdg-app has a directory called /exports (https://wiki.gnome.org/Projects/SandboxedApps), we could just use that too? So the systemd unit would be in /exports/usr/lib/systemd/system ?

@giuseppe
Member

That is a good idea. It also solves the problem of rolling back to the previous version and having the right versions of these two files.

@cgwalters
Member

One thing I think we need to do in this new design too is kill the concept of the UNINSTALL script. The system should automatically track what files are installed and understand how to remove them.

If we don't need to template files for install, this could be as easy as simply overlaying /exports onto the host FS? (So then /etc/systemd/system instead of /usr/lib?)

If we can't get away from having a per-host install script, then I think we should have a model where atomic install runs install.sh /var/tmp/container-install.XXXXXX, allowing the container to dump whatever it wants there, then we snapshot that file list into an OSTree commit itself, then use that commit to overlay the host.

Then when we want to uninstall, we just walk that tree and delete files that match it. (We wouldn't delete empty dirs, but oh well.)
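
A minimal Python sketch of that tracking idea (the manifest path and JSON format are invented here purely for illustration):

import json
import os

MANIFEST_DIR = "/var/spc"   # assumed location for the per-container manifests

def record_installed_files(name, files):
    """Snapshot the list of files an install dropped on the host."""
    with open(os.path.join(MANIFEST_DIR, name + ".files.json"), "w") as f:
        json.dump(sorted(files), f)

def uninstall(name):
    """Delete exactly the files we recorded at install time."""
    manifest = os.path.join(MANIFEST_DIR, name + ".files.json")
    with open(manifest) as f:
        for path in json.load(f):
            if os.path.lexists(path):
                os.remove(path)          # files only; empty dirs are left behind
    os.remove(manifest)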

@giuseppe
Member

I have changed it to use /exports in the container rootfs. I still used /exports/{service.template,config.json} as these two files need to be treated in a different way:

  1. service.template is used as a template and $DEST is replaced with the correct value.
  2. config.json needs to go under /var/spc/$IMAGE.0/.

Any file under /exports/rootfs/FOO/BAR will be copied as /FOO/BAR.
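
In Python, the handling just described might look roughly like this (an illustrative sketch only, not the actual patch; directory creation and error handling are omitted):

import os
import shutil

def process_exports(checkout, name):
    """Consume /exports from a container checked out at CHECKOUT (/var/spc/$IMAGE.0)."""
    exports = os.path.join(checkout, "rootfs/exports")

    # 1. service.template: substitute $DEST and install it as the unit file.
    with open(os.path.join(exports, "service.template")) as f:
        unit = f.read().replace("$DEST", checkout)
    with open("/etc/systemd/system/%s.service" % name, "w") as f:
        f.write(unit)

    # 2. config.json: the runc spec, placed under /var/spc/$IMAGE.0/.
    shutil.copy(os.path.join(exports, "config.json"), checkout)

    # Any /exports/rootfs/FOO/BAR is copied to /FOO/BAR on the host.
    mini_rootfs = os.path.join(exports, "rootfs")
    for dirpath, _dirs, files in os.walk(mini_rootfs):
        for fname in files:
            src = os.path.join(dirpath, fname)
            dst = "/" + os.path.relpath(src, mini_rootfs)
            shutil.copy2(src, dst)   # assumes the parent directory exists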

@jberkus
jberkus commented Feb 15, 2016

Can someone explain why flannel and etcd need to be running before Docker comes up?

@rhatdan
Member
rhatdan commented Feb 15, 2016

Docker uses these services for configuring its networks I believe.

@giuseppe
Member

yes, docker uses Flannel and Flannel needs etcd. CoreOS is using another instance of Docker (early-docker) to start these services.

@jberkus
jberkus commented Feb 15, 2016

So, how do people handle these on a pure-Docker-tools stack?

I'm asking these questions because I see mixing RunC and Docker as a major usability issue for our users.

@rhatdan
Member
rhatdan commented Feb 15, 2016

CoreOS is not crazy about using early-docker either. As long as we just establish these as services, I am not sure why our users should care.

I think we should be moving more towards running runc containers for production use cases, since docker has a lot of shortcomings when it comes to running production containers.

@jberkus
jberkus commented Feb 15, 2016

When users go to "docker ps" and can't see the etcd, flannel, etc. containers, it will be a support/doc/UX issue. It may be the least bad option we can offer, but it will be an issue nevertheless.

@rhatdan
Member
rhatdan commented Feb 15, 2016

Currently they don't see other containers (libvirt, nspawn, rkt, VMs) when they do docker ps. We are trying to fix all of this by adding machinectl support to docker and runc.

Then they would see all of these under the same tool chain.

@jberkus
jberkus commented Feb 15, 2016

That'll help, certainly.

Still means we should add a FAQ item. Hmmm, is there an Atomic Host FAQ?

@cgwalters
Member

When users go to "docker ps" and can't see the etcd, flannel, etc. containers, it will be a support/doc/UX issue.

On the other hand, they will be systemd units.

@jberkus
jberkus commented Feb 16, 2016

Right, that's why things will get much better when we can list docker containers as systemd machines. That gives the admin one place to see all containers. In the initial implementation, some containers will only be visible with machinectl, and some will only be visible with docker ps.

@rhatdan
Member
rhatdan commented Feb 16, 2016

Well when docker-1.10 ships we plan on having all docker/runc/rkt/nspawn/libvirt-lxc/lxc containers as well as VMs listed under machinectl.

@giuseppe
Member

these are the containers I used for Flannel and Etcd: https://github.com/giuseppe/flannel-etcd-containerized/tree/wip-runc/

This requires a new Atomic Host image which includes runc.

Once the two containers etcd and flannel are built, it is possible to run them as:

# atomic install-spc etcd
# atomic install-spc flannel

The operations uninstall-spc and upgrade-spc are also supported.

For now the upgrade does not check if the files have changed; this will be possible once it is integrated with OSTree.

@rhatdan
Member
rhatdan commented Feb 18, 2016

Why can't we do this all with the standard atomic install/atomic run commands? I am not sure new commands are necessary to add this functionality.
Or could we do atomic install --spc or atomic install --ostree rather than add new commands?

@giuseppe
Member

I am not sure we can do it just with the standard install/uninstall commands, unless we require containers to duplicate a lot of logic or use an external program to do it.

We can move the new commands to be --spc for install/uninstall, but I am unsure about spc-upgrade. Should it be something like install --spc --upgrade?

I have a few doubts:

  1. Are we going to use runc enough on Atomic to justify another dependency, or should we use systemd-nspawn instead? (This information is in the container image anyway and won't change the Atomic patches.)
  2. Are there cases where we would like to run multiple containers from the same image? I am mostly wondering whether the /var/spc/$NAME.$DEPLOYMENT directory structure is enough or whether we need to consider something else. We could probably add another option to install-spc/upgrade-spc to specify the name of the container so that the same image could be used multiple times:

atomic install-spc --name=flannel1 flannel
atomic install-spc --name=flannel2 flannel

and have two directories:
/var/spc/flannel1.$DEPLOYMENT
/var/spc/flannel2.$DEPLOYMENT

@rhatdan
Member
rhatdan commented Feb 18, 2016

We currently allow multiple containers from the same image, using the --name flag.

We do have an atomic update command already.

@mrunalp @baude You would probably be interested in this.

I do think runc should become more prominent. We want to support the OCI specification.

@giuseppe
Member

ok thanks, then we can move all the new commands to be an --spc option for install/uninstall/update.

The issue I see with --name is how we track files installed on the host, as multiple containers from the same image will copy the same files, and uninstalling any one of them will remove those files.

@cgwalters: should we drop the feature to copy files from /exports/rootfs to the host and allow only files that we know how to handle like /exports/service.template for /usr/local/lib/systemd/system/$NAME.service and /exports/config.json for the runc configuration?

@giuseppe
Member

I pushed another branch that moves the new command to be a --spc option for install, uninstall and update:

https://github.com/giuseppe/atomic/tree/giuseppe/spc-option

I also made some other changes. Now --name is honored, and update --force will upgrade all the containers using the specified image.

@baude
Collaborator
baude commented Feb 19, 2016

I was just talking with @rhatdan and he suggested we sit down next week and hammer this out because we also have a similar idea for atomic (we think).

@giuseppe
Member

Sure, that sounds like a good plan. What does your idea consist of?

@rhatdan
Member
rhatdan commented Feb 19, 2016

Brent will be here next week and he wanted to talk about using the atomic tool for generating systemd unit files for running containers, which is somewhat similar to what you are doing. We have been talking about how you would run containers outside of docker (runc, nspawn, rkt) and how you could set up ordering. I believe this is the job of systemd, so how can we use the atomic tool to make it easier for admins to configure these types of containers?

I think having a bluejeans meeting with Colin and me while Brent is here would be good, to iron out what this should look like.

@jlebon
Member
jlebon commented Feb 23, 2016

@rhatdan I didn't fully understand what you said in regards to splitting out the rootfs preparation (ostree) and the actual running of the container (runc/nspawn/rkt?). The atomic tool would still be the one taking care of the location of the rootfs and would create a systemd service file with the appropriate location.

So e.g. you could have atomic --ostree --runtime=runc and atomic --ostree --runtime=nspawn. Then you have classes to handle each type of runtime.
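
To make that concrete, a rough Python sketch of the per-runtime classes (the class and helper names are invented for the example, not the real atomic code):

class RuncRuntime(object):
    def exec_start(self, name, checkout):
        # runc reads config.json from its working directory, i.e. the checkout.
        return "ExecStart=/bin/runc --id %s" % name

class NspawnRuntime(object):
    def exec_start(self, name, checkout):
        return "ExecStart=/usr/bin/systemd-nspawn -D %s/rootfs" % checkout

RUNTIMES = {"runc": RuncRuntime, "nspawn": NspawnRuntime}

def make_unit(runtime, name, checkout):
    """Render a unit file for --runtime=runc or --runtime=nspawn."""
    runner = RUNTIMES[runtime]()
    return ("[Unit]\nDescription=%s system container\n\n"
            "[Service]\n%s\nWorkingDirectory=%s\nRestart=on-failure\n\n"
            "[Install]\nWantedBy=multi-user.target\n"
            % (name, runner.exec_start(name, checkout), checkout))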

@giuseppe
Member

@jlebon at the moment the only dependency on the runc runtime is that the JSON configuration file, if present under /exports, is copied to the proper place; atomic doesn't use it directly. Replacing it with nspawn should not be difficult.

@jlebon
Member
jlebon commented Feb 23, 2016

@giuseppe That's awesome!

Re. /exports, instead of special-casing specific filenames, why not have it be a mini-rootfs as suggested earlier, but still provide a mechanism for templating (e.g. with a .template extension, or maybe a /exports.json file describing which files are templates)? Then you could solve more generic issues. E.g. what if the service needs more than just one unit file (e.g. a service unit and a timer unit)?
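
One possible reading of that suggestion, sketched in Python (the .template convention and the $VARIABLE substitution syntax are assumptions, matching the $DEST substitution used so far):

import os
import shutil

def process_exports_tree(exports_dir, host_root, variables):
    """Copy /exports as a mini-rootfs, rendering any *.template file."""
    for dirpath, _dirs, files in os.walk(exports_dir):
        for fname in files:
            src = os.path.join(dirpath, fname)
            rel = os.path.relpath(src, exports_dir)
            if fname.endswith(".template"):
                with open(src) as f:
                    text = f.read()
                for key, value in variables.items():
                    text = text.replace("$" + key, value)
                dst = os.path.join(host_root, rel[:-len(".template")])
                with open(dst, "w") as f:       # parent dirs assumed to exist
                    f.write(text)
            else:
                shutil.copy2(src, os.path.join(host_root, rel))

This would let an image ship, say, both a service unit template and a timer unit template without atomic having to special-case either file.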

@rhatdan
Member
rhatdan commented Feb 23, 2016

I am saying that we standardize on rootfs being something like

/var/lib/containers/atomic/NAME/rootfs

Whether I chroot, nspawn, or runc, I just use this path.

@rhatdan
Member
rhatdan commented Feb 23, 2016

I would say the metadata should go in /var/lib/containers/atomic/NAME/app.json

@giuseppe
Member

The problem I encountered with exporting files to the host is what happens when using multiple containers from the same image. Let's say an image exports /usr/local/foo and you run two instances of it; then the same set of files will be copied to the host twice.
Should these files be overwritten or renamed somehow? What happens when you remove one instance and leave the other running?
IMHO, at least until there is no way to do otherwise, we should not offer the possibility of copying arbitrary files to the host, but export only a controlled set of files.

@rhatdan
Member
rhatdan commented Feb 24, 2016

I thought we were using ostree to make sure that there was only one copy of an image? I see no reason for copying the image again. Tooling should realize this and just use the existing image.

@giuseppe
Member

Correct, we use OSTree to do the checkout of an image, so only hard links will be created for those.

Let's say you create two etcd containers:

# atomic install --spc --name=etcd-foo etcd
# atomic install --spc --name=etcd-bar etcd

These containers will be checked out at:

/var/lib/containers/atomic/etcd-foo.0/rootfs
/var/lib/containers/atomic/etcd-bar.0/rootfs

And these symlinks:

/var/lib/containers/atomic/etcd-foo -> /var/lib/containers/atomic/etcd-foo.0
/var/lib/containers/atomic/etcd-bar -> /var/lib/containers/atomic/etcd-bar.0

I am not using a hardlink for installing the systemd unit file, which is the only file at the moment that is not under /var/lib/containers/atomic/$NAME.0; I cannot do that since I replace some values in the file itself.

If we decide to copy arbitrary files from the container's rootfs/exports/ to /, then tracking these files is going to be a problem (was it the container etcd-foo or etcd-bar that installed the file /usr/local/bar?).
For the systemd unit file there was an easy solution, as atomic knows how to deal with it and renames it to $NAME.service, but for other kinds of files I don't see the advantage of doing it. Is there going to be any container that needs other kinds of files installed on the host?

@rhatdan
Member
rhatdan commented Feb 25, 2016

I would say anything a container installs on the host needs to be namespaced based on the name of the container; if the container is just flinging random stuff onto the host, then it can only be run once or it is broken. I don't see how we fix this issue otherwise.

@giuseppe
Member

I've pushed a patch that allows atomic to contact a Docker v2 registry directly to access the layers; the code is still very basic but functional. Accessing the registry directly removes the need to pull the image first and then save it from Docker.

Since I am using a v2 registry, I adapted the code to use the blob checksum as the branch name in OSTree: dockerimg-$blobSum. Now dockerimg-$APP points to an empty OSTree commit which has the image manifest as metadata. This information is needed for doing the checkout of a container.
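
Sketched in Python (simplified; no registry authentication, and the branch-name escaping, the metadata key, and the empty_dir argument are assumptions on top of what is described above):

import json
import subprocess

import requests

def fetch_manifest(registry, name, tag):
    # Docker Registry HTTP API v2; schema1 manifests list layers under fsLayers.
    resp = requests.get("https://%s/v2/%s/manifests/%s" % (registry, name, tag))
    resp.raise_for_status()
    return resp.text

def import_image(repo, registry, name, tag, empty_dir):
    manifest_text = fetch_manifest(registry, name, tag)
    for layer in json.loads(manifest_text)["fsLayers"]:
        branch = "dockerimg-" + layer["blobSum"].replace(":", "-")
        # ...fetch the blob and `ostree commit` it under `branch` if not present...
    # dockerimg-$APP: an otherwise empty commit carrying the manifest as metadata.
    subprocess.check_call(["ostree", "--repo=" + repo, "commit",
                           "--branch=dockerimg-" + name.replace("/", "-"),
                           "--tree=dir=" + empty_dir,
                           "--add-metadata-string=docker.manifest=" + manifest_text])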

@rhatdan
Member
rhatdan commented Feb 25, 2016

@giuseppe We have a tool called skopeo written by @runcom to do this. Could you see if his tool would satisfy your needs?

@runcom
Member
runcom commented Feb 25, 2016

If I understand correctly, @giuseppe needs to know the hash of each layer, which is in the image manifest itself (under FSLayers IIRC).
If my understanding is correct, skopeo doesn't support this yet, but I see it as a good enhancement to the tool. @giuseppe, let me know if you need this; it should be trivial to add to skopeo.

for reference https://github.com/runcom/skopeo

@giuseppe
Member

yes, exactly, but I also need to retrieve the layers afterwards.

For storing an image in OSTree: I retrieve the manifest, and each layer that is not already imported is fetched and imported into the OSTree repo.

For the checkout: I go through the manifest again (which at this point is stored in an OSTree commit) and extract each layer into the container rootfs.
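
For the checkout side, a simplified Python sketch (how the manifest metadata is read back and how the layer branches are named are assumptions consistent with the description above):

import json
import subprocess

def checkout_container(repo, app, rootfs):
    # Read the manifest stored as metadata on the dockerimg-$APP commit.
    out = subprocess.check_output(
        ["ostree", "--repo=" + repo, "show",
         "--print-metadata-key=docker.manifest", "dockerimg-" + app])
    manifest = json.loads(out.decode("utf-8").strip().strip("'"))

    # Check out each layer branch on top of the same rootfs; --union merges
    # with whatever is already there, so apply the base layer first.
    for layer in reversed(manifest["fsLayers"]):
        branch = "dockerimg-" + layer["blobSum"].replace(":", "-")
        subprocess.check_call(["ostree", "--repo=" + repo, "checkout",
                               "--union", branch, rootfs])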

I didn't know about skopeo, thanks for pointing me at it. Is it going to be integrated with atomic somehow? The main reason I am proposing the registry.py class in Atomic instead of using an external tool (like docker-fetch) was to avoid mixing Python and Go code.

@runcom
Member
runcom commented Feb 25, 2016

I didn't know about skopeo, thanks for pointing me at it. Is it going to be integrated with atomic somehow? The main reason I am proposing the registry.py class in Atomic instead of using an external tool (like docker-fetch) was to avoid mixing Python and Go code.

It will be shipped with atomic - also if you need to download layers you can use https://github.com/runcom/cifetch (which is similar to docker-fetch but it supports v2 registries)

WRT mixing Python and Go: I'm not sure; Go binaries tend to be small and reusable. Anyway, it's up to you. I can modify cifetch to have a flag to expose layers and another to download them, since I think you're just interested in registry v2. How does this sound?

@giuseppe
Member

If it is going to be shipped with atomic, then it probably won't make sense to duplicate that functionality here. I think it still makes sense to store the entire manifest in the OSTree repository, so I can use skopeo to retrieve it and cifetch to fetch the new layers. What do you think?

@runcom
Member
runcom commented Feb 25, 2016

If it is going to be shipped with atomic, then it probably won't make sense to duplicate that functionality here. I think it still makes sense to store the entire manifest in the OSTree repository, so I can use skopeo to retrieve it and cifetch to fetch the new layers. What do you think?

Totally fine with me. Note that cifetch only supports v2 registries, but I guess that's fine for you.

@runcom
Member
runcom commented Feb 25, 2016

Also note cifetch isn't in the Fedora RPMs yet (I will add it eventually). @giuseppe, please feel free to bother me anytime about bugs in skopeo and cifetch (they're still relatively new).

@rhatdan
Member
rhatdan commented Feb 25, 2016

Let's hold off on putting this in Fedora; we could just add it to the atomic package. Not sure we want to support this externally. I also want to talk to @vbatts about it to figure out how we can use this.

@giuseppe
Member

Can we set up another meeting this week?

@rhatdan
Member
rhatdan commented Feb 29, 2016

Sure, I am available at 9:00 AM EST every day this week.

@cgwalters
Member

@runcom Have you talked with @vbatts at all? He was actively working on v2 bits in https://github.com/vbatts/docker-utils/commits/registry_v2_support I saw.

@runcom
Member
runcom commented Feb 29, 2016

@cgwalters we synced up earlier today. cifetch doesn't expose a docker-load'able tar; it just fetches raw blobs from the registry. We could probably merge our libraries (but we still need to finish talking about this).

@cgwalters
Member

Note that for this, the fetch tool doesn't need to support docker load directly; the goal is to import into OSTree. I'd like to support export from that, of course. It's quite easy to do save, at least for v1, but I haven't looked at v2.

@giuseppe giuseppe referenced this issue in runcom/skopeo Mar 7, 2016
Closed

[WIP] Skopeo fetcher #8

@cgwalters cgwalters changed the title from thoughts on a new SPC mechanism to "System Containers" SPC mechanism design Mar 9, 2016
@giuseppe
Member

I created a small video on the current status of the system containers:

https://asciinema.org/a/4ibfia1brecr473l3337y44pr

@giuseppe
Member
giuseppe commented Apr 1, 2016

I've done some refactoring; I think it is OK to start the review process and agree on the interface:

#334

@rhatdan rhatdan changed the title from "System Containers" SPC mechanism design to "System Containers" mechanism design Apr 4, 2016
@rhatdan
Member
rhatdan commented Oct 11, 2016

Cleaning up Atomic issues.
@cgwalters @giuseppe Can we close this issue?

@giuseppe
Member

I think this issue can be closed.

@rhatdan rhatdan closed this Oct 11, 2016