
bundle: filesystem metadata format #11

Closed
philips opened this issue Jun 25, 2015 · 30 comments


@philips
Contributor

philips commented Jun 25, 2015

@stevvooe and I caught up in person about our digest discussion and the need to serialize file-system metadata. If you want to read my attempt it is found here: #5 (comment)

Problem: a rootfs for a container bundle sitting on-disk may not reflect the exact intended state of the bundle when it was copied to its current location. Possible causes might include: running on filesystems with varying levels of metadata support (nfs w/o xattrs), accidental property changes (chown -R), or purposeful changes (xattrs added to enforce local policies).

Obviously the files' contents will be identical, so that isn't a concern.

Solution: If we hope to create a stable digest of the bundle in the face of these likely scenarios we should store the intended filesystem metadata into a file itself. This can be done in a variety of ways and this issue is a place to discuss pros/cons. As a piece of prior-art @vbatts has implemented https://github.com/vbatts/tar-split and we have the linux package managers with tools to verify and restore filesystem metadata from a database with rpm -a --setperms and rpm -V.
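To make the problem concrete, the metadata in question is the per-file state such a manifest would have to record. A minimal Go sketch of collecting it (illustrative only; the `entry` struct is a stand-in, not a proposed format, and the `syscall.Stat_t` cast is Linux-specific):

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"syscall"
)

// entry holds the per-file metadata this thread proposes recording.
type entry struct {
	path string
	mode fs.FileMode
	uid  uint32
	gid  uint32
}

// collect walks a bundle root and records path, mode, and ownership
// for every filesystem object under it.
func collect(root string) ([]entry, error) {
	var out []entry
	err := filepath.Walk(root, func(p string, info fs.FileInfo, err error) error {
		if err != nil {
			return err
		}
		e := entry{path: p, mode: info.Mode()}
		// On Linux, uid/gid come from the underlying stat structure.
		if st, ok := info.Sys().(*syscall.Stat_t); ok {
			e.uid, e.gid = st.Uid, st.Gid
		}
		out = append(out, e)
		return nil
	})
	return out, err
}

func main() {
	entries, err := collect(".")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		fmt.Printf("%s %s %d:%d\n", e.mode, e.path, e.uid, e.gid)
	}
}
```

Any of these fields can silently change or be dropped on the way to disk, which is exactly the drift described above.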

@stevvooe

After some thought, we need a format that does the following:

  1. A manifest enumerates files in a bundle.
    1. Provides a content hash.
    2. Provides a path.
    3. Provides a file type.
    4. Provides standard file mode.
    5. Provides xattr.
    6. Provides an extension mechanism.
      1. Geared towards multiple OS support.
      2. Not infinitely extendable, but it should be easy to add new fields.
  2. Bundle contents attributes can be reset to contents of file manifest.
    1. Bundle is scanned and any differences from manifest are rectified.
    2. Unames/Gnames/Uid/Gid can be mapped during "reset".
  3. Bundle contents can be verified against manifest.
    1. Content hash can be checked.
    2. Attributes can be checked.
      1. Certain attributes can be checked against machine-local mapping (uid/gid, etc.).
    3. Manifest can optionally be signed.

Meeting the above three use cases with this format puts us above the test for Cost problems with the ideas from #5. Requirement 2 above has a common use case, by avoiding placing unreasonable requirements on transports. Requirement 3 above gives us the functionality of #5 with the extra benefits of Requirement 2.

@shykes @philips @crosbymichael

@philips
Contributor Author

philips commented Jun 30, 2015

@stevvooe I am having a hard time parsing this sentence: "Meeting the above three use cases with this format puts us above the test for Cost problems with the ideas from #5."

I agree with all three needs overall despite my confusion above.

@stevvooe

@philips That is a poor sentence where I've shoved in a lot of meaning.

What I'm saying is that, given the goals of #5 (cryptographic verification), the cost of scanning a bundle is not warranted. Given these new goals, the cost is warranted and it makes bundles portable over different transports. Basically, we have a solid reason for filesystem scanning that could also be used as a signable target.

@philips
Contributor Author

philips commented Jun 30, 2015

@stevvooe Ack. So the next step is a .proto file?

@stevvooe

No better way to get started than with a straw man:

syntax = "proto3";

package ocf.bundle;

// BundleManifest specifies the entries in a container bundle, keyed and
// sorted by path.
message BundleManifest {

    message Entry {
        // path specifies the path from the bundle root
        string path = 1;

        // NOTE(stevvooe): Need to define clear precedence among user/group/uid/gid.

        string user = 2;
        string group = 3;

        uint32 uid = 4;
        uint32 gid = 5;

        // mode defines the file mode and permissions. We've used the same
        // bit-packing from Go's os package,
        // http://golang.org/pkg/os/#FileMode, since they've done the work of
        // creating a cross-platform layout.
        uint32 mode = 6;

        // NOTE(stevvooe): Beyond here, we start defining type specific fields.

        // digest specifies the content digest of the target file. Only valid for
        // regular files. The strings are formatted as <alg>:<digest hex bytes>.
        // The digests are added in order of precedence favored by the 
        // generating party.
        repeated string digest = 7;

        // target defines the target of a hard or soft link, relative to the
        // bundle root.
        string target = 8;

        // specifies major and minor device numbers for character and block devices.
        string major = 9;
        string minor = 10;

        message XAttr {
            string name = 1;
            string value = 2;
        }

        // xattr provides storage for extended attributes for the target resource.
        repeated XAttr xattr = 11;

        // AlternateDataStream represents NTFS Alternate Data Streams for 
        // the targeted resource.
        message AlternateDataStream {
            string name = 1;
            bytes value = 2;
        }

        // ads stores one or more alternate data streams for the given resource.
        repeated AlternateDataStream ads = 12;
    }

    repeated Entry entries = 1;
}

Changes:

  • digest field is now repeated
  • Added description for AlternateDataStream type, formerly ADS
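For reference, the `<alg>:<digest hex bytes>` strings above are straightforward to produce with standard tooling; a small sketch (sha256 chosen arbitrarily as the example algorithm):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// digestString renders a hash in the "<alg>:<hex>" form used by the
// straw-man manifest. Carrying the algorithm name alongside the bytes
// is what lets the repeated digest field deprecate old hashes and add
// new ones over time.
func digestString(data []byte) string {
	sum := sha256.Sum256(data)
	return fmt.Sprintf("sha256:%x", sum)
}

func main() {
	fmt.Println(digestString([]byte("hello")))
	// → sha256:2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
}
```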

@philips philips added this to the draft-next milestone Jun 30, 2015
@philips
Contributor Author

philips commented Jul 1, 2015

@stevvooe looks pretty good. Two questions:

  • What is an ADS?
  • Should digest be repeated so we can deprecate old hashes and upgrade to new ones over time?

@stevvooe

stevvooe commented Jul 1, 2015

What is an ADS?

This is the NTFS equivalent of extended attributes (sort of), known as "Alternate Data Streams". The semantics are slightly different, so I've pulled it out into a separate type. Notice the use of type bytes for the value, instead of string. I'd like to get some feedback from a Windows expert to see if this is sufficient.

Should digest be repeated so we can deprecate old hashes and upgrade to new ones over time?

In this case, I don't see why not.

I've updated the comment in-line.

@stevvooe

stevvooe commented Jul 1, 2015

@philips We may also want to define an exclusion operator in the manifest specification, since it operates at the bundle level.

enum Op {
    // EXCLUDE specifies that the matched files should be explicitly excluded
    // from the manifest. They may still be part of the bundle.
    EXCLUDE = 0;

    // INCLUDE specifies that the resource should be included in the manifest.
    // This has the effect of "pinning" the resource. For example, if the resource
    // is later matched by an exclude statement, it will still be included.
    INCLUDE = 1;
}

message PathSpec {
    Op operation = 1 [default=EXCLUDE];

    // path specifies a path relative to the bundle root. If the path is a
    // directory, the entire tree will be excluded.
    string path = 2;

    // pattern specifies a glob to match resources and apply the operation.
    string pattern = 3;
}

// path specifies the bundle paths covered by the manifest. Specifications are
// ordered by precedence. For a given path, only the first matching
// specification applies. During processing, it is important to fully process
// all resources, even if a directory is excluded, since child resources may
// first match an inclusion.
repeated PathSpec pathSpec;

Benefits:

  • Very explicit for recalculation
  • Allows us to catch files that may have shown up and don't belong

Cost:

  • More processing time
  • Subtle include/exclusion behavior to minimally specify file sets
  • Can contribute to instability of the manifest file -- equivalent manifests may have different specs

Another possibility is to allow this to be specified on the command line when first building the manifest. That doesn't allow us to catch "extra" files, but that may not be that important and likely doesn't warrant the extra complexity.
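The first-match precedence described above is cheap to implement; a sketch, using prefix and `path.Match` glob matching as stand-ins for whatever pattern syntax is eventually chosen:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

type Op int

const (
	Exclude Op = iota
	Include
)

// PathSpec mirrors the proposed message: an operation plus either an
// exact path (a directory covers its whole tree) or a glob pattern.
type PathSpec struct {
	Operation Op
	Path      string
	Pattern   string
}

// included applies the specs in order; the first match wins. Paths
// matched by no spec default to inclusion in the manifest.
func included(specs []PathSpec, p string) bool {
	for _, s := range specs {
		matched := false
		if s.Path != "" && (p == s.Path || strings.HasPrefix(p, s.Path+"/")) {
			matched = true
		}
		if !matched && s.Pattern != "" {
			if ok, _ := path.Match(s.Pattern, p); ok {
				matched = true
			}
		}
		if matched {
			return s.Operation == Include
		}
	}
	return true
}

func main() {
	specs := []PathSpec{
		{Operation: Include, Pattern: "tmp/keep-*"}, // pinned despite later exclude
		{Operation: Exclude, Path: "tmp"},
	}
	fmt.Println(included(specs, "tmp/keep-me")) // true
	fmt.Println(included(specs, "tmp/scratch")) // false
}
```

Note this is why every resource has to be visited even under an excluded directory: `tmp/keep-me` is only caught because the inclusion is evaluated before the tree-level exclude.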

@bitshark

bitshark commented Jul 3, 2015

Okay, so after reading this, I think this makes a bit more sense to me... I think the area where I'm having trouble is understanding the scope vis-à-vis the goals...

I think what I've read so far is well thought out and reasoned, so props to everyone, heh. Forgive me ahead of time, but I wanted to share some thoughts. Discard them if you don't think they are useful.

There seems to be a basic consensus -- in the limited info I've read on the container crypto goals -- that everyone would probably be on board with the following ideas, in principle, certainly as options. These are labeled as the 'Basic Themes' below: areas where there's agreement.

Basic Themes

  • Containers may need to be secured against A-MITM in-transit (integrity)
  • Containers may need to be secured against A-MITM during distribution (integrity)
  • We need a way of validating and identifying who is the author of a container (authentication, non-repudiation, integrity)
  • Containers do NOT need their own transport layer, since we are assuming TLS / OpenSSH will take care of that.
  • <--- Additional reasons go here --->

Just as general thought, first I'm going to put these here just as problems I have run into myself with design of cryptography... You guys may know this already but writing this down helps me organize my thoughts.

I'll be back with specifics tomorrow or Saturday.

Statements in general I've found true in crypto engineering. May be useful in this context.

  • Never reinvent the wheel in crypto -- this is the 'once upon a time' for systems that get owned
  • If the full description & implementation of yr system can withstand the scrutiny of a 16-year-old Norwegian hacker named Jon, such that it cannot be cracked in 8 lines of Perl, then you are off to a decent start. (unlike DeCSS)
  • Always aim for zero-knowledge even though it's near impossible to do right... The line of thinking provides insight regardless.
  • Never put new / novel cryptography and/or standards directly into production -- if necessary , only do so if it's backed by scientific research and a rock-solid implementation
  • Don't do an implementation from scratch -- always use an open-source well-tested library like NaCl, PyCrypto, etc.
  • Side channel attacks are not theoretical (Heartbleed, WEP, Lucky13, etc). Leaky system borders are where side channels live.
  • Padding oracle attacks (Lucky13, POODLE) came from crappy solutions in the SSL/TLS RFCs for how to authenticate packets... Do not repeat, heh.
  • Work at the highest level possible, in primitives/concepts rather than in methods
  • Use nonces, timestamps, & digital signatures wherever possible to avoid replay attacks
  • Correct forward operation is Compress, Encrypt, then MAC/Authenticate (never the opposite)... The correct reverse operation is MAC/Authenticate, Decrypt, then Decompress. ("Cryptographic Doom Principle" for side channels)
    http://www.thoughtcrime.org/blog/the-cryptographic-doom-principle/
  • Study existing protocols... always adapt an existing format or protocol instead of writing one from scratch
  • Scope creep in crypto can be brutal, it goes on forever
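The encrypt-then-MAC ordering above can be made concrete with a short sketch. This illustrates the principle only (authenticate before touching anything attacker-controlled), not a proposal for the container format, and the key handling is deliberately naive:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"errors"
	"fmt"
)

// seal appends an HMAC-SHA256 tag to an (already encrypted) payload:
// the "Encrypt, then MAC" order from the list above.
func seal(key, ciphertext []byte) []byte {
	m := hmac.New(sha256.New, key)
	m.Write(ciphertext)
	return append(ciphertext, m.Sum(nil)...)
}

// open verifies the tag FIRST and only then hands back the ciphertext
// for decryption, so no unauthenticated bytes are ever processed
// (the "Cryptographic Doom Principle").
func open(key, sealed []byte) ([]byte, error) {
	if len(sealed) < sha256.Size {
		return nil, errors.New("message too short")
	}
	ct, tag := sealed[:len(sealed)-sha256.Size], sealed[len(sealed)-sha256.Size:]
	m := hmac.New(sha256.New, key)
	m.Write(ct)
	if !hmac.Equal(tag, m.Sum(nil)) {
		return nil, errors.New("authentication failed")
	}
	return ct, nil
}

func main() {
	key := []byte("example key")
	msg := seal(key, []byte("ciphertext bytes"))
	ct, err := open(key, msg)
	fmt.Println(string(ct), err)
}
```

`hmac.Equal` is constant-time, which is the other half of keeping this free of side channels.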

- Define our threat model before we do anything else. Who are we trying to protect against?

What are we trying to protect (data, secrets, access, etc.)? What is our tolerance in terms of a threat vs. its complexity/likelihood?

I've got a series of specific questions in regards to the proposed standards here (which I think are pretty awesome)... but I'm tired at the moment -- I'll post the specific questions / comments tomorrow. I hope this helps in the meantime... This is my take on how to think about crypto engineering.

General Questions:

  • What is our threat model? Who is the attacker? What are their capabilities?
  • What does a passive attack look like (if one is possible here)?
  • What form does an active attack take? What is the weakest link in the existing thinking?
  • Do we need Digests? What threat do they address and how?
  • Do we need Digital Sigs / MACs? How are Digital Sigs keyed? What threat do they address?
  • Do we have a line of thinking for the key distribution model? WoT / CA etc. Here there's often no choice but to piggyback on an existing implementation like OpenPGP (like Ubuntu PPAs).
  • Based on the threat model, what are the appropriate countermeasures (aka cryptographic primitives)?
  • Do we need to support encryption as a use-case, or can we cut it with just hashes / digital sigs?

Specific Questions (assuming key distribution is solved, heh... like with OpenPGP):

  • Should we deploy Digital Signatures to secure 'single-file' compressed container images (like rkt does now)?
  • Do we know if containers will always (or at least almost all of the time) be distributed on the Internets as some sort of compressed single-file archive (exact format is irrelevant)?
  • If containers are distributed as a single-file archive, and these are secured properly with digital sigs and/or GPG etc, why is it necessary (or beneficial) to secure the data-at-rest in an uncompressed container?
  • Has a container format been finalized or standardized beyond the json manifest? What is certain about the container archive format, and what is up in the air? What will be up to the implementation, and what is defined by the spec?
  • What are the advantages of securing data-at-rest inside the container vs simply securing a container archive image file? Do we gain security here? If so, what specific attack does this prevent?
  • What are the pitfalls in scope creep and/or other impacts of securing data beyond a single-file container archive?
  • Are there any compromise solutions that lie between these two ideas?
  • What is the threat model we are considering?
  • Is the threat someone tricking me into downloading a malicious container?
  • Or does someone poison the ARP table at a starbucks while I am downloading the container image, such that I download their container image?
  • Are we protecting against a 'runaway app' inside a container? A malicious actor on the host system which has done privilege escalation? Industrial espionage by ceiling cat?
  • What sort of primitives do we need to achieve stated level of security?
  • What can we re-use to minimize the effort and make it easy (or certainly at least straightforward) for others to implement this standard?
  • What can we re-use to avoid implementation , testing, and possible exploit headaches?

BTW these questions aren't all meant to have answers, obviously. They are more like engineering food for thought.

Anyway -- thanks, good work, and good luck, gentlemen. Looking forward to writing about the details tomorrow if time permits, as you all have some really good ideas here, and I'm sure you'll sort this all out.

@vbatts
Member

vbatts commented Aug 13, 2015

this fileset would be a binary packed format, like CrAU?

@stevvooe

stevvooe commented Sep 2, 2015

@vbatts https://github.com/stevvooe/continuity has been opened up to continue this research.

@duglin
Contributor

duglin commented Jan 13, 2016

Probably related to #302. Will need to be considered as part of that.

@cgwalters

I think the https://github.com/GNOME/ostree format has a lot of advantages. It was designed from the start to be checksummed. If implementing anything else, at least study it.

@cgwalters

For example:

  • It intentionally does not include device files, because why would you have devices in container images?
  • It doesn't include timestamps per file, because immutable containers don't need them. (And if you do need timestamps, just do what git does and derive them from the commit object timestamp).
  • xattrs are part of the per-file checksum (Although I think container images shouldn't include xattrs, we should drop setuid binaries and file caps for more secure containers)

@stevvooe

@cgwalters We have been researching a number of approaches while working on https://github.com/stevvooe/continuity. The big difference is that continuity does not prescribe a distribution format while keeping the metadata consistent across transports.

It intentionally does not include device files, because why would you have devices in container images?

We've found that having an opinion here will work the system into odd chicken-egg problems. For example, if we rely on runc to create a device, how do we specify the ownership parameters in the archive format? We'd have to call into runc to create the devices, then call back out to the archiver to apply the metadata, then back into runc for runtime.

There are also other filesystem objects, such as sockets and named pipes, that may need to be serialized when migrating a process.

It doesn't include timestamps per file, because immutable containers don't need them.

We've gone back and forth on this requirement. The main issue here is that if you want stable regeneration, you cannot have timestamps in the metadata. However, let's say you want to pause a compilation process mid-build and then resume it on another node. Modification times are very important here. When you start examining this, there are a number of applications that would behave in odd ways when all of the timestamps are from the extraction time.

Mostly, we can obviate this need by not trying to regenerate an expanded artifact. IMHO, it imposes challenging requirements on the transport format that don't ultimately serve the user while introducing security problems in the pursuit of hash stability (see: tarsum).

(And if you do need timestamps, just do what git does and derive them from the commit object timestamp).

Interesting. I did not know this. Very cool!

xattrs are part of the per-file checksum (Although I think container images shouldn't include xattrs, we should drop setuid binaries and file caps for more secure containers)

We have this in continuity to some degree. There are lots of applications that cannot work correctly without xattrs, in addition to setups that require setuid.

In the past few weeks of development and experimentation, we've actually found the right model is to have continuity collect as much information as possible, then provide tools to selectively apply metadata and verify the on disk data.

@cgwalters

On Fri, Feb 12, 2016, at 03:00 PM, Stephen Day wrote:

We've found that having an opinion here will work the system into odd
chicken-egg problems. For example, if we rely on runc to create a
device, how do we specify the ownership parameters in the archive
format? We'd have to call into runc to create the devices, then call
back out to the archiver to apply the metadata, then back into runc
for runtime.

Any non-privileged container should only see the "API" devices
(/dev/null etc.)  Any privileged container is, well, privileged and
can create the device nodes itself.  Why would you ship pre-created
device nodes in an image?

There are also other filesystem objects, such as sockets and named
pipes, that may need to be serialized when migrating a process.

Migration is data, not images.  Use tar or whatever for that.  And
data should be cleanly separated in storage from the image.

It doesn't include timestamps per file, because immutable containers
don't need them.

We've gone back and forth on this requirement. The main issue here is
that if you want stable regeneration, you cannot have timestamps in
the metadata. However, let's say you want to pause a compilation
process mid-build and then resume it on another node. Modification
times are very important here. When you start examining this, there
are a number of applications that would behave in odd ways when all
of the timestamps are from the extraction time.

Again, that's a data case, not immutable images.  I think using
container images as a backup format doesn't make sense.  A vast
amount of backup software already exists.  Yes, one needs to cleanly
separate code from data, but that's a fundamental requirement for
upgrades anyways.

@stevvooe

@cgwalters I am not sure if you saw it, but I made the following point at the bottom of my comment:

we've actually found the right model is to have continuity collect as much information as possible, then provide tools to selectively apply metadata and verify the on disk data.

This approach is compatible with all of the points identified, while not limiting the capability of containers.

In general, images are data, as well. Indeed, a large amount of backup software and many distribution channels for filesystem images already exist. Why not make an archive format that is compatible with all of them? Conversely, why require a backup solution in addition to the ability to snapshot and archive containers? Both are acceptable use cases at either end of a continuum. It would be unfortunate to disallow one based on an arbitrary opinion, even if well-grounded.

Ultimately, deciding what a container or image archive can and cannot do just isn't productive. Shipping metadata is inexpensive, and the user can always choose to unpack it or not.

@cgwalters

In one view, sure it's all "just files". But I think there's a strong argument to have separate tools and data formats for different problem domains (source code, binaries, database backups) that share ideas rather than trying to do one format for everything. git is already good for source code and text, etc.

Don't underestimate the cost of inventing a new file format for things like mirroring, versioning, language bindings for parsers, etc.

@cgwalters

Going back to the top of the motivation here:

Problem: a rootfs for a container bundle sitting on-disk may not reflect the exact intended state of the
bundle when it was copied to its current location.

I'd say the correct solution here is for the container runtime to work with the storage layer to ensure immutability. See http://www.spinics.net/lists/linux-fsdevel/msg75085.html for a proposal there. It'd require plumbing through from the filesystem to the block level, but I think the end result would be simply better than classic tools like tripwire and IMA, as well as whatever verification is invented here. (Yes, that proposal doesn't cover xattrs; we'd likely want a way to freeze specific xattrs too.)

@stevvooe

@cgwalters Is there a windows port for OSTree?

@wking
Contributor

wking commented Feb 25, 2016

On Thu, Feb 25, 2016 at 01:30:26PM -0800, Colin Walters wrote:

Going back to the top of the motivation here:

Problem: a rootfs for a container bundle sitting on-disk may not
reflect the exact intended state of the bundle when it was copied
to its current location.

I'd say the correct solution here is for the container runtime to
work with the storage layer to ensure immutability.

The O_OBJECT proposal you link is about preserving filesystem content
after it lands on the filesystem, but it looks like @philips' initial
concern was about landing it on the filesystem in the first place.
For example, “my FAT-16 filesystem doesn't support POSIX permissions,
so my rootfs/foo/bar seems to have 0777 instead of the source's 0600”.

@cgwalters

I struggle to understand a scenario where one would reasonably want to unpack container content onto FAT-16 and expect to run it. Maybe inspection, but even then, you can do that from userspace easily enough with libarchive or whatever. If you have a Linux container, you have Linux...hence you have xfs/ext4/etc.

@wking
Contributor

wking commented Feb 25, 2016

On Thu, Feb 25, 2016 at 03:28:36PM -0800, Colin Walters wrote:

I struggle to understand a scenario where one would reasonably want
to unpack container content onto FAT-16…

A poor choice of example, but @philips was pointing out that not all
filesystems support the same attributes (he pointed out NFS without
xattrs, among other things). Regardless of the specific examples,
unpacking into a local filesystem (what @philips was talking about)
and maintaining content after that unpacking (what you were talking
about) are two separate things.

@cgwalters

Anyways my goal here is to try to ensure sharing of ideas, not necessarily code in this area - OSTree is certainly not going to take over the world as a way to get content from A to B any more than other projects in this area. Another good project to look at is Clear Linux: https://lists.clearlinux.org/pipermail/dev/2016-January/000159.html

A good example of a mistake in OSTree - I've come to realize the git-like Merkle tree model was a mistake for binary content, because it's really common with software updates for one "package" to change multiple paths (due to /usr/bin and /usr/share etc.) For git and source code it's a lot more common to only change one subdirectory.

So the Clear Linux manifest makes sense - there's no streaming, but that's fine because we aren't storing huge amounts of content to tape drives.

Also, OSTree not including the size in the tree metadata was really dumb but that's papered over with static deltas.

Speaking of deltas...that's another area where Docker really lacks, and for OSTree I ended up taking a ton of inspiration from http://dev.chromium.org/chromium-os/chromiumos-design-docs/filesystem-autoupdate. For more on that see https://ostree.readthedocs.org/en/latest/manual/formats/

@cgwalters

Regarding NFS...sure, but how does it help a user/admin to determine after the fact that things are broken? Basically the system is either going to munch the fscaps on /bin/ping or not; a system that tells you "hey the fscaps are missing" may lead you to Google faster but that's about it...

fscap binaries can be pretty easily worked around in an NFS root scenario by copying them into tmpfs or something on boot. Yes, it's ugly, see: https://bugzilla.redhat.com/show_bug.cgi?id=648654#c19

@cgwalters

Also, I'd like to go on a crusade to kill off setuid binaries in containers - they're legacy, and in a container world we should always run with NO_NEW_PRIVS on. Use containers as a reason to leave behind the continual security issues of setuid, and just have them on the host until someone rewrites PAM and /sbin/unix_chkpwd etc.

@wking
Contributor

wking commented Feb 26, 2016

On Thu, Feb 25, 2016 at 06:31:09PM -0800, Colin Walters wrote:

Regarding NFS...sure, but how does it help a user/admin to determine
after the fact that things are broken? Basically the system is
either going to munch the fscaps on /bin/ping or not, a system
that tells you "hey the fscaps are missing" may lead you Google
faster but that's about it...

Agreed if the goal is going image → filesystem → running container,
but I think @philips was concerned with round-tripping from image
files to filesystem bundles (image 1 → filesystem → image 2), since he
links tar-split which is focused on unpacking and repacking tarballs
while preserving the tarball's hash. Folks that are interested in
round-tripping through the filesystem would be concerned about
mismatches between attributes represented in the filesystem and
attributes represented in the image file, but not about freezing
content once it's on the filesystem. And folks that want to
round-trip in the face of limited filesystems can write tools that
stash the unsupported attributes elsewhere and pull them back in when
checking for changes, so they can do better than failing fast.
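That stash-and-restore idea can be sketched simply; a hypothetical sidecar file (the `Sidecar` shape and filename are assumptions for illustration, not an existing tool):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Sidecar stashes attributes the local filesystem cannot represent
// (e.g. xattrs on an NFS mount without xattr support) so a later
// round-trip can fold them back in before comparing against the
// manifest instead of failing fast.
type Sidecar struct {
	Xattrs map[string]map[string]string `json:"xattrs"` // path -> name -> value
}

// save writes the stashed attributes alongside the unpacked bundle.
func (s Sidecar) save(path string) error {
	data, err := json.MarshalIndent(s, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o600)
}

// load reads the stash back for use during verification or repacking.
func load(path string) (Sidecar, error) {
	var s Sidecar
	data, err := os.ReadFile(path)
	if err != nil {
		return s, err
	}
	err = json.Unmarshal(data, &s)
	return s, err
}

func main() {
	s := Sidecar{Xattrs: map[string]map[string]string{
		"rootfs/bin/ping": {"security.capability": "cap_net_raw+p"},
	}}
	if err := s.save("sidecar.json"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```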

Personally, I don't think round-tripping is particularly useful,
because:

  • Folks who just want to verify a bundle can fetch the original image
    [1] (e.g. by caching it locally) so they don't have to regenerate
    the original image from the filesystem.

  • Folks who want to generate a new image that reuses content
    addressable objects from an earlier image (e.g. adding a few files
    to a stock Debian image to create a new image) can handle that
    locally (e.g. with something like Git's staging area to bless
    changes they're interested in). There's no need to address this at
    the protocol / file-format level.

    [1] Subject: Re: OCI Bundle Digests Summary
    Date: Thu, 15 Oct 2015 16:52:42 -0700
    Message-ID: 20151015235242.GD28418@odin.tremily.us

@philips
Contributor Author

philips commented Apr 6, 2016

I am closing this out. The image format work is now part of the OCI Image Format project: https://github.com/opencontainers/image-spec

@advancedwebdeveloper

@thaJeztah
Member

@advancedwebdeveloper source code of that package is in https://github.com/containerd/continuity


10 participants