Right now when we fetch container images we parse them and:

- Drop things like files in `/var` on the floor
- Rename `/etc`
- Pass the tar stream to `ostree commit`, which will also discard e.g. mtimes
- And other bits.
This breaks the ability to re-synthesize the tar stream; basically, I think there are some use cases for `ostree container push`. This also relates to #273.
I think the solution to this is to take an approach similar to https://github.com/vbatts/tar-split, though what I'd propose is an algorithm like this:
```rust
let mut tar_meta_entries = Vec::new();
for entry in tar {
    // Capture both the parsed metadata and the exact raw header bytes.
    let (metadata, raw_metadata_bytes) = entry.metadata();
    // Write the entry's content into the repo as a regular ostree object.
    let checksum = write_ostree_object(&entry);
    tar_meta_entries.push((raw_metadata_bytes, checksum));
}
```
Then we serialize `tar_meta_entries`...somewhere. The commit metadata is an obvious place, but that would need some size analysis. We could store it as data under another ref, but it's mildly ugly to have two refs per layer.
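As a rough sketch for that size analysis, here's one possible way to serialize the pairs; the length-prefixed layout below is invented for illustration and isn't an existing ostree format:

```rust
// Hypothetical serialization of (raw_metadata_bytes, checksum) pairs into
// a single byte buffer, using a simple u32 length prefix per field.

fn serialize(entries: &[(Vec<u8>, String)]) -> Vec<u8> {
    let mut out = Vec::new();
    for (meta, checksum) in entries {
        // Raw tar header bytes, length-prefixed.
        out.extend_from_slice(&(meta.len() as u32).to_le_bytes());
        out.extend_from_slice(meta);
        // Checksums are fixed-length hex in practice, but prefix anyway.
        out.extend_from_slice(&(checksum.len() as u32).to_le_bytes());
        out.extend_from_slice(checksum.as_bytes());
    }
    out
}

fn deserialize(mut buf: &[u8]) -> Vec<(Vec<u8>, String)> {
    let mut entries = Vec::new();
    while !buf.is_empty() {
        let (len_bytes, rest) = buf.split_at(4);
        let len = u32::from_le_bytes(len_bytes.try_into().unwrap()) as usize;
        let (meta, rest) = rest.split_at(len);
        let (len_bytes, rest) = rest.split_at(4);
        let len = u32::from_le_bytes(len_bytes.try_into().unwrap()) as usize;
        let (sum, rest) = rest.split_at(len);
        entries.push((meta.to_vec(), String::from_utf8(sum.to_vec()).unwrap()));
        buf = rest;
    }
    entries
}

fn main() {
    let entries = vec![
        (vec![0u8; 512], "abc123".to_string()),
        (vec![1u8; 512], "def456".to_string()),
    ];
    let roundtrip = deserialize(&serialize(&entries));
    assert_eq!(entries, roundtrip);
    println!("roundtrip ok: {} entries", roundtrip.len());
}
```

Per entry this costs the 512-byte tar header plus the checksum plus 8 bytes of framing, which gives a first-order estimate of whether commit metadata is viable for large layers.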
To reliably reconstitute the uncompressed tar stream bit for bit, we just need to load `tar_meta_entries`, write out the raw metadata bytes for each entry, then load the corresponding ostree object and append its raw data.
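A self-contained sketch of that reconstitution step, where the in-memory `object_store` map and `toy_checksum` are stand-ins for the real ostree repository and object checksum:

```rust
use std::collections::HashMap;

// NOT a real content hash -- just enough to key the map for this demo.
fn toy_checksum(data: &[u8]) -> String {
    format!("{:08x}", data.iter()
        .fold(0u32, |acc, b| acc.wrapping_mul(31).wrapping_add(*b as u32)))
}

fn reconstitute(
    tar_meta_entries: &[(Vec<u8>, Option<String>)],
    object_store: &HashMap<String, Vec<u8>>,
) -> Vec<u8> {
    let mut stream = Vec::new();
    for (raw_metadata_bytes, checksum) in tar_meta_entries {
        // 1. Write back the exact header bytes captured at import time.
        stream.extend_from_slice(raw_metadata_bytes);
        // 2. Append the object payload, zero-padded to tar's 512-byte block size.
        if let Some(sum) = checksum {
            let data = &object_store[sum];
            stream.extend_from_slice(data);
            let pad = (512 - data.len() % 512) % 512;
            stream.extend(std::iter::repeat(0u8).take(pad));
        }
    }
    stream
}

fn main() {
    // Fake "import": one regular-file entry, 512-byte header, 13 bytes of data.
    let header = vec![b'h'; 512];
    let data = b"hello, ostree".to_vec();
    let sum = toy_checksum(&data);

    let mut object_store = HashMap::new();
    object_store.insert(sum.clone(), data.clone());
    let tar_meta_entries = vec![(header.clone(), Some(sum))];

    // The original stream was: header + data + zero padding to a 512 boundary.
    let mut original = header;
    original.extend_from_slice(&data);
    original.extend(std::iter::repeat(0u8).take((512 - data.len() % 512) % 512));

    assert_eq!(reconstitute(&tar_meta_entries, &object_store), original);
    println!("reconstituted {} bytes, bit for bit", original.len());
}
```

The key property is that nothing in the header is re-synthesized: because we replay the raw bytes rather than re-serializing parsed metadata, mtimes and any fields ostree normally discards come back unchanged.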
A key feature this would enable is `ostree container push` from the running OS. (Obviously, it also kind of scope creeps this whole thing into an ostree-based container storage backend)