archive/tar: add support for writing tar containing sparse files #13548
/cc @dsnet who's been going crazy on the archive/tar package in the Go 1.6 tree ("master" branch)
This isn't a bug per se, but more of a feature request. Sparse file support is only provided for tar.Reader, but not tar.Writer. Currently it's a bit asymmetrical, but supporting sparse files on tar.Writer requires an API change, which may take some time to think about. Also, this is mostly unrelated to #12594, although that bug should definitely be fixed before any attempt at this is made. For the time being, I recommend putting this in the "unplanned" milestone; I'll revisit this issue once the other tar bugs are fixed.
@dsnet should I keep this here as a feature request, or is there another preferred format for those?
The issue tracker is perfect for that. So this is just fine.
This is my proposed addition to the tar API to support sparse writing. First, we modify tar.Header to have an extra field:

```go
type Header struct {
	...

	// SparseHoles represents a sequence of holes in a sparse file.
	//
	// The regions must be sorted in ascending order, not overlap with
	// each other, and not extend past the specified Size.
	// If len(SparseHoles) > 0 or Typeflag is TypeGNUSparse, then the file is
	// sparse. It is optional for Typeflag to be set to TypeGNUSparse.
	SparseHoles []SparseEntry
}

// SparseEntry represents a Length-sized fragment at Offset in the file.
type SparseEntry struct {
	Offset int64
	Length int64
}
```

On the reader side, nothing much changes; we already support sparse files. All that's being done is that we're now exporting information about the sparse file through the SparseHoles field.

On the writer side, the user must set the SparseHoles field if they intend to write a sparse file. It is optional for them to set Typeflag to TypeGNUSparse (there are multiple formats that can represent sparse files, so this is not important). The user then proceeds to write all the data for the file. For sparse holes, they will be required to write Length zeros for that given hole. It is a little inefficient writing zeros for the holes, but I decided on this approach because:
I should note that the tar format represents sparse files by indicating which regions have data, and treating everything else as a hole. The API exposed here does the opposite: it represents sparse files by indicating which regions are holes, and treating everything else as data. The reason for this inversion is that it fits the Go philosophy that the zero value of something be meaningful. The zero value of SparseHoles indicates that there are no holes in the file, and thus it is a normal file; i.e., the default makes sense. If we were to use SparseDatas instead, its zero value would indicate that there is no data in the file, which is rather odd.

It is a little inefficient requiring that users write zeros, and the bottleneck will be the memory bandwidth needed to transfer potentially large chunks of zeros. Though not necessary, the following methods may be worth adding as well:

```go
// Discard skips the next n bytes, returning the number of bytes discarded.
// This is useful when dealing with sparse files to efficiently skip holes.
func (tr *Reader) Discard(n int64) (int64, error) {}

// FillZeros writes the next n bytes by filling them in with zeros.
// It returns the number of bytes written, and an error if any.
// This is useful when dealing with sparse files to efficiently skip holes.
func (tw *Writer) FillZeros(n int64) (int64, error) {}
```

Potential example usage: https://play.golang.org/p/Vy63LrOToO
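For readers skimming the thread, here is a minimal sketch of how a caller might use the API proposed above to write a sparse file. SparseHoles, SparseEntry, and FillZeros are the hypothetical additions from this proposal (later reverted, per the comments below), not part of the released archive/tar package, and the file name and sizes are made up:

```go
package main

import (
	"archive/tar"
	"log"
	"os"
)

func main() {
	tw := tar.NewWriter(os.Stdout)
	defer tw.Close()

	// A 10 KiB file whose middle 8 KiB is a hole (proposed SparseHoles field).
	hdr := &tar.Header{
		Name:        "sparse.bin",
		Mode:        0644,
		Size:        10240,
		SparseHoles: []tar.SparseEntry{{Offset: 1024, Length: 8192}}, // hypothetical type
	}
	if err := tw.WriteHeader(hdr); err != nil {
		log.Fatal(err)
	}

	data := make([]byte, 1024)
	if _, err := tw.Write(data); err != nil { // 1 KiB of real data before the hole
		log.Fatal(err)
	}
	if _, err := tw.FillZeros(8192); err != nil { // hypothetical helper for the hole
		log.Fatal(err)
	}
	if _, err := tw.Write(data); err != nil { // 1 KiB of real data after the hole
		log.Fatal(err)
	}
}
```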
If Reader and Writer support sparse files transparently, why export SparseHoles? Is the issue that when writing you don't want to introduce a sparse hole that the caller did not explicitly request?
The Reader expands sparse files transparently. The Writer is "transparent" in the sense that a user can just do io.Copy(tw, sparseFile), and so long as the user has already specified where the sparse holes are, it will avoid writing the long runs of zeros. Purely transparent sparse files for Writer cannot easily be done, since the tar.Header is written before the file data. Thus, the Writer cannot know what sparse map to encode in the header prior to seeing the data itself, so Writer.WriteHeader needs to be told where the sparse holes are. I don't think tar should automatically create sparse files (for backwards compatibility). As a data point, the tar utilities do not automatically generate sparse files unless the -S flag is passed in. However, it would be nice if the user didn't need to come up with the SparseHoles themselves. Unfortunately, I don't see an easy solution to this. There are three main ways that sparse files may be written:
I looked at the source for GNU and BSD tar to see what they do:
I'm not too fond of the OS-specific things that they do to detect holes (granted, archive/tar already has many OS-specific things in it). I think it would be nice if tar.Writer provided a way to write sparse files, but I think we should delegate detection of sparse holes to the user for now. If possible, we can try to get sparse info during FileInfoHeader, but I'm not sure that os.FileInfo has the necessary information to do the queries that are needed.
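For illustration only (not part of the proposal), here is a rough sketch of the kind of OS-specific hole detection being discussed, using the Linux-only SEEK_HOLE/SEEK_DATA lseek modes from golang.org/x/sys/unix. The file name and the sparseHole type are made up for the example, and error handling for filesystems that don't support these modes is simplified:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/sys/unix"
)

// sparseHole describes a hole as an (offset, length) pair.
type sparseHole struct {
	Offset, Length int64
}

// detectHoles finds the holes in f using the Linux-specific SEEK_HOLE and
// SEEK_DATA lseek modes.
func detectHoles(f *os.File) ([]sparseHole, error) {
	fi, err := f.Stat()
	if err != nil {
		return nil, err
	}
	size := fi.Size()
	fd := int(f.Fd())

	var holes []sparseHole
	var pos int64
	for pos < size {
		// Next hole at or after pos; every file has an implicit hole at EOF.
		hole, err := unix.Seek(fd, pos, unix.SEEK_HOLE)
		if err != nil {
			return nil, err
		}
		if hole >= size {
			break
		}
		// Next data after the hole; an error (typically ENXIO) means the
		// hole runs to EOF. This is simplified for the sketch.
		data, err := unix.Seek(fd, hole, unix.SEEK_DATA)
		if err != nil {
			data = size
		}
		holes = append(holes, sparseHole{Offset: hole, Length: data - hole})
		pos = data
	}
	return holes, nil
}

func main() {
	f, err := os.Open("sparse.img") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	holes, err := detectHoles(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(holes)
}
```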
@dsnet Design SGTM (non-binding), do you plan to implement that feature?
I'll try and get this into the Go 1.9 cycle. However, a major refactoring of the tar.Writer implementation needs to happen first.
That being said, for all those interested in this feature, can you mention what your use case is? For example, are you only interested in being able to write a sparse file where you have to specify explicitly where the holes in the file are? Or do you expect to pass an …
My use case is that I'd like to leave zero specification up to the user of my library.
My use case is to solve a Docker issue, moby/moby#5419 (comment), which leads …
We (Hashicorp) run Packer builds for customers on our public SaaS, Atlas. We offer an Artifact Store for Atlas customers so that they can store their created Vagrant Boxes, VirtualBox (ISO, VMX), QEMU, or other builds inside our infrastructure. If the customer specifies using the … Many of the resulting QEMU, VirtualBox, and VMware builds can be fairly large (10-20GB), and we've had a few customers sparse the resulting disk image, which can lower the resulting artifact size to ~500-1024MB. This, of course, allows for faster downloads, less bandwidth usage, and a better experience overall. We first start to create the archive from the Atlas Post-Processor in Packer (https://github.com/mitchellh/packer/blob/master/post-processor/atlas/post-processor.go#L154). In this case, we wouldn't know explicitly where the holes in the file are, and would have to rely on …
@dsnet the use case is largely around container images. So the Reader design you proposed SGTM, though it would be nice if the tar reader also provided io.Seeker to accommodate the SparseHoles, but that is not a terrible issue, just less than ideal.
Running `useradd` without `--no-log-init` risks triggering a resource exhaustion issue: moby/moby#15585 moby/moby#5419 golang/go#13548
Sorry this got dropped in Go 1.9; I have a working solution out for review for Go 1.10.
Change https://golang.org/cl/56771 mentions this issue.
Came across this issue looking for sparse-file support in Golang. API looks good to me and certainly fits my use case :). Is there no …
On Unix OSes that support sparse files, seeking past EOF and writing or resizing the file to be larger automatically produces a sparse file.
Cool, so it detects that you've skipped past a block and not written anything to it and automatically assumes it's sparse? Nice 👍
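A minimal sketch of that behavior, assuming a Unix filesystem that supports sparse files (the path is made up; whether the result is actually sparse is up to the filesystem):

```go
package main

import (
	"log"
	"os"
)

func main() {
	f, err := os.Create("/tmp/sparse.img")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Resizing the file past its end (or seeking past EOF and then writing)
	// leaves the unwritten range as a hole on filesystems that support it.
	if err := f.Truncate(512 << 20); err != nil { // 512 MiB apparent size
		log.Fatal(err)
	}
	// `ls -lash /tmp/sparse.img` should now show a 512M apparent size
	// with almost no blocks actually allocated.
}
```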
Change https://golang.org/cl/78030 mentions this issue.
This CL removes the following APIs:

```
type SparseEntry struct{ ... }
type Header struct{ SparseHoles []SparseEntry; ... }
func (*Header) DetectSparseHoles(f *os.File) error
func (*Header) PunchSparseHoles(f *os.File) error
func (*Reader) WriteTo(io.Writer) (int, error)
func (*Writer) ReadFrom(io.Reader) (int, error)
```

This API was added during the Go 1.10 dev cycle and is safe to remove. The rationale for reverting is that Header.DetectSparseHoles and Header.PunchSparseHoles are functionality that probably belongs in the os package itself. The other API, like Header.SparseHoles, Reader.WriteTo, and Writer.ReadFrom, performs no OS-specific logic and only performs the actual business logic of reading and writing sparse archives. Since we do not know what the API added to package os may look like, we preemptively revert these non-OS-specific changes as well by simply commenting them out.

Updates #13548
Updates #22735

Change-Id: I77842acd39a43de63e5c754bfa1c26cc24687b70
Reviewed-on: https://go-review.googlesource.com/78030
Reviewed-by: Russ Cox <rsc@golang.org>
Unfortunately, the code had to be reverted and will not be part of 1.10 anymore. This bug should probably be reopened.
- Deterministic GID and UID - Docker recommends using `--no-log-init` until [this issue](golang/go#13548) gets resolved.
Dear Go heroes, please try to get sparse support into tar.Writer. Thanks!
With the proposed changes the time required to run `./bin/compose setup` is reduced from ~18 minutes down to ~7 minutes on my machine. In addition, a workaround is applied to reduce the size of the images.

== Changes

=== Speed-Up `bundle install`

The time spent within `bundle install` takes a significant amount of time during `./bin/compose setup`. We could make use of two improvements, which both allow us to utilize multiple CPU cores:

* Make use of the bundle `--jobs` argument
* Make use of the lesser known/used `MAKE` environment variable

A significant amount of time spent during `bundle install` is actually compiling C extensions, which is why the usage of the `MAKE` variable will drastically improve performance.

=== `useradd --no-log-init`

Unfortunately there is a nasty bug when running `useradd` for a huge `uid`, which could result in excessive image sizes. See attached links for more information.

=== BuildKit

BuildKit is the default builder toolkit for Docker on Windows and Docker Desktop on Macs. Using BuildKit will greatly improve performance when building docker images.

== Links

=== Speed-Up `bundle install`

* [One Weird Trick That Will Speed Up Your Bundle Install](https://build.betterup.com/one-weird-trick-that-will-speed-up-your-bundle-install/)

=== BuildKit

* [Build images with BuildKit](https://docs.docker.com/develop/develop-images/build_enhancements/)
* [Faster builds in Docker Compose 1.25.1 thanks to BuildKit Support](https://www.docker.com/blog/faster-builds-in-compose-thanks-to-buildkit-support/)

=== `useradd --no-log-init`

* Best practices for writing Dockerfiles: [User](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#user)
* golang/go: [archive/tar: add support for writing tar containing sparse files](golang/go#13548)
TLDR: Passing `--no-log-init` to `useradd` prevents an issue where the Docker image size would potentially increase to hundreds of gigabytes when passed a "large" UID or GID. This is apparently a side effect of how `useradd` creates the user fail logs. The issue is explained in more detail at docker/docs#4754. The root cause is apparently a combination of the following:

1. `useradd` by default allocates space for the faillog and lastlog for "all" users: https://unix.stackexchange.com/q/529827. If you pass it a high UID, e.g. 414053617, it will reserve space for all those 414053617 user logs, which amounts to more than 260GB.
2. The first bullet wouldn't be a problem if Docker would recognize the sparse file and compress it efficiently. However, there is an unresolved issue in the Go archive/tar package's (which Docker uses to package image layers) handling of sparse files: golang/go#13548. Eight years unresolved and counting!

Passing `--no-log-init` to `useradd` avoids allocating space for the faillog and lastlog and fixes the issue. Finding out the root cause for this bug drove me loco. Reader, enjoy :-)
is this bug still present?
I've created a Github Repo with all the needed steps for reproducing this on Ubuntu 12.04 using Go1.5.1. I've also verified that using Go1.5.2 still experiences this error.
Run `vagrant create` then `vagrant provision` from the repository root.

Expected Output:

Actual Output:

The Vagrantfile supplied in the repository runs the following shell steps:

1. `truncate -s 512M sparse.img`
2. `ls -lash sparse.img`
3. Archive the file with `compress.go` via `go run compress.go`
4. Extract the archive created by `compress.go` via `tar -xf`
5. `ls -lash sparse.img`
6. `tar -Scf sparse.tar sparse.img`
7. `tar -xf sparse.tar`
8. `ls -lash sparse.img`
This is somewhat related to #12594.
I could also be creating the archive incorrectly, and have tried a few different methods for creating the tar archive; each one, however, did not keep the sparse files intact upon extraction of the archive. This also cannot be replicated on OS X, as HFS+ does not have a concept of sparse files and instantly destroys any file sparseness, hence the need for running and testing the reproduction case in a Vagrant VM.
Any thoughts or hints into this would be greatly appreciated, thanks!
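The repository's compress.go is not reproduced above, but an archiver along these lines (a plain tar.Writer fed by io.Copy, with no sparse handling) would presumably show the reported behavior: every zero byte is written into the archive, and the extracted file loses its sparseness. The file names here are assumptions:

```go
package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
)

func main() {
	in, err := os.Open("sparse.img")
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	out, err := os.Create("sparse.tar")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	tw := tar.NewWriter(out)
	defer tw.Close()

	fi, err := in.Stat()
	if err != nil {
		log.Fatal(err)
	}
	hdr, err := tar.FileInfoHeader(fi, "")
	if err != nil {
		log.Fatal(err)
	}
	if err := tw.WriteHeader(hdr); err != nil {
		log.Fatal(err)
	}
	// io.Copy writes every byte, including the long runs of zeros, so the
	// archive records a plain (non-sparse) regular file.
	if _, err := io.Copy(tw, in); err != nil {
		log.Fatal(err)
	}
}
```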