
archive/tar: add support for writing tar containing sparse files #13548

Open
grubernaut opened this issue Dec 9, 2015 · 43 comments
@grubernaut grubernaut commented Dec 9, 2015

I've created a GitHub repo with all the steps needed to reproduce this on Ubuntu 12.04 using Go 1.5.1. I've also verified that Go 1.5.2 still exhibits this error.

Run vagrant create then vagrant provision from repository root.

vagrant create
vagrant provision

Expected Output:

$ vagrant provision
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: stdin: is not a tty
==> default: go version go1.5.2 linux/amd64
==> default: Creating Sparse file
==> default: Proving file is truly sparse
==> default: 0 -rw-r--r-- 1 root root 512M Dec  9 15:26 sparse.img
==> default: Compressing in Go without sparse
==> default: Compressing in Go with sparse
==> default: FileInfo File Size: 536870912
==> default: Proving non-sparse in Go gained size on disk
==> default: 512M -rw-r--r-- 1 root root 512M Dec  9 15:26 non_sparse/sparse.img
==> default: Proving sparse in Go DID keep file size on disk
==> default: 0 -rw-r--r-- 1 root root 0 Dec  9 15:26 sparse/sparse.img
==> default: Compressing via tar w/ Sparse Flag set
==> default: Proving sparse via tar DID keep file size on disk
==> default: 0 -rw-r--r-- 1 root root 512M Dec  9 15:26 tar/sparse.img

Actual Output:

$ vagrant provision
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: stdin: is not a tty
==> default: go version go1.5.2 linux/amd64
==> default: Creating Sparse file
==> default: Proving file is truly sparse
==> default: 0 -rw-r--r-- 1 root root 512M Dec  9 15:35 sparse.img
==> default: Compressing in Go without sparse
==> default: Compressing in Go with sparse
==> default: Proving non-sparse in Go gained size on disk
==> default: 513M -rw-r--r-- 1 root root 512M Dec  9 15:35 non_sparse/sparse.img
==> default: Proving sparse in Go DID NOT keep file size on disk
==> default: 512M -rw-r--r-- 1 root root 512M Dec  9 15:35 sparse/sparse.img
==> default: Compressing via tar w/ Sparse Flag set
==> default: Proving sparse via tar DID keep file size on disk
==> default: 0 -rw-r--r-- 1 root root 512M Dec  9 15:35 tar/sparse.img

The Vagrantfile supplied in the repository runs the following shell steps:

  • Installs Go
  • Creates a sparse file via truncate -s 512M sparse.img
  • Proves that the file is sparse via ls -lash sparse.img
  • Runs compress.go via go run compress.go
  • Untars the archives created by compress.go via tar -xf
  • Verifies that the extracted files did not remain sparse, both with and without the sparse type set in the tar file's header: ls -lash sparse.img
  • Uses GNU/Tar to compress the sparse file with the sparse flag set tar -Scf sparse.tar sparse.img
  • Extracts the archive created by GNU/Tar tar -xf sparse.tar
  • Proves that GNU/Tar maintained sparse files ls -lash sparse.img

This is somewhat related to #12594.

I could also be creating the archive incorrectly; I've tried a few different methods for creating the tar archive, but none of them kept the sparse files intact upon extraction. This also cannot be replicated on OS X, as HFS+ has no concept of sparse files and instantly destroys any file sparseness, hence the need to run and test the reproduction case in a Vagrant VM.

Any thoughts or hints into this would be greatly appreciated, thanks!

@bradfitz bradfitz commented Dec 9, 2015

/cc @dsnet who's been going crazy on the archive/tar package in the Go 1.6 tree ("master" branch)

@dsnet dsnet commented Dec 9, 2015

This isn't a bug per se, but more of a feature request. Sparse file support is only provided for tar.Reader, not tar.Writer. Currently it's a bit asymmetrical, but supporting sparse files in tar.Writer requires an API change, which may take some time to think about.

Also, this is mostly unrelated to #12594, although that bug should definitely be fixed before any attempt at this is made. For the time being, I recommend putting this in the "Unplanned" milestone; I'll revisit this issue once the other tar bugs are fixed.

@grubernaut grubernaut commented Dec 9, 2015

@dsnet should I keep this here as a feature request, or is there another preferred format for those?

@dsnet dsnet commented Dec 9, 2015

The issue tracker is perfect for that. So this is just fine.

@rsc rsc changed the title archive/tar: Writing a tarfile does not maintain sparse files archive/tar: no support for writing tar containing sparse files Dec 28, 2015
@rsc rsc added this to the Unplanned milestone Dec 28, 2015
@rsc rsc changed the title archive/tar: no support for writing tar containing sparse files archive/tar: add support for writing tar containing sparse files Dec 28, 2015
@dsnet dsnet commented Feb 26, 2016

This is my proposed addition to the tar API to support sparse writing.

First, we modify tar.Header to have an extra field:

type Header struct {
    ...

    // SparseHoles represents a sequence of holes in a sparse file.
    //
    // The regions must be sorted in ascending order, must not overlap with
    // each other, and must not extend past the specified Size.
    // If len(SparseHoles) > 0 or Typeflag is TypeGNUSparse, then the file is
    // sparse. It is optional for Typeflag to be set to TypeGNUSparse.
    SparseHoles []SparseEntry
}

// SparseEntry represents a Length-sized fragment at Offset in the file.
type SparseEntry struct {
    Offset int64
    Length int64
}

On the reader side, nothing much changes. We already support sparse files. All that's being done is that we're now exporting information about the sparse file through the SparseHoles field.

On the writer side, the user must set the SparseHoles field if they intend to write a sparse file. It is optional for them to set Typeflag to TypeGNUSparse (there are multiple formats for representing sparse files, so this is not important). The user then proceeds to write all the data for the file; for sparse holes, they are required to write Length zeros for each given hole. Writing zeros for the holes is a little inefficient, but I decided on this approach because:

  • It is symmetrical with how tar.Reader already operates (which transparently expands a sparse file).
  • It is more representative of what the "end result" really looks like. For example, it allows a user to write a sparse file by just doing io.Copy(tarFile, sparseFile) and not worry about where the holes are (assuming they already populated the SparseHoles field).

I should note that the tar format represents sparse files by indicating which regions have data and treating everything else as a hole. The API exposed here does the opposite; it represents sparse files by indicating which regions are holes and treating everything else as data. The reason for this inversion is that it fits the Go philosophy that the zero value of a type be meaningful. The zero value of SparseHoles indicates that there are no holes in the file, and thus it is a normal file; i.e., the default makes sense. If we were to use SparseDatas instead, its zero value would indicate that there is no data in the file, which is rather odd.

Requiring users to write zeros is a little inefficient, and the bottleneck will be memory bandwidth when transferring potentially large runs of zeros. Though not strictly necessary, the following methods may be worth adding as well:

// Discard skips the next n bytes, returning the number of bytes discarded.
// This is useful when dealing with sparse files to efficiently skip holes.
func (tr *Reader) Discard(n int64) (int64, error) {}

// FillZeros writes the next n bytes by filling them in with zeros.
// It returns the number of bytes written, and an error if any.
// This is useful when dealing with sparse files to efficiently write holes.
func (tw *Writer) FillZeros(n int64) (int64, error) {}

Potential example usage: https://play.golang.org/p/Vy63LrOToO

@ianlancetaylor ianlancetaylor commented Feb 26, 2016

If Reader and Writer support sparse files transparently, why export SparseHoles? Is the issue that when writing you don't want to introduce a sparse hole that the caller did not explicitly request?

@dsnet dsnet commented Feb 26, 2016

The Reader expands sparse files transparently. The Writer is "transparent" in the sense that a user can just do io.Copy(tw, sparseFile), and so long as the user has already specified where the sparse holes are, it will avoid writing the long runs of zeros.

Purely transparent sparse file support for Writer cannot easily be done, since the tar.Header is written before the file data; the Writer cannot know what sparse map to encode in the header before seeing the data itself. Thus, Writer.WriteHeader needs to be told where the sparse holes are.

I don't think tar should automatically create sparse files (for backwards compatibility). As a data point, the tar utilities do not automatically generate sparse files unless the -S flag is passed. However, it would be nice if the user didn't need to come up with the SparseHoles themselves; unfortunately, I don't see an easy solution to this.


There are three main ways that sparse files may be written:

  1. In the case of writing a file from the filesystem (the use case that spawned this issue), I'm not aware of any platform-independent way to easily query for all the sparse holes. There is a way to do this on Linux and Solaris with SEEK_DATA and SEEK_HOLE (see my test in CL/17692), but I'm not aware of ways to do this on other OSes like Windows or Darwin.
  2. In the case of a round-trip read-write, a tar.Header read from Reader.Next and written to Writer.WriteHeader will work just fine as expected since tar.Header will have the SparseHoles field populated.
  3. In the case of writing a file from memory, the user will need to write their own zero-detection scheme (assuming they don't already know where the holes are).

I looked at the source for GNU and BSD tar to see what they do:

  • (Source) BSD tar attempts to use FIEMAP first, then SEEK_DATA/SEEK_HOLE, then (it seems) it avoids sparse files altogether.
  • (Source) GNU tar attempts to use SEEK_DATA/SEEK_HOLE, then falls back on brute-force zero block detection.

I'm not too fond of the OS-specific things they do to detect holes (granted, archive/tar already has many OS-specific things in it). I think it would be nice if tar.Writer provided a way to write sparse files, but I think we should delegate detection of sparse holes to the user for now. If possible, we can try to get sparse info during FileInfoHeader, but I'm not sure that os.FileInfo has the necessary information to do the queries that are needed.

@AkihiroSuda AkihiroSuda commented Nov 29, 2016

@dsnet Design SGTM (non-binding), do you plan to implement that feature?

@dsnet dsnet commented Dec 1, 2016

I'll try and get this into the Go 1.9 cycle. However, a major refactoring of the tar.Writer implementation needs to happen first.

@dsnet dsnet modified the milestones: Go1.9Maybe, Unplanned Dec 1, 2016
@dsnet dsnet commented Dec 7, 2016

That being said, for all those interested in this feature, can you mention what your use case is?

For example, are you only interested in being able to write a sparse file where you have to specify explicitly where the holes in the file are? Or do you expect to pass an os.FileInfo and have the tar package figure it out (I'm not sure this is possible)?

@willglynn willglynn commented Dec 8, 2016

My use is go_ami_tools/aws_bundle, a library which makes machine images for Amazon EC2. The inside of the Amazon bundle format is a sparse tar, which is a big advantage for machine images since there are usually lots of zeroes. go_ami_tools currently writes all the zeroes and lets them get compressed away, but a sparse tar would be better.

I'd like to leave zero specification up to the user of my library. ec2-bundle-and-upload-image, my example tool, would read zeroes straight from the host filesystem, but someone could just as easily plug the go_ami_tools library into a VMDK or QCOW reader, in which case the zeroes would be caller-specified.

@AkihiroSuda AkihiroSuda commented Dec 8, 2016

My use case is to solve a Docker issue, moby/moby#5419 (comment), which causes docker build to fail with ENOSPC when the container image contains a sparse file.

@grubernaut grubernaut commented Dec 8, 2016

We (Hashicorp) run Packer builds for customers on our public SaaS, Atlas. We offer up an Artifact Store for Atlas customers so that they can store their created Vagrant Boxes, VirtualBox (ISO, VMX), QEMU, or other builds inside our infrastructure. If the customer specifies using the Atlas post-processor during a Packer build, we first create an archive of the resulting artifact, and then we create a POST to Atlas with the resulting archive.

Many of the resulting QEMU, VirtualBox, and VMware builds can be fairly large (10-20GB), and we've had a few customers sparsify the resulting disk image, which can lower the resulting artifact's size to ~500-1024MB. This, of course, allows for faster downloads, less bandwidth usage, and a better experience overall.

We first start to create the archive from the Atlas Post-Processor in Packer (https://github.com/mitchellh/packer/blob/master/post-processor/atlas/post-processor.go#L154).
We then archive the resulting artifact directory, and walk the directory. Finally, we write the file headers, and perform an io.Copy: (https://github.com/hashicorp/atlas-go/blob/master/archive/archive.go#L381).

In this case, we wouldn't know explicitly where the holes in the file are, and would have to rely on os.FileInfo or something similar to generate the sparsemap of the file; although I'm not entirely sure that this is possible.

@vbatts vbatts commented Apr 24, 2017

@dsnet the use case is largely around container images, so the Reader design you proposed SGTM. It would be nice if the tar reader also accepted an io.Seeker to accommodate the SparseHoles, but that is not a terrible issue, just less than ideal.
For the Writer, either passing the FileInfo, or some quick way to detect holes, perhaps via an io.Writer wrapper with a type assertion?
Both sides would be useful, though. Thanks for your work on this.

@dsnet dsnet modified the milestones: Go1.10, Go1.9Maybe May 22, 2017
mistyhacks added a commit to docker/docker.github.io that referenced this issue Jun 2, 2017
Running `useradd` without `--no-log-init` risks triggering a resource exhaustion issue:

    moby/moby#15585
    moby/moby#5419
    golang/go#13548
@dsnet dsnet commented Aug 18, 2017

Sorry this got dropped in Go1.9, I have a working solution out for review for Go1.10.

@gopherbot gopherbot commented Aug 18, 2017

Change https://golang.org/cl/56771 mentions this issue: archive/tar: refactor Reader support for sparse files

@dsnet dsnet commented Aug 24, 2017

Another possibility is to use io.Seeker to seek to 1 byte before the last byte of the last fragment and write a single byte.

My evaluation of the approaches:


  1. Seek to 1 byte before the last hole and write a single zero byte. For consistency, Writer.ReadFrom can use the same 1-byte-before-EOF technique to ensure the file really is that long (since you can Seek to arbitrary offsets and most io.Seekers won't tell you that you are past EOF). I don't think byte-for-byte reproduction of sparse files (in terms of where the hole regions are) is necessary, so I'm okay if this implicitly causes a single block to be allocated at the end of the file. The generated sparse file is at the whim of the underlying filesystem anyway, which may not be able to exactly respect the hole regions from the original tar file (the source FS may have 4KiB blocks, while the target FS may have a different block size and can't represent holes at the original offsets).

  2. Special-case Reader.WriteTo for os.File rather than io.WriteSeeker. For consistency, WriteTo/ReadFrom should then both use os.File. The upside is that this is more clearly optimized for os.File, which has stronger guarantees about the behavior of seeking past EOF. The downside is that you can't use your own wrapper around os.File that does the hole punching (as needed on Windows).

  3. Special-case Reader.WriteTo for io.WriteSeeker, plus tell users that they need to call Truncate themselves. The downside is that this is a very subtle requirement for the user.

  4. Special-case Reader.WriteTo for both io.WriteSeeker and os.File (or a non-idiomatic Truncater interface). The downside is more special-casing; there is value in having as few special cases as possible.

My first vote goes to the "seek 1 byte before" technique. My second vote is special-casing os.File only.

@dsnet dsnet commented Aug 29, 2017

This bug seems relevant to what we're trying to do here: #21681

@dsnet dsnet removed their assignment Aug 31, 2017
@dsnet dsnet commented Sep 1, 2017

@rasky, have you started working on B yet? I have a working version of it using the "seek 1-byte before" technique.

@gopherbot gopherbot commented Sep 1, 2017

Change https://golang.org/cl/60871 mentions this issue: archive/tar: add Header.DetectSparseHoles

@gopherbot gopherbot commented Sep 1, 2017

Change https://golang.org/cl/60872 mentions this issue: archive/tar: add Reader.WriteTo and Writer.ReadFrom

gopherbot pushed a commit that referenced this issue Sep 18, 2017
To support the efficient packing and extracting of sparse files,
add two new methods:
	func Reader.WriteTo(io.Writer) (int64, error)
	func Writer.ReadFrom(io.Reader) (int64, error)

If the current archive entry is sparse and the provided io.{Reader,Writer}
is also an io.Seeker, then use Seek to skip past the holes.
If the last region in a file entry is a hole, then we seek to 1 byte
before the EOF:
	* for Reader.WriteTo to write a single byte
	to ensure that the resulting filesize is correct.
	* for Writer.ReadFrom to read a single byte
	to verify that the input filesize is correct.

The downside of this approach is when the last region in the sparse file
is a hole. In the case of Reader.WriteTo, the 1-byte write will cause
the last fragment to have a single chunk allocated.
However, the goal of ReadFrom/WriteTo is *not* the ability to
exactly reproduce sparse files (in terms of the location of sparse holes),
but rather to provide an efficient way to create them.

File systems already impose their own restrictions on how the sparse file
will be created. Some filesystems (e.g., HFS+) don't support sparseness and
seeking forward simply causes the FS to write zeros. Other filesystems
have different chunk sizes, which will cause chunk allocations at boundaries
different from what was in the original sparse file. In either case,
it should not be a normal expectation of users that the location of holes
in sparse files exactly matches the source.

For users that really desire to have exact reproduction of sparse holes,
they can wrap os.File with their own io.WriteSeeker that discards the
final 1-byte write and uses File.Truncate to resize the file to the
correct size.

Other reasons we chose this approach over special-casing *os.File:
	* The Reader already has special-case logic for io.Seeker
	* As much as possible, we want to decouple OS-specific logic from
	Reader and Writer.
	* This allows other abstractions over *os.File to also benefit from
	the "skip past holes" logic.
	* It is easier to test, since it is harder to mock an *os.File.

Updates #13548

Change-Id: I0a4f293bd53d13d154a946bc4a2ade28a6646f6a
Reviewed-on: https://go-review.googlesource.com/60872
Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
@gopherbot gopherbot closed this in 1eacf78 Sep 20, 2017
@rasky rasky commented Sep 20, 2017

Now that support has been added, it would be great if people interested in this feature would provide feedback at least on the API, before it gets shipped and can’t be changed anymore.

Have a look at https://tip.golang.org/pkg/archive/tar/

@grubernaut grubernaut commented Sep 22, 2017

cc: @mwhooker, as the original case for this issue came from an end-user requiring sparse support inside of Atlas-Go after creating a sparse image via Packer. More detail and function to be patched linked here: #13548 (comment)

@astromechza astromechza commented Sep 23, 2017

Came across this issue looking for sparse-file support in Golang. API looks good to me and certainly fits my usecase :). Is there no sysSparsePunch needed for unix?

@dsnet dsnet commented Sep 23, 2017

On Unix OSes that support sparse files, seeking past EOF and writing or resizing the file to be larger automatically produces a sparse file.

@astromechza astromechza commented Sep 23, 2017

Cool, so it detects that you've skipped past a block without writing anything to it and automatically treats it as sparse? Nice 👍

@gopherbot gopherbot commented Nov 16, 2017

Change https://golang.org/cl/78030 mentions this issue: archive/tar: partially revert sparse file support

gopherbot pushed a commit that referenced this issue Nov 16, 2017
This CL removes the following APIs:
	type SparseEntry struct{ ... }
	type Header struct{ SparseHoles []SparseEntry; ... }
	func (*Header) DetectSparseHoles(f *os.File) error
	func (*Header) PunchSparseHoles(f *os.File) error
	func (*Reader) WriteTo(io.Writer) (int64, error)
	func (*Writer) ReadFrom(io.Reader) (int64, error)

These APIs were added during the Go1.10 dev cycle and are safe to remove.

The rationale for reverting is because Header.DetectSparseHoles and
Header.PunchSparseHoles are functionality that probably better belongs in
the os package itself.

The other APIs, like Header.SparseHoles, Reader.WriteTo, and Writer.ReadFrom,
perform no OS-specific logic and only carry out the actual business logic of
reading and writing sparse archives. Since we do not know what the API added to
package os may look like, we preemptively revert these non-OS-specific changes
as well by simply commenting them out.

Updates #13548
Updates #22735

Change-Id: I77842acd39a43de63e5c754bfa1c26cc24687b70
Reviewed-on: https://go-review.googlesource.com/78030
Reviewed-by: Russ Cox <rsc@golang.org>
@rasky rasky commented Nov 17, 2017

Unfortunately, the code had to be reverted and will not be part of 1.10 anymore. This bug should probably be reopened.
