
archive/tar: re-add sparse file support #22735

Open
rsc opened this issue Nov 15, 2017 · 5 comments

@rsc rsc commented Nov 15, 2017

Hi @dsnet. Thanks for all the Go 1.10 archive/tar work. It's really an amazing amount of cleanup, and it's very well done.

The one change I'm uncomfortable with from an API point of view is the sparse hole support.

First, I worry that it's too complex to use. I get lost trying to read Example_sparseAutomatic - 99% of it seems to have nothing to do with sparse files - and I have a hard time believing that we expect clients to write all this code. Despite the name, nothing about the example strikes me as “automatic.”

Second, I worry that much of the functionality here does not belong in archive/tar. Tar files are not the only context in which a client might care about where the holes are in a file or about creating a new file with holes, and yet somehow this functionality is expressed in terms of tar.Header and a new tar.SparseHole structure instead of tar-independent operations. Tar should especially not be importing and using such subtle bits of syscall as it does in sparse_windows.go.

It's too late to redesign this for Go 1.10, so I suggest we pull out this new API and revisit for Go 1.11.

For Go 1.11, I would suggest to investigate (1) what an appropriate API in package os would be, and (2) how to make archive/tar take advantage of that more automatically.

For example, perhaps it would make sense for package os to add

// Regions returns the boundaries of data and hole regions in the file.
// The result slice can be read as pairs of offsets indicating the location
// of initialized data in the file or, ignoring the first and last element,
// as pairs of offsets indicating the location of a hole in the file.
// The first element of the result is always 0, and the last element is
// always the size of the file.
// For example, if f is a 4-kilobyte file with data written only to the
// first and last kilobyte (and therefore a 2-kilobyte hole in the middle),
// Regions would return [0, 1024, 3072, 4096].
//
// On operating systems that do not support files with holes or do
// not support querying the location of holes in files,
// Regions returns [0, size].
//
// Regions may temporarily change the file offset, so it should not
// be executed in parallel with Read or Write operations.
func (f *File) Regions() ([]int64, error)

That would avoid archive/tar's current DetectSparseHoles and SparseEntry, and tar.Header would only need to add a new field Regions []int64. (Regions is not a great name; better names are welcome.) Note that using a simple slice of offsets avoids the need for a special invertSparseEntries function entirely: you just change whether you read pairs starting at offset 0 or 1.

As for "punching holes", it suffices on Unix (as you know) to simply truncate the file (which Create does anyway) and then not write to the holes. On Windows it appears to be necessary to set the file type to sparse, but I don't see why the rest of sparsePunchWindows is needed. It seems crazy to me that it could possibly be necessary to pre-declare every hole location in a fresh file. The FSCTL_SET_ZERO_DATA looks like it is for making a hole in an existing file, not a new file. It seems like it should suffice to truncate the target file, mark it as sparse, set the file size, and then write the data. What's left should be automatically inferred as holes. If we were to add a new method SetSparse(bool) to os.File, then I would expect it to work on all systems to do something like:

f = Create(file)
f.SetSparse(true) // no-op on non-Windows systems, FSCTL_SET_SPARSE (only) on Windows
for each data chunk {
	f.WriteAt(data, offset)
}
f.Truncate(targetSize) // in case of final hole, or write last byte of file

Finally, it seems like handling this should not be the responsibility of every client of archive/tar. It seems like it would be better for this to just work automatically.

On the tar.Reader side, WriteTo already takes care of not writing to holes. It could also call SetSparse and use Truncate if present as an alternative to writing the last byte of the file.

On the tar.Writer side, I think ReadFrom could also take care of this. It would require making WriteHeader compute the header to be written to the file but delay the actual writing until the Write or ReadFrom call. (And that in turn might make Flush worth keeping around not-deprecated.) Then when ReadFrom is called to read from a file with holes, it could find the holes and add that information to the header before writing out the header. Both of those combined would make this actually automatic.

At the very least, it seems clear that the current API steps beyond what tar should be responsible for. I can easily see developers who need to deal with sparse files but have no need for tar files constructing fake tar headers just to use DetectSparseHoles and PunchSparseHoles. That's a strong signal that this functionality does not belong in tar as the primary implementation. (A weaker but still important signal is that to date the tar.Header fields and methods have not mentioned os.File explicitly, and it should probably stay that way.)

Let's remove this from Go 1.10 and revisit in Go 1.11. Concretely, let's remove tar.SparseEntry, tar.Header.SparseHoles, tar.Header.DetectSparseHoles, tar.Header.PunchSparseHoles, and the deprecation notice for tar.Writer.Flush.

Thanks again.
Russ

@rsc rsc added this to the Go1.10 milestone Nov 15, 2017

@dsnet dsnet commented Nov 15, 2017

(2) how to make archive/tar take advantage of that more automatically.

Automatic creation of sparse tar archives should actually be a non-goal. That is, we should not generate sparse archives with zero changes to user code. Instead, creation of sparse archives should require only one simple change to user code if they want sparse archives. In other words, make sparse support an easy opt-in, but not automatic.

The two most common implementations, GNU tar and BSD tar, both understand sparse headers. The problem lies with a long-tail of other tar implementations that do not understand sparse headers. For example, dpkg does not understand sparse files and it would be terrible if we started creating sparse archives against user expectation.

On the tar.Writer side, I think ReadFrom could also take care of this. It would require making WriteHeader compute the header to be written to the file but delay the actual writing until the Write or ReadFrom call.

For the reason I stated earlier I'm opposed to ReadFrom being fully automatic for creating sparse archives.

Also, I feel uncomfortable with WriteHeader being a lazy write. First, it's contrary to the naming of the method. Second, it feels weird to me that the implementation of Writer would make assumptions about OS-specific functionality. Third, I have heard anecdotally of users relying on WriteHeader being non-lazy because they care about which parts of an archive are "headers" and which parts are "data".

On the tar.Reader side, WriteTo already takes care of not writing to holes. It could also call SetSparse and use Truncate if present as an alternative to writing the last byte of the file.

Automatic (attempted) creation of sparse files on the filesystem sounds fine, since an OS or filesystem that doesn't support them still writes a valid file (except for NaCl; see #21728), but I do have hesitation about calling OS-specific methods in WriteTo.


(1) what an appropriate API in package os would be

I support having all of the OS-specific sparse logic in package os. I share your concern about users abusing Header.DetectSparseHoles and Header.PunchSparseHoles to get at the OS functionality without caring about tar archive.

you just change whether you read pairs starting at offset 0 or 1.

I like how easy it is to convert between the two semantics, but I would feel more comfortable if there was some bit of type safety. We can discuss further when discussing the change to os.

Note that using a simple slice of offsets avoids the need for a special invertSparseEntries function entirely

invertSparseEntries is an implementation detail. It has the useful property that it normalizes the offsets, so we would still need something like it. That is, if there are two holes adjacent to each other, invertSparseEntries combines them.

@gopherbot gopherbot commented Nov 15, 2017

Change https://golang.org/cl/78030 mentions this issue: archive/tar: partially revert sparse file support

@rsc rsc commented Nov 15, 2017

Thanks for the CL.

I see your point about the Writer not doing it automatically. I'm OK with that.

Reader.WriteTo doesn't seem like it really needs any OS-specific stuff at all. Assuming a new SetSparse method, it just has to sniff for SetSparse+Truncate+Seek. And actually Truncate is basically optional; it's just SetSparse and Seek.

@dsnet dsnet commented Nov 15, 2017

I still think of SetSparse as OS-specific functionality, while Seek (even though it originates from fseek) is a common enough paradigm from the io package that it's not all that OS-specific.

Are you going to propose changes to os.File or should I write that up?

gopherbot pushed a commit that referenced this issue Nov 16, 2017
This CL removes the following APIs:
	type SparseEntry struct{ ... }
	type Header struct{ SparseHoles []SparseEntry; ... }
	func (*Header) DetectSparseHoles(f *os.File) error
	func (*Header) PunchSparseHoles(f *os.File) error
	func (*Reader) WriteTo(io.Writer) (int64, error)
	func (*Writer) ReadFrom(io.Reader) (int64, error)

These APIs were added during the Go 1.10 dev cycle and are safe to remove.

The rationale for reverting is because Header.DetectSparseHoles and
Header.PunchSparseHoles are functionality that probably better belongs in
the os package itself.

The other APIs, such as Header.SparseHoles, Reader.WriteTo, and Writer.ReadFrom,
perform no OS-specific logic and only implement the actual business logic of
reading and writing sparse archives. Since we do not know what the API added to
package os may look like, we preemptively revert these non-OS-specific changes
as well by simply commenting them out.

Updates #13548
Updates #22735

Change-Id: I77842acd39a43de63e5c754bfa1c26cc24687b70
Reviewed-on: https://go-review.googlesource.com/78030
Reviewed-by: Russ Cox <rsc@golang.org>
@dsnet dsnet changed the title archive/tar: remove SparseHoles for Go 1.10 + revisit in Go 1.11 archive/tar: re-add sparse file support Nov 16, 2017
@dsnet dsnet self-assigned this Nov 16, 2017
@dsnet dsnet removed this from the Go1.10 milestone Nov 16, 2017
@dsnet dsnet added this to the Go1.11 milestone Nov 16, 2017

@rasky rasky commented Nov 17, 2017

Reader.WriteTo doesn't seem like it really needs any OS-specific stuff at all. Assuming a new SetSparse method, it just has to sniff for SetSparse+Truncate+Seek. And actually Truncate is basically optional, it's just SetSparse and Seek.

AFAICT, on Windows, you can't create sparse zero areas by seeking, as the MSDN documentation clearly states:

https://msdn.microsoft.com/it-it/library/windows/desktop/aa365566%28v=vs.85%29.aspx
https://blogs.msdn.microsoft.com/oldnewthing/20110922-00/?p=9573/

The blog post hints that you can mark the file sparse and immediately create a full-size sparse span, so that later writes and seeks would fragment it while leaving sparse areas under the seeks. I have no clue whether this approach has a performance impact, and I would say it doesn't really belong in an os.File.SetSparse API anyway.

Note that this was discussed at length in #13548, where also your proposal of lazy header writing was analyzed and discarded.
