
concatenation of *xz files and then decompression using pixz #89

Closed
justinmccrary opened this issue Nov 15, 2020 · 2 comments · Fixed by #90

Comments

@justinmccrary

I have many, many files which were compressed as part of a long-standing real-time loop using pixz.

Many are big, so I want to decompress them individually. But others are small, and I expect efficiency gains from concatenating them and then passing the concatenation to pixz for decompression.

So at root I am looking to do something like:
cat small_files*.txt.xz > file.txt.xz
pixz -d -p 4 > file.txt < file.txt.xz

Even better would be something like
pixz -d -p 4 > file.txt < small_files*.txt.xz
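The xz container format does allow back-to-back streams in a single file, so the `cat`-then-decompress workflow above is valid at the format level. A minimal sketch of that property using Python's stdlib `lzma` module (the payloads and names here are made up for illustration; this does not exercise pixz itself):

```python
import lzma

# Two hypothetical small payloads standing in for small_files*.txt
parts = [b"first small file\n", b"second small file\n"]

# Compress each part as its own .xz stream, then concatenate the raw
# streams, mimicking `cat small_files*.txt.xz > file.txt.xz`.
concatenated = b"".join(lzma.compress(p) for p in parts)

# A conforming decoder treats the result as multiple streams and returns
# the concatenation of the original contents.
assert lzma.decompress(concatenated) == b"".join(parts)
print("concatenated streams decompress cleanly")
```

This is exactly the behavior the user expects from `pixz -d` on a concatenated file; the bug below is about pixz mishandling the stream boundary, not about the format forbidding it.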

@vasi
Owner

vasi commented Nov 15, 2020

This is an interesting bug! Here's what's happening:

  • We read the first file's blocks successfully and reach the first file's index
  • When reading the index, we read in big chunks of data: https://github.com/vasi/pixz/blob/master/src/read.c#L328 . This leaves our read buffer with a lot of data in it
  • Reading the index + footer of the first file, and the header and first block of the second file, just consumes from the read buffer. That's fine
  • But when we dispatch the first block of the second file to a decompressor thread, we pass along the entire read buffer, even though it contains too much data. Then the next block doesn't get its data and blows up

So this bug only happens when there are small, concatenated files. Fun!
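The handoff described above can be modeled abstractly (plain Python with made-up names, not pixz's actual C code): if the reader passes its whole buffer to the thread handling the first block of the second file, the bytes belonging to the following block are lost.

```python
# Toy model of the dispatch bug (hypothetical names, not pixz's real API).
# `buf` holds everything read so far; `block_len` is how many bytes the
# first block of the second file actually needs.

def dispatch_buggy(buf, block_len):
    # Bug: hand the ENTIRE buffer to the first decompressor thread.
    first = buf
    rest = b""          # the next block gets nothing and blows up
    return first, rest

def dispatch_fixed(buf, block_len):
    # Fix: pass only the block's bytes; keep the rest for the next block.
    return buf[:block_len], buf[block_len:]

buf = b"BLOCK1....BLOCK2...."   # 10 bytes per block in this toy example

first, rest = dispatch_fixed(buf, 10)
assert (first, rest) == (b"BLOCK1....", b"BLOCK2....")

first, rest = dispatch_buggy(buf, 10)
assert rest == b""              # the second block's data is gone
```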

@vasi
Owner

vasi commented Nov 15, 2020

Should be fixed, please give it a try. You can do cat f1.xz f2.xz f3.xz | pixz -d > output.txt.

Note that pixz isn't really better than xz at compressing/decompressing lots of individual small files. We can only really use parallelization with large files (including tarballs that contain lots of small files).
