
Feature: min packsize flag #3731

Merged: 15 commits into restic:master on Aug 7, 2022

Conversation

@metalsp0rk (Contributor) commented Apr 30, 2022

What is the purpose of this change? What does it change?

I would like to add the following flag in order to make the pack size tunable rather than hardcoded:

--pack-size

Was the change discussed in an issue or in the forum before?

https://forum.restic.net/t/control-the-minimal-pack-files-size/617

Supersedes #2750
Closes #2291

Checklist

  • I have read the Contribution Guidelines
  • I have enabled maintainer edits for this PR
  • I have added tests for all changes in this PR
  • I have added documentation for the changes (in the manual)
  • There's a new file in changelog/unreleased/ that describes the changes for our users (template here)
  • I have run gofmt on the code in all commits
  • All commit messages are formatted in the same style as the other commits in the repo
  • I'm done, this Pull Request is ready for review

@metalsp0rk force-pushed the feature/min-packsize-flag branch 3 times, most recently from 2ef6b1c to 14bbd80 on April 30, 2022 at 22:25
Review thread on cmd/restic/global.go (outdated, resolved)
@metalsp0rk force-pushed the feature/min-packsize-flag branch 2 times, most recently from 0629472 to dc673d1 on May 1, 2022 at 21:23
Review threads on cmd/restic/global.go and cmd/restic/cmd_prune.go (outdated, resolved)
@@ -362,7 +364,7 @@ func prune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, usedB
// if this is a data pack and --repack-cacheable-only is set => keep pack!
keep(p)

case p.unusedBlobs == 0 && p.duplicateBlobs == 0 && p.tpe != restic.InvalidBlob:
case p.unusedBlobs == 0 && p.duplicateBlobs == 0 && p.tpe != restic.InvalidBlob && (!opts.RepackSmall || packSize >= int64(repo.MinPackSize())):
Member (review comment):

I'd like the --repack-small option to primarily focus on migrating to a larger pack size. Collecting too many small files should just happen automatically. My idea would be to repack all packs which are less than 10% of the expected pack size. But only if at least one hundredth of the files in a repository fall into that category. That would require collecting a separate list of packs in addition to repackCandidates which then has to be checked to contain a sufficient number of packs to be useful. What do you think?
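
As a rough illustration of that heuristic, here is a minimal Go sketch; packInfo and selectSmallPacks are hypothetical names used for this example, not restic's actual types or API:

package sketch

// packInfo stands in for whatever per-pack statistics prune collects.
type packInfo struct {
    id   string
    size int64
}

// selectSmallPacks applies the proposed rule: consider packs smaller than 10%
// of the target pack size, but only repack them if at least 1% of all packs in
// the repository fall into that category.
func selectSmallPacks(packs []packInfo, targetPackSize int64) []packInfo {
    threshold := targetPackSize / 10

    var small []packInfo
    for _, p := range packs {
        if p.size < threshold {
            small = append(small, p)
        }
    }

    // Too few small packs to be worth the repacking effort.
    if len(small)*100 < len(packs) {
        return nil
    }
    return small
}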

@GilGalaad commented Jun 9, 2022

Hello everyone, and thanks to @metalsp0rk for this great work, which I would love to see make its way into the main branch.
I apologize if this has been asked before, but I have been following this PR for almost two years now and a lot has changed in the meantime; I could not find anything about this in the comments of the previous discussion.
I am planning, for the second time, to change the pack size of my current repo from the default to one more appropriate for my workload. All the tests done so far show no problems, but I have a question about a behavior that is, from my perspective, counterintuitive.

In the following examples I will use the official restic package from FreeBSD
restic 0.13.1 compiled with go1.18 on freebsd/amd64
and a self-compiled version from this branch (cross-compiled on a Windows machine)
restic 0.13.1 (v0.9.6-1597-gdc673d1d) compiled with go1.18.3 on freebsd/amd64
and I am starting from a regularly pruned repository with default values (so 5% max unused space).
Side note: restic-wrapper.py is just a simple Python wrapper that selects the binary I want to use and injects the repository coordinates and some rclone parameters into the environment, nothing more.

This is the output of official restic:

[root@DarkSun:/store/maintenance/restic]# ./restic-wrapper.py prune -n
repository 70565c28 opened successfully, password is correct
loading indexes...
loading all snapshots...
finding data that is still in use for 50 snapshots
[1:17] 100.00%  50 / 50 snapshots...
searching used packs...
collecting packs for deletion and repacking
[21:45] 100.00%  2209246 / 2209246 packs processed...

to repack:             0 blobs / 0 B
this removes:          0 blobs / 0 B
to delete:             0 blobs / 0 B
total prune:           0 blobs / 0 B
remaining:       9165439 blobs / 10.233 TiB
unused size after prune: 345.298 GiB (3.30% of remaining size)

So far so good.

Now I'm switching to the custom-built restic from this PR and setting a larger pack size by default:

[root@DarkSun:/store/maintenance/restic]# ./restic-wrapper.py prune -n --min-packsize 256
repository 70565c28 opened (repo version 1) successfully, password is correct
loading indexes...
loading all snapshots...
finding data that is still in use for 50 snapshots
[1:17] 100.00%  50 / 50 snapshots...
searching used packs...
collecting packs for deletion and repacking
[22:09] 100.00%  2209246 / 2209246 packs processed...

to repack:             0 blobs / 0 B
this removes:          0 blobs / 0 B
to delete:             0 blobs / 0 B
total prune:           0 blobs / 0 B
remaining:       9165439 blobs / 10.233 TiB
unused size after prune: 345.298 GiB (3.30% of remaining size)

Exactly the same output, as expected: the unused space is under the threshold, and I'm not telling it to repack.

Now I'm telling it to repack:

[root@DarkSun:/store/maintenance/restic]# ./restic-wrapper.py prune -n --min-packsize 256 --repack-small
repository 70565c28 opened (repo version 1) successfully, password is correct
loading indexes...
loading all snapshots...
finding data that is still in use for 50 snapshots
[1:18] 100.00%  50 / 50 snapshots...
searching used packs...
collecting packs for deletion and repacking
[22:01] 100.00%  2209246 / 2209246 packs processed...

to repack:          7419 blobs / 5.555 GiB
this removes:          0 blobs / 0 B
to delete:             0 blobs / 0 B
total prune:           0 blobs / 0 B
remaining:       9165439 blobs / 10.233 TiB
unused size after prune: 345.298 GiB (3.30% of remaining size)

This is where the unexpected part comes in.
I would have expected all the files in the repository to be repacked, but that didn't happen.

To trigger a repack of everything I must force max-unused to 0:

[root@DarkSun:/store/maintenance/restic]# ./restic-wrapper.py prune -n --min-packsize 256 --repack-small --max-unused 0
repository 70565c28 opened (repo version 1) successfully, password is correct
loading indexes...
loading all snapshots...
finding data that is still in use for 50 snapshots
[1:20] 100.00%  50 / 50 snapshots...
searching used packs...
collecting packs for deletion and repacking
[24:09] 100.00%  2209246 / 2209246 packs processed...

to repack:       9165439 blobs / 10.233 TiB
this removes:     273492 blobs / 345.298 GiB
to delete:             0 blobs / 0 B
total prune:      273492 blobs / 345.298 GiB
remaining:       8891947 blobs / 9.895 TiB
unused size after prune: 0 B (0.00% of remaining size)

Am I missing something?
Just from reading the documentation, I would expect the repack to happen with the 3rd command, not the 4th.
It's not a big deal; I just want to be sure I understand the internals of this feature, and perhaps suggest that the documentation explain this scenario better, which I expect will be very common once this feature is generally available.

Thank you if you had the patience to read this wall of text :)

@MichaelEischer (Member) commented Jun 11, 2022

tl;dr --min-packsize should allow a range of 4-128MB and default to 16MB.

While thinking about the --min-packsize parameter I ended up wondering which benefit packs offer in general and why putting everything in a single large pack isn't a good idea either.

Why not store individual blobs?

  1. Without packs, accessing lots of blobs would have a higher overhead (in terms of request size and processing costs) as each blob would have to be accessed individually.
  2. For small blobs, fast processing would require a very high number of concurrent requests; this is especially a problem when reading/writing has several milliseconds of latency or more. Writing also requires flushing the data to disk, which increases the latency drastically.
  3. For remote backends nearly all blobs are too small to reach the maximum TCP transfer speed.
  4. File systems don't like millions of small objects. We'd also pay a significant price in terms of space necessary to store all the file metadata.

Why not put everything into a single large pack file?

  1. No overlap of backup and upload (at least without rebuilding how uploading works)
  2. During the upload we have to keep a local copy of the pack to support retries. This could result in requiring scratch space with a size similar to the backup data set.
  3. An interrupted upload of a gigantic pack would probably lose everything that was already uploaded, thereby preventing restarts.
  4. Garbage collection with gigantic packs would require rewriting large parts of a repository.
  5. Using very large pack files would always require us to write temporary pack files during backup. The larger these are, the more likely it is that they are actually written to disk.

In a nutshell, the pack size should be as large as possible while only requiring a short time to upload and while keeping the local resource usage limited.

During backup we'll need at least packsize*connections of temporary space, which can be significant for large pack files. I also hope to eventually get rid of writing temporary pack files while uploading. However, this requires keeping all temporary packs in memory. The repository index also assumes that packs are below 4GB (for memory efficiency reasons) which provides a hard upper limit. Increasing fundamental properties (i.e. the pack size) by more than an order of magnitude can uncover unexpected problems, which means that we should be careful when increasing the max pack size.

So I think we should for now allow a range of 4-128MB with a default of 16MB for the min pack size.

4MB is the current hardcoded pack size, which seems to limit backup throughput, e.g. due to frequent flushing with the local backend. 16MB is a somewhat arbitrary size, but should already drastically reduce flushing-related performance problems. In addition, it is small enough that 5 of these pack files (the default connection count) can be uploaded in a bit over a minute on a network connection with roughly 1MB/s upload (which still isn't that uncommon). The upper limit of 128MB can be uploaded in a second via a Gigabit connection but still doesn't consume gigantic amounts of memory. In addition, it is already larger than the current pack size by a factor of 32, so I think even larger limits are not a good idea for now.
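
As a quick sanity check of those upload-time figures, here is a small self-contained Go calculation; the connection count, pack sizes, and bandwidth numbers are simply the ones assumed in this comment:

package main

import "fmt"

func main() {
    const MiB = 1 << 20

    // Assumptions: 5 backend connections (restic's default), a 16 MiB pack
    // size, and an uplink of roughly 1 MiB/s.
    packSize := 16 * MiB
    connections := 5
    uplink := 1 * MiB // bytes per second

    inFlight := packSize * connections
    fmt.Printf("data in flight: %d MiB\n", inFlight/MiB) // 80 MiB
    fmt.Printf("upload time: ~%d s\n", inFlight/uplink)  // ~80 s, "a bit over a minute"

    // A single 128 MiB pack over a Gigabit link (~119 MiB/s):
    fmt.Printf("128 MiB pack: ~%.1f s\n", 128.0/119.0) // ~1.1 s
}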

[Edit]The downsides of storing individual blobs are essentially all solved by packs of about a hundred MB. The only thing remaining is that it would be possible to reduce the number of files a bit more. Thus, even larger pack sizes primarily increase the resource consumption of restic without much of a benefit[/Edit]

I'm also aware of two potential problems which make larger pack files somewhat problematic: Until #3489 is merged, it is possible to end up with several small packs from each backup run. Pack streaming (used by restore and prune) downloads everything between the first and last blob in a pack file. It's probably a good idea to split that range if there's a multi-MB gap between blobs.
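
For the pack-streaming point, a rough Go sketch of that kind of gap-based range splitting could look like the following; blobSpan and splitRanges are illustrative names, not restic's actual API, and the blobs are assumed to be sorted by offset and non-overlapping:

package sketch

// blobSpan describes where a needed blob sits inside a pack file.
type blobSpan struct {
    offset, length int64
}

// splitRanges groups blobs into download ranges, starting a new range whenever
// the gap to the previous blob exceeds maxGap, so that long stretches of
// unneeded data are not downloaded.
func splitRanges(blobs []blobSpan, maxGap int64) [][2]int64 {
    var ranges [][2]int64
    for _, b := range blobs {
        end := b.offset + b.length
        if n := len(ranges); n > 0 && b.offset-ranges[n-1][1] <= maxGap {
            ranges[n-1][1] = end // close enough: extend the current range
            continue
        }
        ranges = append(ranges, [2]int64{b.offset, end})
    }
    return ranges
}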

@MichaelEischer force-pushed the feature/min-packsize-flag branch 2 times, most recently from 257ae5c to f1d127e on June 11, 2022 at 21:15
@MichaelEischer (Member):

I've rebased the PR, so it now includes compression and large pack support. Setting the min pack size is now done when creating a new repository object, which also ensures that the test suite uses a min pack size larger than 0 bytes.

@GilGalaad --repack-small should now work without setting --max-unused 0. However, it is still fairly aggressive and will repack every pack file that is smaller than the limit even if there is just a single such pack file.

@Slind14 commented Jul 2, 2022

We tested this version and saw a huge negative impact, even without changing the size.

  • this PR without change -> 600M
  • this PR with 128 min-packsize -> 800M
  • without this PR -> 2G

(screenshot: upload throughput graph)

@MichaelEischer (Member):

@Slind14 Thanks for testing. The numbers are quite unexpected. What was the exact test setup? Is "without this PR" restic 0.13.1 or the current master? Judging from the 2G best case, I guess you're uploading to the rest-server? Does setting --min-packsize 4 (or --pack-size 4 with the latest commits) restore the performance to the previous level? My expectation would be yes, as with a target pack size of 4MiB this PR should behave exactly like the current master.

This PR now also includes #3489 and #3611 which probably changes some of the performance tradeoffs.

But nevertheless the graph sort of looks like the uploads are stalling at some point. I've no real idea why that would happen. Just as a wild guess, is $TMPDIR (or /tmp if not set) a tmpfs mount or stored directly on disk?

@Slind14 commented Jul 3, 2022

I think I was unclear/misleading.

We are using the async PR and merged this one into it. So we compared the async version with async + packsize.

The temp dir is on an NVMe RAID; disk utilization is not the bottleneck.

@Slind14 commented Jul 3, 2022

With it set to 4 it looks about the same: slightly less, but that is not significant and could be due to other factors.

@MichaelEischer (Member):

With it set to 4 it looks about the same: slightly less, but that is not significant and could be due to other factors.

@Slind14 Could you test whether the current code still has the upload performance problem with 16MB chunks? For comparison, chunk sizes of 4 and 8 MB would also be highly interesting. Did you run the upload performance tests using the rest-server or the S3 backend? For the latter, I've noticed that file chunks above 16MB lead to multipart uploads, which can be detrimental to performance. These multipart uploads are now disabled below 200MB.

Review threads (outdated, resolved) on: doc/manual_rest.rst, cmd/restic/global.go, cmd/restic/cmd_prune.go, doc/047_tuning_backup_parameters.rst (three threads)
@MichaelEischer (Member):

Rebased to resolve conflicts. StreamPacks now loads a pack file in multiple parts if it would skip more than 4 MB.

@JsBergbau (Contributor):

Are there plans to include this in the next release? It would be great, because when converting to repo v2 with compression you could increase the pack size at the same time. Bigger pack sizes would probably also make the conversion/compression of uncompressed packs a bit faster.

@MichaelEischer force-pushed the feature/min-packsize-flag branch 2 times, most recently from 6f1fcd6 to 987e80d on July 30, 2022 at 14:55
@MichaelEischer (Member):

Rebased to fix merge conflicts.

@vejnar commented Aug 5, 2022

Thanks, Michael, for working on getting this merged. As reported on this page by multiple people, different setups lead to different performance. I don't think that is avoidable, and it is somewhat the user's responsibility when changing these parameters. That said, I have been using restic with a MaxPackSize of 1024MB for a while, as I have very large repos (1TB to >30TB). Is there a reason to limit MaxPackSize to 128MB?

@MichaelEischer (Member):

As reported on this page by multiple people, different setups lead to different performance. I don't think that is avoidable, and it is somewhat the user's responsibility when changing these parameters.

The part that's bugging me is that I don't see a reason why 4MB pack files could be faster during a backup than 16MB ones. My hope is/was that the increase would lead to improvements across the board, or at least maintain the current performance. That's why I'd prefer to see a few more test results first, to get a good idea of whether performance regressions are likely to occur for some users. Depending on these results it might be useful to add a corresponding note to the documentation. But yes, the fallback is to ask those affected by regressions to simply specify --pack-size 4.

That said, I have been using restic with a MaxPackSize of 1024MB for a while, as I have very large repos (1TB to >30TB). Is there a reason to limit MaxPackSize to 128MB?

The reason to limit it to 128MB for now is that this is already larger than the current target pack size by a factor of 32, and increasing a parameter by more than an order of magnitude can lead to surprises or scaling problems. In particular, I'd like to get rid of the temporary files currently created by backup and just keep the packs in memory, but for 1GB pack files this would likely lead to memory usage problems. The difference between 128MB and 1GB shouldn't have much influence on performance; it would probably just decrease the number of files in the repository.

@Slind14 commented Aug 6, 2022

Regarding the performance question: I have no information on the methodology of this test, but I've got a lot of questions. @Slind14 would you be able to describe the system configuration for the test in which you observed poor performance? I'm curious if I can reproduce it.

Data:

  • usually 500MB to 10GB files (tens of TBs)
  • already compressed (restic compression can be disabled)

Server to Backup:

  • disks that can sustain +10G, in our case 10x NVMe Raid 6
  • 10 G network

Server to Backup to:

  • Wasabi location with 10G (concurrent upload across multiple entry points)
  • or custom S3 with 10G network -> in this case +14 HDDs Raid 10 + NVMe write cache/buffer -> to be able to handle the write
  • ^ we test both of these variants: same datacenter park and cross-continent

@Slind14 commented Aug 7, 2022

Only tested with S3 in the same datacenter park with a build on the latest commit of this branch from the 1st of August:

restic                  s3.connection  FILE_READ_CONCURRENCY  Pack-size  result
async_read_concurrent   64             3                      -          2.9G
async_read_concurrent   64             4                      -          3.6G
async_read_concurrent   256            6                      -          4.4G
async_read_concurrent   8              2                      -          1.6G
async_read_concurrent   8              4                      -          1.6G
packsize_latest         8              3                      default    2.1G
packsize_latest         8              3                      4          1.6G
packsize_latest         8              3                      8          2.0G
packsize_latest         8              3                      16         2.3G
packsize_latest         8              3                      64         2.7G
packsize_latest         8              3                      128        3.0G
packsize_latest         8              4                      128        2.7G
packsize_latest         64             3                      64         3.1G
packsize_latest         256            3                      64         3.1G
packsize_latest         256            4                      64         3.1G

Is it possible that this branch does not support FILE_READ_CONCURRENCY because we bottleneck at 3.1G no matter what?

@MichaelEischer (Member):

Is it possible that this branch does not support FILE_READ_CONCURRENCY because we bottleneck at 3.1G no matter what?

Yep, the file read concurrency parameter is only included in #2750. This PR was intended to discuss and implement just the pack-size configuration. I'll trim down #2750 after merging this one.

@Slind14 Thanks a lot for testing. The results look what I was hoping for, so that we can go ahead and merge the pack-size flag 🥳

@MichaelEischer (Member) left a comment:

LGTM. Thanks a lot for the discussions, testing and tons of patience to get this merged!

@MichaelEischer merged commit 2930a10 into restic:master on Aug 7, 2022
@GilGalaad:

This is great news, we have been waiting for so long <3

@JsBergbau (Contributor):

Thanks for merging this into master.
I've built restic from the current master branch and tried this feature with a local repository and a pack size of 64 MB on Windows.
This leads to the temporary files being written out to the SSD in the temp directory, as reported for checking the data in #3375.
This also applies with the new default 16 MB pack size.
Environment variables:

set RESTIC_PACK_SIZE=64
set RESTIC_COMPRESSION=max

This should at least be documented, because you should know that you wear out your SSD when backing up with large pack files.

I've checked with a 4 MB pack size. Some data is also written to the temp directory there, but much less than with a bigger pack size.

On Linux with a small machine (Raspberry Pi) the behaviour is the same, except that with a 4 MB pack size much less is written to the temp directory compared to Windows.

So I would add to the docs:

The side effect of increasing the pack size is requiring more disk space for temporary pack
files created before uploading. The space must be available in the system default temp
directory, unless overridden by setting the ``$TMPDIR`` environment variable.
Another side effect is SSD wear. The bigger the pack size, the higher the likelihood that
packs get written to disk. With 4 MB packs this happens quite seldom, whereas with the
default 16 MB pack size most packs will be written to your SSD first.

Or something like that.

@metalsp0rk (Contributor, Author):

I've checked with a 4 MB pack size. Some data is also written to the temp directory there, but much less than with a bigger pack size.

@JsBergbau The writes with a 4MB pack size are the same as the writes with a 16MB or 128MB pack size; the only difference is the number of files written. Because the temp directory stores packs only while a pack is in flight, it may appear that the total writes are lower, but that is not the case. (In fact, with a 4MB pack size there might be a minuscule increase in writes, because the larger file count means more file metadata is written.)

I thought I had that in the documentation somewhere; at least in the original #2750 PR there was a mention of it.

@MichaelEischer Thank you very much for merging this!

@JsBergbau (Contributor) commented Aug 7, 2022

@metalsp0rk Sorry, that's not criticism of your pull request; I'm really glad that it finally made it into master.

When files are smaller they don't seem to get written to disk. Here is a screenshot from the Windows Task Manager using 16 MB and 4 MB pack sizes. Note that Task Manager caps the displayed data transfer rate, so it is actually much higher than 250 MB/s, which means much more data is written to the SSD with 16 MB packs than with 4 MB packs.

Backup data is read and written to D:

(screenshot: Task Manager disk activity with 16 MB and 4 MB pack sizes)

EDIT: Note also that it is the "System" process, not restic, that writes these pack files.
(screenshot: Task Manager showing the System process performing the writes)

@MichaelEischer (Member):

The writes with a 4MB pack size are the same as the writes with a 16MB or 128MB pack size; the only difference is the number of files written. Because the temp directory stores packs only while a pack is in flight, it may appear that the total writes are lower, but that is not the case. (In fact, with a 4MB pack size there might be a minuscule increase in writes, because the larger file count means more file metadata is written.)

That direct relationship only holds if you ignore the page cache. Disk I/O is first buffered in memory and either written after some delay or when flushing. Restic doesn't call flush for its temp files, so we're in the case where the data is written after some delay. If a temporary file is deleted before being flushed to disk, the corresponding data can simply be discarded. As larger pack files are around for much longer, this also drastically increases the chance of their being written to disk.

For Windows we explicitly ask the OS not to write the data to disk (#3610) and also ensure that the file is deleted no matter what happens. But apparently Windows has decided to write the temp files to disk nevertheless. So it looks like we should try to keep the whole pack file in memory if possible.
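
For illustration only, here is a minimal Go sketch of the temp-file lifecycle described above; it is not restic's actual upload code, it just shows the write-then-delete pattern without an explicit Sync that lets the page cache absorb short-lived files:

package sketch

import "os"

// writeTempPack writes pack data to a temporary file without calling Sync, so
// the data mostly sits in the page cache. If the file is removed before the OS
// flushes it, the cached data can be dropped without ever touching the disk;
// the longer the file lives (i.e. the larger the pack), the more likely it is
// written out anyway.
func writeTempPack(data []byte) error {
    f, err := os.CreateTemp("", "pack-*.tmp")
    if err != nil {
        return err
    }
    defer os.Remove(f.Name()) // delete as soon as the upload is done
    defer f.Close()

    if _, err := f.Write(data); err != nil {
        return err
    }
    // Deliberately no f.Sync() here.
    // ... the file would now be read back and uploaded to the backend ...
    return nil
}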

@JsBergbau (Contributor) commented Aug 8, 2022

I think something has changed in the behaviour of Windows; maybe this is even a bug in Windows? I remember that when I began using restic I checked whether restic writes the data out to disk, and it didn't. I knew that behaviour from Duplicati (https://forum.duplicati.com/t/attention-ssd-users-if-your-temp-directory-isnt-ram-backed-duplicati-will-prematurely-wear-your-ssd/5779), so I explicitly checked whether restic had the same behaviour and needed a RAM disk like Duplicati did. That wasn't the case.

I also downloaded restic 0.9.6 to verify, but with 0.9.6 data also gets written to the SSD. So I'm quite sure there has been some change in the behaviour of Windows.

Edit: Found this post #2679 (comment) where I confirmed that restic didn't write the data to the temp drive.

@JsBergbau (Contributor):

A side note: when you've forgotten to set compression=max and want to compress later at maximum, you can use this PR to increase the minimum pack size. That also leads to re-compression at the maximum level. Is this behaviour intended, or is it an accident? If it is intended, I would create a pull request to update the documentation accordingly, because this feature is really useful. The downside is that just changing the pack size takes some time, because compression is also re-applied. But you probably can't just byte-copy the packs; you have to decompress and recompress the content.

@JsBergbau (Contributor):

Some data from converting a v1 repo to v2, pruning with maximum compression and a 64 MB pack size on Linux via ./restic prune --repack-uncompressed --repack-small --no-cache. The rest-server is on the same machine, so no cache is needed.

(screenshot: disk write rate during the conversion)

The first graph is from a run where I forgot export TMPDIR=/dev/shm.

Judging by the data rate, about 8% of the temp files were written out to the SSD. The system has 32 GB of RAM, of which only 3.3 GB are in use, with restic using more than 2 GB of that. It may be that the more RAM is available, the fewer temp files get written to disk.

@junqfisica:

Only tested with S3 in the same datacenter park with a build on the latest commit of this branch from the 1st of August:

@Slind14 First, thank you for showing a test against S3. Currently we are trying to tune restic's performance to back up 85TB of data into S3 (Scality ring). Our files are all images with an average size of 60MB.

Would you mind explaining what s3.connection is? Is this an --option parameter from restic?

I'm also wondering if we can use a bigger pack-size (64 or 128) to improve performance. Also, is there a problem with changing this parameter after an initial backup?

@MichaelEischer (Member):

@junqfisica Yes, you would specify the connections parameter as restic -o s3.connections=10 [...]; for other options, have a look at the output of restic options. The documentation now also includes some advice on performance tuning, see https://restic.readthedocs.io/en/latest/047_tuning_backup_parameters.html. Note that this describes the behavior of the current beta version of restic (which will become 0.14.0 once released). restic 0.13.1 does not support most of these parameters, nor does its performance scale as well.

@junqfisica:

@MichaelEischer Thank you so much for your reply. Do you know when a stable version (0.14.0) will be released on epel-release?

@MichaelEischer (Member):

We first have to release 0.14.0, which shouldn't be too far out anymore. I can't tell how long it will take to be available in other repositories.

@legg33 commented Aug 25, 2022

I just wanted to confirm that Windows seems to be very eager to write these temp files to disk. On my machine with 64GB of RAM and only 10GB in use, I see during backup that almost all temp files are written to disk, even though they live for less than a second (backing up to an external disk at ~100MB/s, I also see about 100MB/s written to my internal NVMe). The pack size does not seem to make a noticeable difference here.

For now I'm creating a RAM disk before each backup, but of course it would be nice if restic could keep the packs in memory itself.

Thanks to all the devs for all the great work so far!

Successfully merging this pull request may close these issues.

Ability to tweak chunk pack size