
Use fragment size for autoGC capacity calculation #3562

Merged
merged 1 commit into NixOS:master on May 6, 2020

Conversation

@pikajude
Contributor

pikajude commented May 4, 2020

Our Darwin builders at work weren't running auto-GC despite being configured to, so I had to step in and figure out what was going on.

It turns out that in struct statvfs, f_bsize and f_frsize are essentially always the same on Linux, while on macOS, f_bsize reports a completely different statistic: the optimal I/O transfer size (see the statvfs man page).

On my relatively new MBP running Catalina on APFS, f_bsize is 1 MB while f_frsize is 4 KB (the actual fragment size). Since we use f_bsize in the capacity calculation, the free-space checks Nix does on Darwin are off by a factor of 256 (1 MB / 4 KB), and since we had max-free configured, the machine was always considered to have ample space.

The gc-auto.sh test can't catch this because it stubs out the reported filesystem capacity, and I don't think there's any reasonable way for us to catch this in a test.

I don't think there's anything wrong with using f_frsize on both platforms, since the two fields appear to always be equal on Linux (see StackOverflow).

@edolstra edolstra merged commit 02c5914 into NixOS:master May 6, 2020
2 checks passed
tests (ubuntu-18.04)
tests (macos)