test failure: linux::tests::test_reflink #43
We can work around this in the PKGBUILD's `check()`:

```bash
check() {
    cd "$pkgname-$pkgver"

    local tmp ref
    tmp=$(mktemp --tmpdir="$PWD")
    cp --reflink=always "$tmp" "$tmp.ref" || ref=no
    rm -f "$tmp" "$tmp.ref"

    if [[ $ref == no ]]; then
        cargo test --release --locked --features test_no_reflink
    else
        cargo test --release --locked
    fi
}
```

but it would be far better to detect this in the test suite, if at all possible.
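The probe at the heart of that `check()` can also be run standalone before deciding which cargo features to enable. A minimal sketch (same approach as above: attempt a reflink copy and see whether it succeeds):

```shell
#!/bin/sh
# Probe whether the filesystem under $PWD supports reflink copies by
# attempting one, mirroring the PKGBUILD workaround above.
tmp=$(mktemp --tmpdir="$PWD")
if cp --reflink=always "$tmp" "$tmp.ref" 2>/dev/null; then
    reflink=yes
else
    reflink=no
fi
rm -f "$tmp" "$tmp.ref"
echo "reflink support: $reflink"
```

On btrfs or XFS (with reflink enabled) this prints `yes`; on ext4, tmpfs, and similar it prints `no`.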
I think limiting package build verification to the baseline features supported by lowest-common-denominator Linux-native filesystems (i.e. ext4) is OK, as you don't know anything about the user's build environment. So you could probably just disable reflink testing and it should work on all reasonable systems. The other option is to disable almost all tests, as you never know if someone is building on ext2 or FAT or whatever.

It's worth noting that we run an extensive test suite against a number of different filesystems in CI, so re-running everything shouldn't be necessary. (There is a related issue where we don't enforce passing these tests before I perform a release; I'm having a think about how to do this while also keeping the release process as simple as possible.)
For the Arch Linux PKGBUILD, instead of dropping tests entirely, it might be better to detect the filesystem.

Looks like CI does what I was thinking, running tests specific to each filesystem. When the enforcement issue is resolved, that would bring peace of mind with respect to data preservation. Would it be beneficial for tests to check the filesystem to decide whether a test is supposed to pass or fail?
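One way to branch on the detected filesystem, sketched with GNU `stat` (the type strings and the mapping to the `test_no_reflink` feature are assumptions; XFS in particular only supports reflinks when formatted with `reflink=1`, so an actual probe as in the workaround above is more reliable than name matching):

```shell
#!/bin/sh
# Report the filesystem type of the build directory and pick a cargo
# feature set accordingly. The reflink-capable list is a best-effort
# guess, not authoritative.
fstype=$(stat -f -c %T .)
case "$fstype" in
    btrfs|xfs) features="" ;;                           # likely reflink-capable
    *)         features="--features test_no_reflink" ;; # assume no reflinks
esac
echo "fstype=$fstype features=$features"
```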
Building xcp 0.16.0 on Arch Linux, on a build server. I believe the filesystem is ext4, so the failure is probably expected. Is there a way to check the filesystem and mark it as an expected failure, allowing the test suite as a whole to be considered a success? Or maybe run the tests in an image file that uses a filesystem that supports the tested features?
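The image-file idea could be sketched roughly like this (assumptions: root access, `btrfs-progs` installed, and being run from the xcp source tree; the script deliberately does nothing when those prerequisites are missing):

```shell
#!/bin/sh
# Sketch: build a small btrfs image, loop-mount it, and point the
# tests' temp directory at it so reflink tests can pass regardless of
# the host filesystem. Prerequisites are checked first because
# mkfs/mount require root.
status=not-run
if [ "$(id -u)" -eq 0 ] && command -v mkfs.btrfs >/dev/null 2>&1 \
   && [ -f Cargo.toml ]; then
    img=$(mktemp)
    truncate -s 1G "$img"
    mkfs.btrfs -q "$img"
    mnt=$(mktemp -d)
    mount -o loop "$img" "$mnt"
    # Run the suite with temp files on the reflink-capable mount.
    TMPDIR="$mnt" cargo test --release --locked && status=ran
    umount "$mnt"
    rm -rf "$mnt" "$img"
else
    echo "prerequisites missing; not mounting an image"
fi
echo "status=$status"
```

Whether this is acceptable in a PKGBUILD `check()` is another question, since package builds generally should not require root or loop devices.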