
Switch default pool from LVM to BTRFS-Reflink #6476

Open
DemiMarie opened this issue Mar 22, 2021 · 60 comments
Labels
C: storage · P: default · T: enhancement

Comments

@DemiMarie

DemiMarie commented Mar 22, 2021

The problem you're addressing (if any)

In R4.0, the default install uses LVM thin pools. However, LVM appears to be optimized for servers, which results in several shortcomings:

  • Space exhaustion is handled poorly, requiring manual recovery. This recovery may sometimes fail.
  • It is not possible to shrink a thin pool.
  • Thin pools slow down system startup and shutdown.

Additionally, LVM thin pools do not support checksums. Checksumming can be added via dm-integrity, but dm-integrity does not support TRIM.
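
For context, a minimal way to check how close a thin pool is to exhaustion from dom0 looks something like this (the volume group and pool names are assumptions, not taken from this issue):

    # Show how full the thin pool's data and metadata are (names illustrative)
    sudo lvs -o lv_name,data_percent,metadata_percent qubes_dom0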

Describe the solution you'd like

I propose that R4.3 use BTRFS+reflinks by default. This is a proposal ― it is by no means finalized.

Where is the value to a user, and who might that user be?

BTRFS has checksums by default, and has full support for TRIM. It is also possible to shrink a BTRFS pool without a full backup+restore. BTRFS does not slow down system startup and shutdown, and does not corrupt data if metadata space is exhausted.

When combined with LUKS, BTRFS checksumming provides authentication: it is not possible to tamper with the on-disk data (except by rolling back to a previous version) without invalidating the checksum. Therefore, this is a first step towards untrusted storage domains. Furthermore, BTRFS is the default in Fedora 33 and openSUSE.

Finally, with BTRFS, VM images are just ordinary disk files, and the storage pool is the same as the dom0 filesystem. This means that issues like #6297 are impossible.

Describe alternatives you've considered

None that are currently practical. bcachefs and ZFS are long-term potential alternatives, but the latter would need to be distributed as source and the former is not production-ready yet.

Additional context

I have had to recover manually from LVM thin pool problems (failure to activate, IIRC) on more than one occasion. Additionally, the only supported interface to LVM is the CLI, which is rather clumsy. The LVM pool driver requires nearly twice as much code as the BTRFS pool driver, for example.

Relevant documentation you've consulted

man lvm

Related, non-duplicate issues

#5053
#6297
#6184
#3244 (really a kernel bug)
#5826
#3230 ― since reflink files are ordinary disk files we could just rename them without needing a copy
#3964
everything in https://github.com/QubesOS/qubes-issues/search?q=lvm+thin+pool&state=open&type=issues

Most recent benchmarks: #6476 (comment)

@DemiMarie added the T: enhancement and P: default labels Mar 22, 2021
@iamahuman

It might be a good idea to compare performance (seq read, rand read, allocation, overwrite, discard) between the three backends. See: #3639
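
For example, a rough comparison run of the kind suggested above could use fio against a scratch volume attached to a test VM (the device path and sizes are placeholders, not an agreed benchmark setup):

    # Sequential read, random read, random write; --direct=1 bypasses the page cache
    fio --name=seqread   --filename=/dev/xvdi --direct=1 --rw=read      --bs=1M --size=4G --runtime=60
    fio --name=randread  --filename=/dev/xvdi --direct=1 --rw=randread  --bs=4k --size=4G --runtime=60
    fio --name=randwrite --filename=/dev/xvdi --direct=1 --rw=randwrite --bs=4k --size=4G --runtime=60
    # Discard behaviour can be timed separately, e.g. with blkdiscard or fstrim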

@GWeck

GWeck commented Apr 15, 2021

With regard to VM boot time, the LVM storage pool was slightly faster than BTRFS, but this may still be within the margin of error (LVM: 7.43 s versus BTRFS: 8.15 s for starting a debian-10-minimal VM).

@DemiMarie changed the title from "R4.1: switch default pool from LVM to BTRFS-Reflink" to "[RFC] R4.1: switch default pool from LVM to BTRFS-Reflink" Apr 15, 2021
@DemiMarie
Author

Marking as RFC because this is by no means finalized.

@tlaurion
Contributor

@DemiMarie following that comment, I'm posting my deconstructed thoughts here.

No problem with QubesOS searching for the best filesystem to switch to for the 4.1 release, and questioning the partition scheme, but I'm a bit lost on the direction of QubesOS 4.1 and the goals here (stability? performance? backups? portability? security?).

I was kind of against giving dom0 a separate LVM pool because of the space constraints resulting from the change, but I agreed and accepted that pool metadata exhaustion was a real, tangible issue that had hit me often before, for which recovery is sketchy and still not correctly advertised in the widget for users who simply upgrade and get hit by it.

The fix in new installs resolved the issue, once QubesOS decided to split the dom0 pool out of the main pool, so fixing pool issues on the system would be easier for the end user, or not needed at all.

I am just not so sure why switching filesystems is on point now, when LVM thin provisioning seems to fit the goal, but I'm willing to hear more about the advantages.

I am interested in the reasoning for such a switch, and the probability of it happening, since I am really interested in pushing wyng-backups farther, inside/outside of Heads and inside/outside of QubesOS, and in grant/self-funding the work so that QubesOS metadata would be included in wyng-backups, permitting restore/verification/fresh deployment/revert from a local (OEM recovery VM) or remote source, just applying diffs where required from an SSH remote read-only mountpoint.

This filesystem choice seems less relevant than how those changes affect the dom0 LVM, which should be kept out of the domU pool so that dm-verity can be set up under Heads/Safeboot. But this is irrelevant to this ticket.

@DemiMarie
Author

I am just not so sure why switching filesystems is on point now, when LVM thin provisioning seems to fit the goal, but I'm willing to hear more about the advantages.

The advantages are listed above. In short, a BTRFS pool is more flexible, and it offers possibilities (such as whole-system snapshots) that I do not believe are possible with LVM thin provisioning. BTRFS also offers flexible quotas, and can always recover from out of space conditions provided that a small amount of additional storage (such as a spare partition set aside for the purpose) is available. Furthermore, BTRFS checksumming and scrubbing appear to be useful. Finally, new storage can be added to and removed from a BTRFS pool at any time, and the pool can be shrunk as well.
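
As a minimal sketch of that flexibility (assuming the pool data lives under /var/lib/qubes on a Btrfs filesystem; device names, paths, and sizes are illustrative):

    # Grow the pool by adding a spare partition, or remove a device again later
    sudo btrfs device add /dev/sdb1 /var/lib/qubes
    sudo btrfs device remove /dev/sdb1 /var/lib/qubes
    # Shrink (or grow) the filesystem online
    sudo btrfs filesystem resize -10G /var/lib/qubes
    # Optional flexible quotas per subvolume
    sudo btrfs quota enable /var/lib/qubes
    sudo btrfs qgroup limit 50G /var/lib/qubes/some-subvolume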

BTRFS also has disadvantages: its throughput is worse than LVM's, and there are reports of poor performance on I/O-heavy workloads such as QubesOS. Benchmarks and user feedback will be needed to determine which is better, which is why this is an RFC.

I am interested in the reasoning for such a switch, and the probability of it happening, since I am really interested in pushing wyng-backups farther, inside/outside of Heads and inside/outside of QubesOS, and in grant/self-funding the work so that QubesOS metadata would be included in wyng-backups, permitting restore/verification/fresh deployment/revert from a local (OEM recovery VM) or remote source, just applying diffs where required from an SSH remote read-only mountpoint.

I believe that btrfs send and btrfs receive offer the same functionality as wyng-backups, but am not certain, as I have never used either. As for the probability: this is currently only a proposal, and I am honestly not sure if switching this close to the R4.1 release date is a good idea. In any case, LVM will continue to be fully supported ― this just flips the default in the installer.
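
For reference, the btrfs send/receive workflow looks roughly like this (a sketch only; it assumes the VM images live in a Btrfs subvolume at /var/lib/qubes, and the snapshot names and backup host are made up):

    # Take a read-only snapshot, then replicate it to another Btrfs filesystem
    sudo btrfs subvolume snapshot -r /var/lib/qubes /var/lib/qubes/.snap-1
    sudo btrfs send /var/lib/qubes/.snap-1 | ssh backuphost 'btrfs receive /backups/qubes'
    # Later runs only send the delta relative to a common parent snapshot
    sudo btrfs subvolume snapshot -r /var/lib/qubes /var/lib/qubes/.snap-2
    sudo btrfs send -p /var/lib/qubes/.snap-1 /var/lib/qubes/.snap-2 | ssh backuphost 'btrfs receive /backups/qubes'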

@tasket

tasket commented Apr 18, 2021

@DemiMarie There are many questions swirling around advanced storage on Linux, but I think the main ones applicable here are about reliability and performance. Btrfs and Thin LVM appear to offer trade-offs on those qualities, and I don't think it's necessarily a good move to switch the Qubes default to a slower storage scheme at this point; storage speed is critical for Qubes' usability, and large disk image files with random write patterns are Btrfs' weakest point.

Running out of space is probably Thin LVM's weakest point, although this can be pretty easily avoided. For one, dom0 root is moving to a dedicated pool in R4.1, which will keep admin working in most situations. Adding more protections to the domU pool can also be done with some pretty simple userland code. (For those who are skeptical, note that this is the general approach taken by Stratis.)

The above-mentioned Btrfs checksums are a nice-to-have feature against accidental damage, but they unfortunately do not come close to providing authentication. To my knowledge, no CRC mode can do that even if it's encrypted. Any attacker able to induce some calculated change in an encrypted volume would probably find the malleability of encrypted CRCs to be little or no obstacle. IMHO, the authentication aspect of the proposal is a non-starter. (BTW, it looks like dm-integrity may be able to do this now along with discard support, if its journal mode supports internal tags.)

As for backups, Wyng basically exists because tools like btrfs send are constrained to using the same back end (Btrfs with admin privileges) which severely narrows the user's options for backup destinations. Wyng can also be adapted to any storage source that can create snapshots and report their deltas (Btrfs included).

The storage field also continues to evolve in interesting ways: Red Hat is creating Stratis while hardware manufacturers implemented NVMe objects and enhanced parallelism. Stratis appears to be based on none other than Thin LVM's main components (dm-thin, etc) in addition to dm-integrity, with XFS on top; all the layers are tied together to respond cohesively from a single management interface. This is being developed to avoid Btrfs maintenance and performance pitfalls.

I think some examination of Btrfs development culture may also be in order, as it has driven Red Hat to exasperation and a decision to drop Btrfs. I'm not sure just what it is about accepting Btrfs patches that presents a problem, but it makes me concerned that too much trust has been eroded and that Btrfs may become a casualty in 'storage wars' between an IBM / Red Hat camp and what I'd call an Oracle-centric camp.


FWIW, I was one of the first users to show how Qubes could take advantage of Btrfs reflinks for cloning and to request specific reflink support. Back in 2014, it was easy to assume Btrfs shortcomings would be addressed fairly soon, since those issues were so obvious. Yet they are still unresolved today.

My advice at this point is to wait and see – and experiment. There is an unfortunate dearth of comparison tests configured in a way that makes sense; they usually compare Btrfs to bare Ext4, for example, and almost always overlook LVM thin pools. So it's mostly apples vs oranges. However, what little benchmarking I've seen of thin LVM suggests a performance advantage vs Btrfs that would be too large to ignore. There are also Btrfs modes of use we should explore, such as any performance gain from disabling CoW on disk images; if this were deemed desirable then the Qubes Btrfs driver would have to be refactored to use subvolume snapshots instead of reflinks. An XFS reflink comparison on Qubes would also be very interesting!

@DemiMarie
Author

@DemiMarie There are many questions swirling around advanced storage on Linux, but I think the main ones applicable here are about reliability and performance. Btrfs and Thin LVM appear to offer trade-offs on those qualities, and I don't think it's necessarily a good move to switch the Qubes default to a slower storage scheme at this point; storage speed is critical for Qubes' usability, and large disk image files with random write patterns are Btrfs' weakest point.

In retrospect, I agree. That said (as you yourself mention below) XFS also supports reflinks and lacks this problem.

Running out of space is probably Thin LVM's weakest point, although this can be pretty easily avoided. For one, dom0 root is moving to a dedicated pool in R4.1, which will keep admin working in most situations. Adding more protections to the domU pool can also be done with some pretty simple userland code. (For those who are skeptical, note that this is the general approach taken by Stratis.)

Will it be possible to reserve space for use by discards? A user needs to be able to free up space even if they make a mistake and let the pool fill up.

The above-mentioned Btrfs checksums are a nice-to-have feature against accidental damage, but they unfortunately do not come close to providing authentication. To my knowledge, no CRC mode can do that even if it's encrypted. Any attacker able to induce some calculated change in an encrypted volume would probably find the malleability of encrypted CRCs to be little or no obstacle. IMHO, the authentication aspect of the proposal is a non-starter. (BTW, it looks like dm-integrity may be able to do this now along with discard support, if its journal mode supports internal tags.)

The way XTS works is that any change (by an attacker who does not have the key) will completely scramble a 128-bit block; my understanding is that a CRC32 with a scrambled block will only pass with probability 2⁻³². That said, BTRFS also supports Blake2b and SHA256, which would be better choices.
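
The checksum algorithm is chosen at mkfs time, so a sketch of the stronger-hash setup would look like this (the device path is a placeholder):

    # Format with BLAKE2b data/metadata checksums instead of the default CRC32C
    sudo mkfs.btrfs --csum blake2 /dev/mapper/luks-vmpool
    # Confirm what an existing filesystem uses
    sudo btrfs inspect-internal dump-super /dev/mapper/luks-vmpool | grep csum_type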

As for backups, Wyng basically exists because tools like btrfs send are constrained to using the same back end (Btrfs with admin privileges) which severely narrows the user's options for backup destinations. Wyng can also be adapted to any storage source that can create snapshots and report their deltas (Btrfs included).

Good to know, thanks!

The storage field also continues to evolve in interesting ways: Red Hat is creating Stratis while hardware manufacturers implemented NVMe objects and enhanced parallelism. Stratis appears to be based on none other than Thin LVM's main components (dm-thin, etc) in addition to dm-integrity, with XFS on top; all the layers are tied together to respond cohesively from a single management interface. This is being developed to avoid Btrfs maintenance and performance pitfalls.

I think some examination of Btrfs development culture may also be in order, as it has driven Red Hat to exasperation and a decision to drop Btrfs. I'm not sure just what it is about accepting Btrfs patches that presents a problem, but it makes me concerned that too much trust has been eroded and that Btrfs may become a casualty in 'storage wars' between an IBM / Red Hat camp and what I'd call an Oracle-centric camp.

My understanding (which admittedly comes from a comment on Y Combinator) is that BTRFS moves too fast to be used in RHEL. RHEL is stuck on one kernel for an entire release, and rebasing BTRFS every release became too difficult, especially since Red Hat has no BTRFS developers.

FWIW, I was one of the first users to show how Qubes could take advantage of Btrfs reflinks for cloning and to request specific reflink support. Back in 2014, it was easy to assume Btrfs shortcomings would be addressed fairly soon, since those issues were so obvious. Yet they are still unresolved today.


My advice at this point is to wait and see – and experiment. There is an unfortunate dearth of comparison tests configured in a way that makes sense; they usually compare Btrfs to bare Ext4, for example, and almost always overlook LVM thin pools. So it's mostly apples vs oranges. However, what little benchmarking I've seen of thin LVM suggests a performance advantage vs Btrfs that would be too large to ignore. There are also Btrfs modes of use we should explore, such as any performance gain from disabling CoW on disk images; if this were deemed desirable then the Qubes Btrfs driver would have to be refactored to use subvolume snapshots instead of reflinks. An XFS reflink comparison on Qubes would also be very interesting!

That it would be, especially when combined with Stratis. The other major problem with LVM2 (and possibly dm-thin) seems to be snapshot and discard speeds; I expect XFS reflinks to mitigate most of those problems.

@tasket

tasket commented Apr 18, 2021

Ah, new Btrfs feature... Great! I'd consider enabling one of its hashing modes as being able to support authentication.

I'd still consider the Stratis concept to be more interesting for now, as Qubes' current volume management is pretty similar but potentially even better and simpler due to having a privileged VM environment.

@DemiMarie
Author

Ah, new Btrfs feature... Great! I'd consider enabling one of its hashing modes as being able to support authentication.

Agreed. While I am not aware of any way to tamper with a LUKS partition without invalidating a CRC, Blake2b is by far the better choice.

I'd still consider the Stratis concept to be more interesting for now, as Qubes' current volume management is pretty similar but potentially even better and simpler due to having a privileged VM environment.

I agree, with one caveat: my understanding is that LUKS/AES-XTS-512 + BTRFS/Blake2b-256 is sufficient to protect against even malicious block devices, whereas dm-integrity is not. dm-integrity is vulnerable to a partial rollback attack: it is possible to roll back parts of the disk without dm-integrity detecting it. Therefore, dm-integrity is not (currently) sufficient for use with untrusted storage domains, which is a future goal of QubesOS.
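
For what it's worth, the LUKS side of that combination corresponds to something like the following (a sketch with a placeholder device; LUKS2 with AES-XTS and a 512-bit key, i.e. two 256-bit halves):

    sudo cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 /dev/sdb2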

@DemiMarie
Author

@tasket: what are your thoughts on using loop devices? That’s my biggest worry regarding XFS+reflinks, which seems to otherwise be a very good choice for QubesOS. Other approaches exist, of course; for instance, we could modify blkback to handle regular files as well as block devices.

@0spinboson

0spinboson commented Apr 20, 2021

I really wish the FS's name wasn't a misogynistic slur. That aside, my only experience with it, under 4.0, was that my Qubes installation became unbootable, and I found it very difficult to fix relative to a system built on LVM. That does strike me as relevant to the question of whether Qubes should switch, and IMO it is only partly addressable by improving the documentation (since the other part is the software we have to use to restore).

@DemiMarie
Author

FS's name wasn't a misogynistic slur

@0spinboson would you mind clarifying which filesystem you are referring to?

@tasket

tasket commented Apr 20, 2021

Will it be possible to reserve space for use by discards? A user needs to be able to free up space even if they make a mistake and let the pool fill up.

Yes, it's simple to allocate some space in a pool using a non-zero thin lv. Just reserve the lv name in the system, make it inactive, and check that it exists on startup.

Further, it would be easy to use existing space-monitoring components to also pause any VMs associated with a nearly-full pool and then show an alert dialog to the user.
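
A minimal sketch of that reservation idea, with made-up VG/pool/LV names (the dd step makes the thin LV non-zero so it actually consumes pool space):

    sudo lvcreate -V 2G -T qubes_dom0/vm-pool -n emergency-reserve
    sudo dd if=/dev/urandom of=/dev/qubes_dom0/emergency-reserve bs=1M count=2048
    sudo lvchange -an qubes_dom0/emergency-reserve    # keep it inactive
    sudo lvs qubes_dom0/emergency-reserve             # startup check that it still exists
    # If the pool ever fills up, remove it to regain breathing room:
    # sudo lvremove qubes_dom0/emergency-reserve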

it is possible to roll back parts of the disk without dm-integrity detecting it.

I thought the journal mode would prevent that? I don't know it in detail, but something like a hash of the hashes of the last changed blocks, computed with the prior journal entry, would have to be in each journal entry.

what are your thoughts on using loop devices? That’s my biggest worry regarding XFS+reflinks

I forgot they were a factor... it's been so long since I've used Qubes in a file-backed mode. But this should be the same for Btrfs, I think.

FWIW, the XFS reflink suggestion was more speculative, along the lines of "What if we benchmark it for accessing disk images and it's almost as fast as thin LVM?". The regular XFS vs Ext4 benchmarks I'm seeing suggest it might be possible. It's also not aligned with the Stratis concept, as that is closer to thin LVM with XFS just providing the top layer. (Obviously we can't use Stratis itself unless it supports a mode that accounts for the top layer being controlled by domUs.)

Also FWIW: XFS historically supported a 'subvolume' feature for accessing disk image files instead of loopdevs. It requires that certain I/O scheduler conditions are met before it can be enabled.

@0spinboson

FS's name wasn't a misogynistic slur

@0spinboson would you mind clarifying which filesystem you are referring to?

'Butterface' was intentional, AFAIK.

@Rudd-O

Rudd-O commented Oct 26, 2021

No, it was not. The file system is named btrfs because it means B-tree FS. That the name is often pronounced with a hilarious word may or may not be seen as a pun, but that is in the eye of the beholder.

@dmoerner

Basic question: If I install R4.1 with BTRFS by selecting custom, and then using Anaconda to automatically create the Qubes partitions with BTRFS, is that sufficient for the default pool to use BTRFS-Reflink? Or do I have to do something extra for the "Reflink" part?

@rustybird

If I install R4.1 with BTRFS by selecting custom, and then using Anaconda to automatically create the Qubes partitions with BTRFS, is that sufficient for the default pool to use BTRFS-Reflink?

Yes

@noskb

noskb commented Nov 29, 2021

@DemiMarie modified the milestones: Release 4.1 → Release TBD Dec 23, 2021
@DemiMarie
Author

FWIW, fstrim should generally be avoided. Its default discarding pattern is aggressive and not used by the filesystems themselves in discard mode for a reason, as it creates higher fragmentation and greater demand on metadata resources. In my experience, fstrim use can also contribute to or trigger a TLVM failure.

What do you recommend instead? Should the filesystem be mounted with the discard option? Not discarding at all is not an option because it causes space leaks.

@tasket

tasket commented Aug 24, 2023

BTW, even when you want to run fstrim manually for some maintenance objective and you reduce the granularity with --minimum (which does help), it is still hitting metadata resources with new deltas in a very short period of time, and the volume(s) might not be small. Compare that to snapshot rotation, where deltas accumulate via routine user app use or system updates, and the sudden changes are mainly limited to removing deltas (freeing metadata space) when snapshots are removed.

All of the CoW/snapshot capable systems have issues with surges in metadata use. NTFS is famous for gradually degrading in performance and becoming unbearably slow for minutes or even days. Btrfs is a bit closer to NTFS in this, and I wonder if they have made a better trade-off for uptime vs performance. ZFS is reported to degrade as the number of snapshots increase. However, TLVM's issues haven't been tempered like these other systems; one gets strange inconsistencies that have to be diagnosed before applying a course of disjointed home remedies.

What do you recommend instead? Should the filesystem be mounted with the discard option?

@DemiMarie Definitely use the mount option, which I think is standard practice now. The mount option helps moderate fragmentation resulting from discards. I think fstrim should remain something that is only invoked manually when an admin wants to address a special case (such as having forgotten to mount with 'discard').
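
For a Btrfs root this would look roughly like the following /etc/fstab line (the UUID is a placeholder; discard=async requires a reasonably recent kernel):

    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  defaults,noatime,discard=async  0 0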

Some recommendations:

The Btrfs 'nodesize' setting affects contention and thus latency for metadata-intensive operations, which our fragmented VM image files require in abundance. It is reasonable to assume the 16KB default is not optimized for active disk image files. Therefore, I suggest testing Qubes usage with 'nodesize' set to a smaller value like 8KB or 4KB. This can be taken together with other optimizations:

  • Btrfs 'no-holes' and 'skinny-metadata' options
  • 4KB guest fs block size
  • Use F2FS or XFS for guest private vols, both of which have considerably lower write amplification rates vs EXT4
  • For guest root fs, compression should be considered

Some experimentation could be done beyond that, however. To reduce the level of data fragmentation resulting from random writes (not just discards), ideally we would want to stake out a middle ground between the 4KB minimum extent size that Btrfs normally uses and the 64KB (but usually >256KB) minimums that TLVM uses. Also, active defragmentation and deduplication (the two often having opposite effects) should not be completely discounted; periodically running them both with extent-size thresholds that complement each other could improve overall efficiency and responsiveness.
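
A hypothetical way to try those suggestions out (the device path and mount point are placeholders; no-holes and skinny-metadata are already the default in recent btrfs-progs, and the defragment threshold is just an example value):

    # Smaller metadata nodes plus the features mentioned above
    sudo mkfs.btrfs --nodesize 4k -O no-holes,skinny-metadata /dev/mapper/luks-vmpool
    # Periodic defragmentation limited to small extents
    sudo btrfs filesystem defragment -r -t 256K /var/lib/qubes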

@Rudd-O

Rudd-O commented Aug 25, 2023

Would love to see a TLVM / btrfs / ZFS (ashift 12) performance comparison now that we have the three drivers in upstream.

@tlaurion

This comment was marked as outdated.

@DemiMarie
Author

@tlaurion links 2 and 3 are 404, and links 4, 6, and 8 are completely irrelevant. Also ZFS is not experimental; there are plenty of production workloads (like Let’s Encrypt’s main databases!) that use it.

@tlaurion
Contributor

tlaurion commented Aug 26, 2023

Also ZFS is not experimental; there are plenty of production workloads (like Let’s Encrypt’s main databases!) that use it.

@DemiMarie ashift and alignment to disk blocks are interesting properties there, as is dodging the complexity of LUKS-enclosed BTRFS or the current LUKS->TLVM->volumes complexity and failure modes. I'm not an FS expert, but I have suffered from the TLVM Qubes defaults in the past and am really looking forward to a switch to whatever is better. TLVM is known to fail the user; it's not a question of how, which is answered everywhere, but when. Your praise for BTRFS was borne out by community testing, your call was heard by other projects, and now the question is where we go from here and what will be proposed to the user next: TLVM still in 4.3, or ZFS/BTRFS?

Only testing under different use-case scenarios will answer that question; the Let's Encrypt database scenario doesn't correspond to Xen virtualization servers, which our use case corresponds to more closely. **I wish we had some Xen infra admins of that sort chiming in to brain-dump what failed and what worked for them. That input would be significant.** Other input is neither internally valid nor generalizable to QubesOS, unfortunately, which is basically all the internet can answer outside of direct QubesOS use-case testing today.

@andrewdavidwong
Member

@tlaurion: FWIW, I don't think it's very useful to post large quantities of ChatGPT output in qubes-issues, unless you're willing to say, at minimum, that you've personally vetted the output and agree with it.

@DemiMarie
Author

@tlaurion: FWIW, I don't think it's very useful to post large quantities of ChatGPT output in qubes-issues, unless you're willing to say, at minimum, that you've personally vetted the output and agree with it.

Exactly. Such vetting clearly was not done here, as shown by the broken and irrelevant links.

@Rudd-O

Rudd-O commented Aug 29, 2023

Seconding Demi here. ZFS is not experimental — the only reason it's not included in mainline is that it's not GPL, not that it's "waiting for the bugs to shake out". Even though it's not in mainline, some distributors have been known to distribute ZFS in compliance with all licensing agreements (well, as compliant as it can be when the current practice is that distributing non-GPL code or derivatives is not okay, but loading non-GPL code into the kernel is a long-standing practice that has given rise to no legal claims for decades).

@no-usernames-left

Chiming in here to say that, IMHO, we should be aiming for ZFS instead of Btrfs, "Linux's perpetually half-finished filesystem".

Not only would this add robust data integrity guarantees (data+metadata checksumming as well as always being consistent on disk), it would make snapshots and backup/restore much easier and cleaner.

@tasket

tasket commented Apr 13, 2024

I usually avoid ZFS for the same reasons Linus Torvalds does; I simply don't trust Oracle and large corporations are getting bolder with their open source IP rug-pulls (see Red Hat, not to mention Oracle's past attempt with Java APIs).

For people who do trust Oracle, I remain open-minded about the possibility of supporting OpenZFS in Wyng backup. But I also don't see a clear cut way to obtain image file metadata there; in that case Wyng only performs as well as a typical incremental backup. Maybe someone could enlighten me, but AFAICT ZFS is not currently providing features that would enable efficient metadata-driven backups to non-ZFS filesystems.


The idea Btrfs is "perpetually unfinished" doesn't ring true; the basic extent and file allocation format has been described as finished by the Btrfs devs, while they also state upfront that changes are still being made. None of the popular OS-running filesystems have been finalized – see XFS as a particularly old example where significant features are still being added.

The Btrfs format has some amazing qualities, for example the ability to describe a volume (or sub-volume) in a way that can replicate deduplication/reflinks of file data on a different Btrfs filesystem (e.g. btrfs-send knows what data is shared by reflinked files). The reason why is that everything a file/inode directly references in Btrfs is considered a logical entity, so it's easy to transfer while retaining its original identity (and when files reference them more than once, it follows that it is easy to replicate as well). Very promising stuff, IMO.

Finally, versus thin-lvm, Btrfs has been a quantum leap in reliability for me; I've used and provided support for both for as long as each has had a Qubes driver (longer than that for Btrfs) and there is no question in my mind about the difference.

@no-usernames-left

I simply don't trust Oracle and large corporations are getting bolder with their open source IP rug-pulls (see Red Hat, not to mention Oracle's past attempt with Java APIs).

I don't trust Oracle either, but the fact is that OpenZFS was forked from before Oracle acquired Sun and closed the source; they have no claim to OpenZFS and therefore no trust in Oracle is required.

And what happened with CentOS? It was immediately forked when Red Hat pulled the rug, with no significant loss other than a change of name.

The fact that there is no such thing as fsck.zfs is itself huge; being always consistent on-disk is a total gamechanger.

@rustybird

One exciting feature coming up in Btrfs is fscrypt support ("file-based encryption"). The reason I find it exciting in a Qubes OS context is because they're making it extent-based, i.e. not just individual files but individual fragments of files can have their own encryption keys. Reading the tea leaves on their mailing list, I get the impression that this design will - eventually, but probably not from day one - allow reflinking an unencrypted (or differently encrypted) source file into a destination file where subsequent writes will then be encrypted with a new key. Which would be exactly what is needed to support ephemeral snap_on_start volumes, solving #904 (comment).

@tasket

tasket commented Apr 13, 2024

I think the facts show that the fate of OpenJDK and derivatives (what most mobile devices rely on) was left to the US courts – which are now very much in a mood to issue new rulings on supposedly settled case law. The whole point of Oracle's case was that forked and reverse-engineered projects could come under their control.

And CentOS was effectively disbanded. The replacements don't have access to RH "proprietary" fixes and modifications that they distribute to RHEL customers (a "new fact" about software that the legal world seems comfortable with). If corporations are ready to attempt de-facto nullification of the GPL then licensing is a top concern.

@no-usernames-left

If corporations are ready to attempt de-facto nullification of the GPL then licensing is a top concern.

OpenZFS, somewhat infamously, is not GPL.

Another point in favour of OpenZFS is the ease of backing up with zfs send (and, thus, restoration with zfs recv).
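
Roughly, that workflow is (pool/dataset names and the backup host are invented for illustration):

    sudo zfs snapshot rpool/qubes@monday
    sudo zfs send rpool/qubes@monday | ssh backuphost 'zfs recv backup/qubes'
    # Subsequent backups only transfer the incremental delta
    sudo zfs snapshot rpool/qubes@tuesday
    sudo zfs send -i rpool/qubes@monday rpool/qubes@tuesday | ssh backuphost 'zfs recv backup/qubes'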

@tlaurion
Contributor

tlaurion commented Apr 13, 2024

From #6476 (comment) :

The previously mentioned tests (the lower time the better):

Both tests were on the same real laptop, and same software stack
(besides the partitioning). Furthermore, in another run, fstrim / timed
out on btrfs after 2min (that was after installing updates in all
templates, so there probably was quite a bit to trim, but still, that's
a ~200GB SSD, so not that big and not that slow). Seeing these results,
I've rerun it several times, but got similar results.

Different test, much less heavy on I/O:

So, at least not a huge regression, but not a significant improvement
either.

It is also worth noting that any benchmarking that happened prior to the 6.1+ kernel releases might need to be redone, since the I/O bottleneck that affected earlier kernels has now vanished; it penalized btrfs versus LVM, whose improvements are stalling.

Those were running on kernel-latest at that time, so at least 6.3.x. But even then, if only the very latest kernel version works fine, that's not enough to switch the default. The LTS version needs to work well for that.
In any case, it's way too late for this for R4.2. We may re-test, and re-consider for R4.3.

I would love to focus on the test results referred to above (and the options/versions tested) as the basis for not going forward with BTRFS.

@DemiMarie :
Maybe the OP should refer to those test results and kernel versions, to state clearly what could cause the discrepancies with the QubesOS forum posts showing better performance simply by switching from thin LVM to btrfs?

@tasket suggested some filesystem-option optimizations to boost btrfs performance for a better comparison under QubesOS.

@Rudd-O made really clear the performance and stability gains of OpenZFS versus TLVM and btrfs, where licensing is still an issue, even though Ubuntu dodged the problem altogether a while ago and never got sued. But Fedora won't follow.

My past attempts here were to recap the current state of btrfs improvements. This thread has stated possible causes of performance degradation of TLVM versus btrfs for large volumes, snapshot rotation, trimming, etc.

It's currently hard to grasp the state of things and how to move forward to make them better. It's unclear what has been fixed, what is still a problem, and what can't be fixed by tweaking.

The reason I insist on this is that downstream projects insist on cloning templates and diverging through Salt recipes, which TLVM cannot support efficiently since no dedup is possible. Btrfs supports offline dedup, which bees can accomplish if deployed, so that space consumption is greatly reduced, backups can be restored without filling disks, and compression reduces operation times and boot read times, offering performance gains that cannot be denied. OpenZFS would fit the need even better by compressing and offering online deduplication, reducing space consumption and extending the life of SSD drives (it has been claimed again and again that QubesOS shortens drive life compared to other OSes), but at the expense of higher memory usage in dom0, since it needs to keep track of all blocks to dedup live. Bees does it more efficiently, but after the fact, meaning drive wear will not be optimized, while drive consumption and speed would be improved.

Users want faster VM boot/shutdown times, proper backup/restore mechanisms (speed, and space requirements that don't explode upon restore), hardware longevity, and battery efficiency.

OpenZFS is known to be a RAM hog for live dedup, but results in more stable, more efficient I/O, reduced disk consumption, and faster snapshots. TLVM's overhead is known, large-disk qubes show that TLVM loses to btrfs, and OpenZFS shines in every aspect but licence. Where to go next, and how?

What is the current state of the art with current kernel versions, and where/how do we move this forward?

From what I gather, proper btrfs filesystem-creation and runtime options need to be defined before btrfs can be compared correctly against the current TLVM setup. OpenZFS, if I understand the situation well, won't be part of the installer until Fedora makes a move like Ubuntu did; QubesOS won't decide to do what Ubuntu did and take the risk on itself.

So that leaves us with btrfs. How do we show without any doubt that btrfs can be a better candidate than TLVM (which most other OSes have dropped as the default installation deployment nowadays)?

If QubesOS stays with Fedora for dom0, which is not planned to change anytime soon unless I missed something, then how do we reduce SSD wear, improve large-VM shutdown/snapshot-rotation times, and embrace template space deduplication from cloning/diverging/backup without switching to btrfs, while OpenZFS is out of scope?


TL;DR: the discussion should resume from @tasket's suggestions: #6476 (comment)

Once again, if some openQA testing ISO images were produced, testers could be asked to report output on the same hardware with only the filesystem changed at OS install; performance diffs could be requested, and easily provided, by those wanting to move this important subject forward.

I could spare time for reporting, as could others with multiple identical machines, isolating the differences to the filesystem changes and tweaks alone.

@DemiMarie
Author

@DemiMarie :
Maybe the OP should refer to those test results and kernel versions, to state clearly what could cause the discrepancies with the QubesOS forum posts showing better performance simply by switching from thin LVM to btrfs?

Would it be possible to re-run benchmarks now? I’d rather refer to up-to-date benchmarks than ones known to be stale.

@no-usernames-left

OpenZFS shines in every aspect but licence

This is the perfect tl;dr.

Canonical has shown that this is a solved problem. We should do what they did and get on with other pressing issues such as Wayland, seL4, etc.

If that means we dump Fedora, even better; their release cadence is far too quick for dom0 IMHO. Debian would likely be a much better choice... or we could roll our own slim release for dom0, which would result in both a slower release cadence and a slimming of the TCB, both of which would be good.

@no-usernames-left

Would it be possible to re-run benchmarks now?

Excellent idea — but let's include ZFS this time.

@Rudd-O you're probably the best-equipped for this, no?

@no-usernames-left

As an aside, this mess of kernel vs userland vs filesystem, and the clusterfuck which is licensing, makes me appreciate FreeBSD all the more.

@DemiMarie
Author

DemiMarie commented Apr 13, 2024

We should do what they did and get on with other pressing issues such as Wayland, seL4, etc.

Wayland definitely needs to be implemented, and I’m going to be talking about GPU acceleration (which it will enable) at Xen Project Summit 2024. Right now, seL4 doesn’t provide sufficient protection against CPU vulnerabilities, so it isn’t an option.

@tlaurion
Contributor

@DemiMarie :
Maybe the OP should refer to those test results and kernel versions, to state clearly what could cause the discrepancies with the QubesOS forum posts showing better performance simply by switching from thin LVM to btrfs?

Would it be possible to re-run benchmarks now? I’d rather refer to up-to-date benchmarks than ones known to be stale.

@marmarek I guess this question from @DemiMarie was addressed to you, for the openQA tests.

@DemiMarie meanwhile, there is no cost to editing the OP with past results, so everyone getting here is at least clear on the whys, and on the versions for which btrfs was not considered in the past because of bad press, no?

@tlaurion
Contributor

On my side, I postponed building https://github.com/tlaurion/qubes-bees directly from qubes-builder v2 (which didn't work the last time I attempted that goal) and instead built the RPM directly under fedora-37 to produce an installable bees RPM. I will update https://forum.qubes-os.org/t/bees-and-brtfs-deduplication/20526/6 later on with some meat to chew on when I have it.

Testing to see and compare gains of deduped space with/without bees, as a start.

For those interested in testing dedup gains on test machines only, here is the produced RPM (gzipped because of GitHub constraints): bees-0.10-1.fc37.x86_64.rpm.gz

@tlaurion
Contributor

Some preliminary, not-so-convincing bees results on an x230: https://forum.qubes-os.org/t/bees-and-brtfs-deduplication/20526/10

@tlaurion
Contributor

tlaurion commented Apr 15, 2024

The previously mentioned tests (the lower time the better):

Both tests were on the same real laptop, and same software stack
(besides the partitioning). Furthermore, in another run, fstrim / timed
out on btrfs after 2min (that was after installing updates in all
templates, so there probably was quite a bit to trim, but still, that's
a ~200GB SSD, so not that big and not that slow). Seeing these results,
I've rerun it several times, but got similar results.

Different test, much less heavy on I/O:

So, at least not a huge regression, but not a significant improvement
either.

It is also worth noting that any benchmarking that happened prior to the 6.1+ kernel releases might need to be redone, since the I/O bottleneck that affected earlier kernels has now vanished; it penalized btrfs versus LVM, whose improvements are stalling.

Those were running on kernel-latest at that time, so at least 6.3.x. But even then, if only the very latest kernel version works fine, that's not enough to switch the default. The LTS version needs to work well for that.
In any case, it's way too late for this for R4.2. We may re-test, and re-consider for R4.3.

One really important question was not asked: this was real-hardware testing, so what hardware (and related HCL entry) were those results obtained on? @marmarek can you update that comment? @DemiMarie updated the OP comment pointing to that text report. It would be nice to understand once and for all what has changed since then, so we can replicate the results and understand the bottlenecks, if any are present and explain something.

As stated in my last comment after testing bees on old hardware: on old hardware, the RAM <> PCI <> SSD path never bottlenecks on the SSD. Recent experiments under Heads with newer cryptsetup and dmsetup AND kernel changed a lot. So instead of chasing the white rabbit forever, we kind of need to know what was tested before planning any retesting, and to understand what is wrong even in current and past default configs, to understand what went wrong and where.

I can only state again that on old hardware there is a direct and massive improvement just by switching from TLVM to BTRFS, but people prefer the encouraged defaults. And if those defaults are wrong in either case and the difference is not perceived on newer hardware... what exactly are we testing, and what improvements are users supposed to experience?

  • Was fstrim called upon snapshot rotation in those tests? Not anymore?
  • Were discards enabled in fstab after those tests were made?
  • What were the LUKS/LVM configs passed down back then? Have they changed?
  • cryptsetup/kernel/lvm/dmsetup now use an async backend by default. Are the defaults optimized?

If changes benefit some hardware at the cost of others, we need to know.

An example of such an improvement that has made its way upstream and downstream is the kernel I/O queues being bypassed for read and write ops, from LUKS to LVM to kernel to SSD, so that less I/O overhead is added at each op: https://blog.cloudflare.com/speeding-up-linux-disk-encryption/
Those changes landed in Linux kernel 5.9+, but LUKS needs to be configured to apply the quirk (see the sketch below).
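
A sketch of applying that quirk to an already-open LUKS mapping (the device-mapper name is a placeholder; needs cryptsetup 2.3.4 or newer):

    # Disable the dm-crypt read/write workqueues and persist the flags in the LUKS2 header
    sudo cryptsetup refresh --persistent --perf-no_read_workqueue --perf-no_write_workqueue luks-vmpool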

Anyway, I think all of this should happen in the testing section of the forum, with ISOs that apply different default options so willing testers can just run a script and provide the needed output. Otherwise I feel this ticket won't go anywhere without diversified testing tied to HCL data.

@no-usernames-left

Might dedup become unnecessary if layered templates become a thing?
