Request for comments: ZenFS, a RocksDB FileSystem for Zoned Block Devices #6961
Summary: If a composite env is created with Env::Default, there is no way to guarantee that all threads have been joined by the time the composite env is destroyed. If threads started by the composite env hold a reference back to the composite env, we risk a segfault, since the composite env might be destroyed before all threads have been joined upon the destruction of Env::Default. That is, on exit() in the user program, the base env must be destroyed, joining all running threads, before the composite env can be destroyed. To ensure that destruction can be done in the right order, add a new version of NewCompositeEnv which lets the user order a base environment and its composite environment(s) as statics within a single compilation unit. This requires changes to the default POSIX env constructor and destructor to allow for more than one global, static base environment.
Summary: Add the parameter --fs_uri to db_bench, creating a composite env combining the default env with a specified registered rocksdb file system.
Summary: Add the parameter --fs_uri to db_stress, creating a composite env combining the default env with a specified registered rocksdb file system.
Summary: Register the posix file system so it can be loaded with the URI posix://. This is useful when testing custom file system support in e.g. db_bench and db_stress.
Summary: Host-managed zoned block devices enable an application to do smart data placement by making informed decisions on how to place data into the storage media's erase units. This improves system write amplification and/or enables greater media capacity utilization.

ZenFS is a simple file system that implements RocksDB's FileSystem interface to place files into zones on a raw zoned block device. By separating files into zones and utilizing the write lifetime hints to co-locate data of similar lifetimes, the system write amplification is greatly reduced while keeping the ZenFS capacity overhead at a very reasonable level. ZenFS depends on libzbd and Linux kernel 5.4 to do zone management operations.

Some of the ideas and concepts in ZenFS are based on earlier work done by Abutalib Aghayev and Marc Acosta.

Files are mapped into a set of extents:
* Extents are block-aligned, contiguous regions on the block device
* Extents do not span across zones
* A zone may contain more than one extent
* Extents from different files may share zones

Log files and LOCK files are routed to a configurable path in the default file system.

ZenFS is exceptionally lazy in its current state of implementation and does not do any garbage collection whatsoever. As files get deleted, the used-capacity zone counters drop, and when a counter reaches zero the zone can be reset and reused.

Metadata is stored in a rolling log in the first zones of the block device. Each valid metadata zone contains:
* A superblock with the current sequence number and global filesystem metadata
* At least one snapshot of all files in the file system

The metadata format is currently experimental. More extensive testing is needed and support for differential updates is planned before bumping the version to 1.0.
Summary: This adds a file system management tool for ZenFS. It is required for setting up the metadata for the file system. It also allows the user to list files on the file system and can be extended in the future to do other management tasks like offline garbage collection, backups etc.

Examples:

1. Create a ZenFS file system on zoned block device /dev/nullb1 with auxiliary file storage (log files, lock files) under /tmp/zenfs_nullb1 and with a finishing threshold for zones of 5%. If a zone has less than finish_threshold capacity left, no additional extents will be mapped to the zone.

   ./zenfs mkfs --zbd=/dev/nullb1 --aux_path=/tmp/zenfs_nullb1 --finish_threshold=5
   ZenFS file system created. Free space: 31744 MB

2. List files:

   ./zenfs list --zbd=/dev/nullb1 --path=/rocksdbtest/dbbench
   /rocksdbtest/dbbench/LOG
   /rocksdbtest/dbbench/LOCK
   /rocksdbtest/dbbench/000003.log
   /rocksdbtest/dbbench/CURRENT
   /rocksdbtest/dbbench/IDENTITY
   /rocksdbtest/dbbench/MANIFEST-000001
   /rocksdbtest/dbbench/OPTIONS-000005
@yhr Thanks for the work, very interesting. Is this the implementation of the work you introduced on SDC last year? (https://www.snia.org/sites/default/files/SDC/2019/presentations/NVMe/Holmberg_Hans_Accelerating_RocksDB_with_Zoned_Namespaces.pdf) |
@zhichao-cao , yes it is a continuation of that work. Thanks for linking to the presentation, it's useful for understanding the big picture. |
What if a zone on the device turns read-only or goes offline? How would the io_zones state in memory be synced? |
@qihui81 , devices with zone active excursions (which may transition e.g. open zones to read-only) are not supported by Linux. The ZenFS code checks the state of a zone after it is reset through the report zones ioctl, so a zone going offline at that point is fine. |
Closing this RFC, as I've created a new pull request to pull in the code: #7626 |
This is a request for comments for a new file system for zoned block devices.
This pull request is based on top of another open pull request, #6878, which enables db_bench and db_stress to be used with custom file systems.
With this pull request I am mainly asking for comments on the high-level architecture and feedback on the following:
What applicable testing is available?
I have mainly run smoke tests using db_bench and db_stress up until now, and I am looking for ways to do e.g. recovery/crash/power-failure testing.
Would a completely self-contained file system be preferable?
Currently ZenFS stores log and lock files on the default file system. The reason for this is to avoid duplicating already-working code and to allow easy access to the log files.
What kind of workloads are most interesting to optimize for?
Looking forward to feedback as I finish up my laundry list of todos and optimizations.
Thanks!
Overview
ZenFS is a simple file system that utilizes RocksDB's FileSystem interface to place files into zones on a raw zoned block device. By separating files into zones and utilizing the write lifetime hints to co-locate data of similar lifetimes, the system write amplification is greatly reduced (compared to conventional block devices) while keeping the ZenFS capacity overhead at a very reasonable level.
ZenFS is designed to work with host-managed zoned spinning disks as well as NVMe SSDs with Zoned Namespaces.
Some of the ideas and concepts in ZenFS are based on earlier work done by Abutalib Aghayev and Marc Acosta.
Dependencies
ZenFS depends on libzbd and Linux kernel 5.4 or later to perform zone management operations.
Architecture overview
ZenFS implements the FileSystem API and stores all data files on a raw zoned block device. Log and lock files are stored on the default file system under a configurable directory. Zone management is done through libzbd, and ZenFS I/O is done through normal pread/pwrite calls.
Optimizing the IO path is on the TODO list.
Example usage
This example issues 100 million random inserts followed by as many overwrites on a 100G memory-backed zoned null block device. Target file sizes are set up to align with the zone size.
The graph below shows the capacity usage over time.
As ZenFS does not do any garbage collection the write amplification is 1.
File system implementation
Files are mapped into a set of extents:
* Extents are block-aligned, contiguous regions on the block device
* Extents do not span across zones
* A zone may contain more than one extent
* Extents from different files may share zones
Reclaim
ZenFS is exceptionally lazy in its current state of implementation and does not do any garbage collection whatsoever. As files get deleted, the used-capacity counter of each affected zone drops, and when it reaches zero the zone can be reset and reused.
Metadata
Metadata is stored in a rolling log in the first zones of the block device.
Each valid metadata zone contains:
* A superblock with the current sequence number and global filesystem metadata
* At least one snapshot of all files in the file system
The metadata format is currently experimental. More extensive testing is needed, and support for differential updates is planned, before bumping the version to 1.0.