Allocate using vector of rotors. Proof-of-concept, not for production.
In a pool that consists of e.g. a small but fast SSD-based mirror and a large but long-latency HDD-based RAIDZn, it is useful to have the metadata, as well as very small files, stored on the SSD. This patch handles that by selecting the storage based on the size of the allocation. This is done using a vector of rotors, each of which is associated with the metaslab groups of one kind of storage. If the preferred group is full, attempts are made to fill slower groups. Better groups are not attempted; the rationale is that an almost-full filesystem shall not spill large-size data into the expensive SSD vdev, since that space would not be reclaimable without deleting the files. Better then to consider the filesystem full when the large-size storage is full. One could also imagine e.g. a 3-level storage: mirror SSD for really small records, mirror HDD for medium-size records, and RAIDZn HDD for the bulk of the data.

Some performance numbers: Tested on three separate pools, each consisting of a 20 GB SSD partition and a 100 GB HDD partition, from the same disks. (The HDD is 2 TB in total.) SSD raw reads: 350 MB/s, HDD raw reads: 132 MB/s. The filesystems were filled to ~60 % with a random directory tree, each directory with a random 0-6 subdirectories and 0-100 files, maximum depth 8. The file size was random, 0-400 kB. The fill script was run with 10 instances in parallel, aborted at approximately the same size. The performance variations below are much larger than the filesystem fill differences.

The patch does not handle the 'inactive' case very well (it begins by filling nonrotating storage). Setting 0 is effectively the original 7a27ad0 commit. Settings 8000 and 16000 are values for zfs_mixed_slowsize_threshold, i.e. the size below which data is stored using rotor[0] (nonrotating SSD) instead of rotor[1] (rotating HDD).
                Setting 8000   Setting 16000   Setting 0
                ------------   -------------   ------------
Total # files   305666         304439          308962
Total size      75334 kB       75098 kB        75231 kB

As per 'zfs iostat -v':
Total alloc     71.8 G         71.6 G          71.7 G
SSD alloc       3.34 G         3.41 G          3.71 G
HDD alloc       68.5 G         68.2 G          68.0 G

Time for 'find' and 'zpool scrub' after a fresh 'zpool import':
find            5.6 s          5.5 s           42 s
scrub           560 s          560 s           1510 s

Time for serial 'find | xargs -P 1 md5sum' and parallel
'find | xargs -P 4 -n 10 md5sum' (only the first 10000 files each):
-P 1 md5sum     129 s          122 s           168 s
-P 4 md5sum     182 s          150 s           187 s
(size summed)   2443 MB        2499 MB         2423 MB

Conflicts:
	include/sys/metaslab_impl.h