Add TRIM support #8419
Conversation
Testing this version vs. the ntrim3 branch from @dweeezil, I am not able to run `zpool trim -p PoolName`:
[root@bl]# zpool trim -p rpoole66c72b7534b499f965c331515ca3020
When I switch back to ntrim3 it works again. Some debug info:
[root@bl-vsnap-100 ~]# zpool status -t
errors: No known data errors
1550458286 spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): LOADED
Posting my comments so far. I've not fully reviewed the new vdev_trim.c file. That should be happening soon.
@bgly thanks for taking the time to test this PR. Unfortunately, I wasn't able to reproduce your failure. Can you describe your configuration, in particular what kind of device you're testing on?
The exposed device is via loopback from LIO, so think of it as a virtual disk exposed as a block device. part1 is just /dev/sdd1, which in the dbgmsg is:
/dev/bdg/7035cc52-4a75-4ced-b392-ef6bfa63fb2b - This is the virtual storage device
You can also see the commands run to create the pool in the dbgmsg:
Btw, I just tried your latest push and I do not have the issue of zpool iostat printing spaces and overflowing my text file.
Are you using an iSCSI LUN or something similar? If so, I believe you have to make sure you enable TPU (thin provisioning UNMAP) via targetcli, if that is what you are using for your storage mappings.
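For reference, a minimal sketch of enabling that attribute with targetcli (the backstore name below is hypothetical and depends on how the LUN was created):
$ targetcli /backstores/block/vsnap_lun set attribute emulate_tpu=1
$ targetcli saveconfig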
Codecov Report
@@            Coverage Diff             @@
##           master    #8419      +/-   ##
==========================================
+ Coverage    78.6%   78.77%   +0.17%
==========================================
  Files         380      381       +1
  Lines      115903   116951    +1048
==========================================
+ Hits        91105    92130    +1025
- Misses      24798    24821      +23
Continue to review full report at Codecov.
Refreshed the PR to resolve conflicts and address a few issues observed while testing and by the CI.
I've suppressed these warnings when only the pool name was specified, since some devices may correctly not support discard.
Similar to other vdev-specific operations (initializing, resilvering, repairing), a short version of the status is printed after the vdev while the operation is in progress. For trim in particular, additional detail can be requested with the new -t option:
$ zpool status -t
pool: tank
state: ONLINE
scan: none requested
config:
        NAME              STATE     READ WRITE CKSUM
        tank              ONLINE       0     0     0
          /var/tmp/vdev1  ONLINE       0     0     0  (100% trimmed, completed at Wed 20 Feb 2019 06:12:21 PM PST)
          vdd             ONLINE       0     0     0  (trim unsupported)
        logs
          /var/tmp/log    ONLINE       0     0     0  (100% trimmed, completed at Wed 20 Feb 2019 06:12:21 PM PST)

Lastly, for clarity, autotrim and trim statistics are no longer reported separately in the extended iostats. For example:
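Illustrative invocations (hypothetical pool name "tank") of the extended iostat views where trim is now reported alongside the other I/O types:
$ zpool iostat -r tank    # request-size histograms, including trim
$ zpool iostat -w tank    # latency histograms, including trim
$ zpool iostat -l tank    # average latencies, including trim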
@bgly I wasn't able to look at your issue today, but your analysis is on the right track. The label will always exist, but for some reason with the LIO devices you're using the new code is failing to locate it. I hope to have some time to look at this tomorrow. If you could reproduce the issue with something other than an LIO device, that would be helpful.
I have found the issue. When I create a symbolic link to the device and re-import with `zpool import -d`, that is when I get the issue. `zpool trim` works in this case:
trim does not work in this case:
The reason we use a symbolic link is to be able to import without letting ZFS scan through bad devices, since we specify a specific path.
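For context, a sketch of the symlink-based import being described (directory, device, and pool names are hypothetical):
$ mkdir -p /dev/vsnap
$ ln -s /dev/sdd1 /dev/vsnap/part1
$ zpool import -d /dev/vsnap mypool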
It seems like I found another bug somewhere with trim. With ntrim3 everything seems to work fine, but with your latest branch it looks like I am again having issues.
I removed the symlink code when importing and I am now able to trim, but the status doesn't show progress, or it is not registering completion somewhere, since:
I am monitoring SCSI commands going through the block layer/SCSI layer/LIO/tcmu, so I can see the large number of UNMAP commands going through and completing, but the progress bar in `zpool status -t` does not move. I noticed a bunch of UNMAPs coming through, and when it finished it just did some small writes, so I would have expected the trim status to show 100% completed. I also want to add that this is with your latest changes.
I left it running and it actually completed roughly 28 minutes later. I didn't even delete anything, so there was nothing to trim. There must be something that is taking a long time to show completion?
If you compare the start time from my previous post and the completion time, it is ~28 minutes to basically trim no data. Also note that from the SCSI layer the UNMAP was almost instantaneous in terms of completion: I saw the UNMAP commands coming through and it took at most ~5 seconds to handle all of them.
@behlendorf I'm assuming it's a reporting issue? I do not see any further SCSI UNMAP commands being processed from the point where it starts to 30 minutes later when zpool status -t shows completion.
@bgly thanks. Nothing out of place there, it does seem like a reporting issue. You could watch that kstat while the trim runs. I've updated the PR with another commit to address a few additional issues, including the symlink one.
@behlendorf Thanks for fixing the symlink issue, that works for me now. I will work on monitoring the trim, but while I was testing some more I found another bug.
When I issued a full device trim, it looked like
This was good because I was consistently getting data in the SCSI layer doing UNMAP. As this was happening I realized that trimming was taking a while, so I suspended it and then canceled it. Afterwards I immediately issued a partial trim on the same device, thinking it would work a lot faster. This is where issues started to occur.
--- Additional note: it looks like it finally freed up or timed out; whatever the case, it is no longer hung.
If I were to guess, it seems like for a partial trim in which nothing was deleted, the UNMAP occurs so fast that I do not even get a chance to see anything from
I have refreshed the OSX PR openzfsonosx/zfs#661 to be based on this one, plus a bunch of commits I needed to make it merge more easily. Testing is a little hampered by the fact that we frequently use
It appears the low-level Linux TRIM takes only one (block, size) pair, so the vdev_disk/vdev_file call is no longer an array. Can I confirm it will stay a single entry each time? That would simplify my own code permanently.
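For illustration only, the same one-extent-per-call model is visible from user space with util-linux's blkdiscard (device and range here are hypothetical; this is not the in-kernel call vdev_disk.c uses):
$ blkdiscard --offset 1048576 --length 1048576 /dev/sdX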
@bgly am I correct in understanding that the issue you're currently seeing only occurs when using the partial trim option? It sounds as if a normal trim proceeded as expected. @lundman thanks for picking this up for OSX. Regarding the short names, this code was picked up from
Yes, the expectation is we'll keep it this way. @kpande thanks for letting me know, I'll run that down.
The issue with completion taking a while seems to be with partial trim. The issue with the infinite looping of the stat going from 0-100 I have only tried with full trim, suspend, cancel, then partial. That then causes the infinite loop to occur until an export and import is done.
@bgly I wasn't able to reproduce your issue on my test setup. I didn't see any kind of looping behavior, but I did fix two issues related to cancelling a TRIM. When a TRIM was cancelled (instead of completed) it would not restart at the beginning, nor would it clear a previously requested TRIM rate. Both issues have been fixed, and might explain some of the slowness you were seeing if you used the
I'd like to update the status of the issue uncovered by the
Investigation of the issue over the past several days has shown the problem appears to actually be with manual trim, and can occur any time a manual trim is performed while other allocations are occurring. I've been able to trivially reproduce the problem with specially created small pools and tuning set to reduce the number of metaslabs. I have pushed e196c2a to dweeezil:ntrim4 as a heavy-handed fix. This patch is highly unlikely to be the ultimate solution, but it does seem to make manual trim safe to use.
UNMAP/TRIM support is a frequently-requested feature to help prevent performance from degrading on SSDs and on various other SAN-like storage back-ends. By issuing UNMAP/TRIM commands for sectors which are no longer allocated the underlying device can often more efficiently manage itself.

This TRIM implementation is modeled on the `zpool initialize` feature which writes a pattern to all unallocated space in the pool. The new `zpool trim` command uses the same vdev_xlate() code to calculate what sectors are unallocated, the same per-vdev TRIM thread model and locking, and the same basic CLI for a consistent user experience. The core difference is that instead of writing a pattern it will issue UNMAP/TRIM commands for those extents.

The zio pipeline was updated to accommodate this by adding a new ZIO_TYPE_TRIM type and associated spa taskq. This new type makes it straightforward to add the platform specific TRIM/UNMAP calls to vdev_disk.c and vdev_file.c. These new ZIO_TYPE_TRIM zios are handled largely the same way as ZIO_TYPE_READs or ZIO_TYPE_WRITEs. This made it possible to largely avoid changing the pipeline; one exception is that TRIM zios may exceed the 16M block size limit since they contain no data.

In addition to the manual `zpool trim` command, a background automatic TRIM was added and is controlled by the 'autotrim' property. It relies on the exact same infrastructure as the manual TRIM. However, instead of relying on the extents in a metaslab's ms_allocatable range tree, a ms_trim tree is kept per metaslab. When 'autotrim=on', ranges added back to the ms_allocatable tree are also added to the ms_trim tree. The ms_trim tree is then periodically consumed by an autotrim thread which systematically walks a top level vdev's metaslabs.

Since the automatic TRIM will skip ranges it considers too small there is value in occasionally running a full `zpool trim`. This may occur when the freed blocks are small and not enough time was allowed to aggregate them. An automatic TRIM and a manual `zpool trim` may be run concurrently, in which case the automatic TRIM will yield to the manual TRIM.

Contributions-by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Contributions-by: Tim Chase <tim@chase2k.com>
Contributions-by: Chunwei Chen <tuxoko@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
* Updated comments.
* Converted ZIO_PRIORITY_TRIM to ASSERTs in vdev_queue_io()

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
* Added Big Theory section which discusses the high level details of both manual and automatic TRIM.
* Updated partial TRIM description in zpool(8) using Richard Laager's proposed language.
* Documented the hot spare / replacing limitation.
* Brought zio_vdev_io_assess() back inline with the existing code and fixed the 'persistent bit' portion of the comment.
* Fixed typos in comments.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
During review it was observed that the current semantics of a partial TRIM are not well-defined. If we make the current partial trim a first-class feature, folks will come to rely on the current behavior, despite any disclaimers in the manpage. This creates risk for future changes, which must now maintain the behavior of trying not to touch new metaslabs until absolutely necessary, lest partial trims become less useful.

For this reason, until the work is done to fully design a well-defined partial TRIM feature, it has been removed from the `zpool trim` command. That said, the existing partial TRIM behavior is still highly useful for very large thinly-provisioned pools. Therefore, it has been converted to a new 'zfs_trim_metaslab_skip' module option, which is the appropriate place for this kind of setting.

All of the existing partial TRIM behavior has been retained. To issue a partial TRIM, set 'zfs_trim_metaslab_skip=1' then initiate a manual TRIM. After this point the module option may be changed without any effect on the running TRIM. The setting is stored on-disk and will persist for the duration of the requested TRIM.

$ echo 1 >/sys/module/zfs/parameters/zfs_trim_metaslab_skip
$ zpool trim partial-pool

This means that a full TRIM can be requested for a different pool while the previous partial TRIM is still running.

$ echo 0 >/sys/module/zfs/parameters/zfs_trim_metaslab_skip
$ zpool trim full-pool

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
The issued_trim flag must always be set to B_TRUE before calling vdev_trim_ranges(), since this function may be interrupted, in which case there can be outstanding TRIM I/Os which must complete before allowing new allocations.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
@rugubara thanks for putting together that history. I was able to confirm that the versions you listed should never have suffered from the defect I referenced. While re-reviewing the code for potential issues I did find and fix a potential problem, see cd265fd, but given the scenario you described in #8419 (comment) above I don't believe you could have hit it. I'll go over everything again, but at the moment I don't have an explanation for how the pool could have been damaged.
@rugubara I might have missed it, but which SSD brand/models were used when the corruption happened?
These are a SATA Toshiba 60GB (ZIL) and a SATA OCZ Vertex 120GB (L2ARC). My understanding is that the ZIL and L2ARC are protected by checksums as well - i.e. if I have an abnormal restart and the ZIL is corrupted, the import will stop, complaining about checksum errors in the ZIL. I can't imagine how a failed ZIL could corrupt the pool.
Observed while running ztest. The SCL_CONFIG read lock should not be held when calling metaslab_disable() because the following deadlock is possible. This was introduced by allowing the metaslab_enable() function to wait for a txg sync.

Thread 1191 (Thread 0x7ffa908c1700 (LWP 16776)):
  cv_wait
  spa_config_enter -> Waiting for SCL_CONFIG read lock, blocked due to pending writer (994)
  spa_txg_history_init_io
  txg_sync_thread

Thread 994 (Thread 0x7ffa903a4700 (LWP 16791)):
  cv_wait
  spa_config_enter -> Waiting for SCL_ALL write lock, blocked due to holder (983)
  spa_vdev_exit
  spa_vdev_add
  ztest_vdev_aux_add_remove -> Holding ztest_vdev_lock
  ztest_execute

Thread 1001 (Thread 0x7ffa90902700 (LWP 18020)):
  cv_wait
  txg_wait_synced -> Waiting on txg sync to decrement mg_ms_disabled (1191)
  metaslab_enable
  vdev_autotrim_thread

Thread 983 (Thread 0x7ffa63451700 (LWP 18019)):
  cv_wait
  metaslab_group_disabled_increment -> Waiting for mg_ms_disabled to be less than max_disabled_ms (1001)
  metaslab_disable
  vdev_initialize_thread -> *** Holding SCL_CONFIG read lock ***

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Colleagues, should I keep the damaged pool for investigation? I don't have enough space to copy the data, and I would like to rebuild the system if there is no hope of recovering it. Are you interested in poking at the damaged pool for details?
Always wait for the next txg after checking a metaslab group. When no TRIM commands were issued, don't force a quiesce, so idle pools continue to generate no-op txgs.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
@rugubara unfortunately I don't think the damaged files are recoverable; the scrub would have repaired them if it were possible. However, before you do rebuild the pool, could you spot check a few of the damaged files and check when they were last updated (atime/mtime)? That may provide some insight into whether this is somehow related to replaying the ZIL after an abnormal shutdown.
Do I understand correctly that the primary RAIDZ2 pool storage is using HDDs, and SSDs are only used for the ZIL and L2ARC devices? This would mean that TRIM commands would only be sent to the ZIL device and never the primary pool. Could you please run
That's right. The ZIL is checksummed and it won't be replayed if it's corrupted. I don't see how it could have caused this issue either.
The list of damaged files starts with a very long list of metadata. I think that is the reason why I can't mount the majority of my datasets. Three are directories and one is a file.
zpool status reports that the main pool devices don't support trim, and that the log and cache devices support it.
@rugubara thank you, I wish I had an explanation for you. But I don't see any way the TRIM code could have caused this. Nor was I able to reproduce any problem locally by repeatedly power cycling a trimming pool which was making heavy use of a dedicated log device. Additionally,
UNMAP/TRIM support is a frequently-requested feature to help prevent performance from degrading on SSDs and on various other SAN-like storage back-ends. By issuing UNMAP/TRIM commands for sectors which are no longer allocated the underlying device can often more efficiently manage itself.

This TRIM implementation is modeled on the `zpool initialize` feature which writes a pattern to all unallocated space in the pool. The new `zpool trim` command uses the same vdev_xlate() code to calculate what sectors are unallocated, the same per-vdev TRIM thread model and locking, and the same basic CLI for a consistent user experience. The core difference is that instead of writing a pattern it will issue UNMAP/TRIM commands for those extents.

The zio pipeline was updated to accommodate this by adding a new ZIO_TYPE_TRIM type and associated spa taskq. This new type makes it straightforward to add the platform specific TRIM/UNMAP calls to vdev_disk.c and vdev_file.c. These new ZIO_TYPE_TRIM zios are handled largely the same way as ZIO_TYPE_READs or ZIO_TYPE_WRITEs. This makes it possible to largely avoid changing the pipeline; one exception is that TRIM zios may exceed the 16M block size limit since they contain no data.

In addition to the manual `zpool trim` command, a background automatic TRIM was added and is controlled by the 'autotrim' property. It relies on the exact same infrastructure as the manual TRIM. However, instead of relying on the extents in a metaslab's ms_allocatable range tree, a ms_trim tree is kept per metaslab. When 'autotrim=on', ranges added back to the ms_allocatable tree are also added to the ms_trim tree. The ms_trim tree is then periodically consumed by an autotrim thread which systematically walks a top level vdev's metaslabs.

Since the automatic TRIM will skip ranges it considers too small there is value in occasionally running a full `zpool trim`. This may occur when the freed blocks are small and not enough time was allowed to aggregate them. An automatic TRIM and a manual `zpool trim` may be run concurrently, in which case the automatic TRIM will yield to the manual TRIM.

Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Reviewed-by: Tim Chase <tim@chase2k.com>
Reviewed-by: Matt Ahrens <mahrens@delphix.com>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Reviewed-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Contributions-by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Contributions-by: Tim Chase <tim@chase2k.com>
Contributions-by: Chunwei Chen <tuxoko@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes openzfs#8419
Closes openzfs#598
"Be aware that automatic trimming of recently freed data blocks can put significant stress on the underlying storage devices." Link Does this still apply? Since turning on autotrim=on is the default from now on in some Ubuntu packages and components. |
That statement applies regardless of the filesystem you're using, though batching helps somewhat, and the severity of the impact may vary by storage device, @TB-G. Manual trim on a schedule is often the optimal approach.
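A minimal sketch of that scheduled approach (the pool name, schedule, and zpool path are hypothetical; a systemd timer works just as well):
# /etc/cron.d/zpool-trim: full manual TRIM every Sunday at 03:00
0 3 * * 0  root  /sbin/zpool trim tank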
I'm curious about the part "put significant stress on the underlying storage devices". Of course autotrim costs some performance for a few seconds, but is this worse than with other (file)systems? This is called out separately in the ZFS documentation, so I wonder why they (Ubuntu) turn on or recommend autotrim=on (sometimes in combination with zpool trim) on desktop and server OS. Does it have more advantages than disadvantages? Are we talking about NVMe vs. SSD? And why (and when) does it cause significant stress on the underlying storage devices?
This is a hardware issue, not a filesystem issue, @TB-G. I'd recommend doing some research.
I'm not talking about hardware or filesystems. I want to know why this is explicitly stated in the documentation, how the ZFS implementation handles this, and why Ubuntu turns this on by default. See my other questions.
You need to be talking about hardware; that's what this is about. It's explicitly stated in the documentation because it can be important enough that users should be aware of it. You will need to ask the Canonical maintainer for their rationale in enabling it by default there; this is not the correct place for that.
I will certainly ask. And I understand that the underlying storage matters, but that is not my question. I want to know how ZFS handles this even though it depends on the underlying hardware. So my conclusion is that autotrim can cause higher iowait in large SSD/NVMe pools and that this still applies to ZFS pools. In that case it is better for me to turn off autotrim and run zpool trim periodically. I just wanted to make this clear.
Motivation and Context
UNMAP/TRIM support is a frequently-requested feature to help prevent performance from degrading on SSDs and on various other SAN-like storage back-ends. By issuing UNMAP/TRIM commands for sectors which are no longer allocated the underlying device can often more efficiently manage itself.
Description
This TRIM implementation is modeled on the `zpool initialize` feature which writes a pattern to all unallocated space in the pool. The new `zpool trim` command uses the same `vdev_xlate()` code to calculate what sectors are unallocated, the same per-vdev TRIM thread model and locking, and the same basic CLI for a consistent user experience. The core difference is that instead of writing a pattern it will issue UNMAP/TRIM commands for those extents.

The zio pipeline was updated to accommodate this by adding a new `ZIO_TYPE_TRIM` type and associated spa taskq. This new type makes it straightforward to add the platform specific TRIM/UNMAP calls to vdev_disk.c and vdev_file.c. These new `ZIO_TYPE_TRIM` zios are handled largely the same way as `ZIO_TYPE_READ`s or `ZIO_TYPE_WRITE`s. This made it possible to largely avoid changing the pipeline; one exception is that TRIM zios may exceed the 16M block size limit since they contain no data.

In addition to the manual `zpool trim` command, a background automatic TRIM was added and is controlled by the 'autotrim' property. It relies on the exact same infrastructure as the manual TRIM. However, instead of relying on the extents in a metaslab's `ms_allocatable` range tree, a `ms_trim` tree is kept per metaslab. When 'autotrim=on', ranges added back to the `ms_allocatable` tree are also added to the `ms_trim` tree. The `ms_trim` tree is then periodically consumed by an autotrim thread which systematically walks a top level vdev's metaslabs.

Since the automatic TRIM will skip ranges it considers too small there is still value in occasionally running a full `zpool trim` when `autotrim=on`. This may be useful when the freed blocks are small and not enough time was allowed to aggregate them. An automatic TRIM and a manual `zpool trim` may be run concurrently, in which case the automatic TRIM will yield to the manual TRIM.

Added commands and example output.
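For reference, an illustrative summary of the user-facing commands this PR adds or extends (the pool name is hypothetical; see the updated zpool(8) in this branch for the authoritative flags):
$ zpool trim tank                # start a manual TRIM of all eligible devices in the pool
$ zpool trim -s tank             # suspend an in-progress manual TRIM
$ zpool trim -c tank             # cancel an in-progress manual TRIM
$ zpool set autotrim=on tank     # enable background automatic TRIM
$ zpool status -t tank           # show per-vdev TRIM status and progress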
# TRIM request size histogram (right two columns)
$ zpool iostat -r testpool

            sync_read    sync_write   async_read   async_write    scrub         trim
req_size      ind    agg    ind    agg    ind    agg    ind    agg    ind    agg    ind    agg
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
512             0      0      0      0      0      0      0      0      0      0      0      0
1K              0      0      0      0      0      0      0      0      0      0      0      0
2K              0      0      0      0      0      0      0      0      0      0      0      0
4K          14.3M      0  1.86M      0   821K      0  48.3M      0  1.17M      0      0      0
8K          22.6K  17.4K  17.5K      4     42  67.1K  3.63M  20.1M   177K  59.0K      0      0
16K         11.8K  17.3K  56.2K    890  3.23K  62.4K  2.52M  13.8M  61.8K  80.0K      0      0
32K         47.7K  8.82K  29.8K  8.20K  11.7K  65.1K  3.38M  10.2M  26.1K   361K  1.77M      0
64K            36   1005   107K  12.3K      0  17.3K  3.12M  6.85M  10.1K   472K  1.20M      0
128K            0  1.66K   888K  43.2K      0  1.31K  13.5M  5.19M  47.9K   505K   946K      0
256K            0  1.78K      0   363K      0     60      0  7.52M      0   290K   542K      0
512K            0      0      0  1.58M      0      1      0  7.84M      0   367K   354K      0
1M              0      0      0  16.9M      0      0      0  13.2M      0  1.62M   231K      0
2M              0      0      0      0      0      0      0      0      0      0   177K      0
4M              0      0      0      0      0      0      0      0      0      0   130K      0
8M              0      0      0      0      0      0      0      0      0      0  89.9K      0
16M             0      0      0      0      0      0      0      0      0      0   193K      0
----------------------------------------------------------------------------------------------
How Has This Been Tested?
The `ztest` command has been updated to periodically initiate a `zpool trim`, and the `autotrim` property will be randomly toggled on/off. This has been proven to provide excellent coverage for the TRIM functionality under a wide range of pool configurations and to expose otherwise hard to hit race conditions. In particular, `ztest` does an excellent job verifying that a new feature interacts correctly with the existing features.

25 new test cases were added to the ZFS Test Suite to provide additional test coverage. These include slightly modified versions of all the `zpool initialize` test cases, which verify that the `zpool trim` CLI behaves as intended (in a similar way to initialize). New `zpool trim` tests were added to test `-p` partial TRIM, `-r` rate limited TRIM, and that multiple `zpool trim` commands can be issued. Additionally, stress tests were added to verify that `zpool trim` and `zpool set autotrim=on` only discard unallocated space when under a heavy alloc/free workload.

Performance testing was done using both mirrored and raidz configurations in order to assess the overall impact on applications while the `autotrim=on` property is set. The test case consists of:

Create a pool. Both a mirror and raidz2 pool were tested; three 800GB Seagate ST800FM0173 devices were used for each configuration. This has the advantage that total usable pool capacity was approximately the same for both configurations (800G).

Warm up - Repeat the following commands for 2000 iterations (sketched below). For each iteration, N is randomly selected between 1 and 200 and a copy of the Linux kernel git tree is removed and then copied back from tmpfs (~3.4GB). This was done to both fill and fragment the unallocated pool space until a steady-state was reached. The value 200 was selected for N in order to keep the pool at ~76% capacity.

rm -r /testpool/fs/linux-$N
time (cp -a /tmp/linux /testpool/fs/linux-$N; sync)
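The warm-up loop described above corresponds roughly to the following sketch (a minimal illustration; the exact script used for the published numbers is not included in this PR):
for i in $(seq 1 2000); do
    N=$(( RANDOM % 200 + 1 ))                               # pick a random tree between 1 and 200
    rm -rf /testpool/fs/linux-$N                            # remove it if it exists
    time ( cp -a /tmp/linux /testpool/fs/linux-$N; sync )   # copy it back from tmpfs
done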
Manual TRIM - Issue a manual `zpool trim`, then repeat the commands from the previous step for 250 iterations. This was done to both verify that a manual TRIM restores the initial performance and to determine approximately how long it takes to return to a steady-state. The manual TRIM phase was repeated twice.

Automatic TRIM - Set the `autotrim=on` property, then repeat the commands from the previous step for 1000 iterations. As blocks are freed and added back in to the unallocated pool space those extents are asynchronously trimmed. What we want to verify is that the asynchronous TRIM does not negatively impact overall pool performance, that after enough iterations performance returns to close to its initial levels, and finally that performance is maintained at that level.

The following graph shows pool performance for all of the phases described above. On the Y-axis the wall clock time in seconds is plotted for each `cp -a; sync` command. On the X-axis is each iteration, from 1 to 3500 (2000 warm up, 250 + 250 manual TRIM, 1000 automatic TRIM). Multiple test runs were performed for each pool configuration and the results of these runs were then averaged together and graphed. To further reduce the noise a moving average was then plotted using the previously averaged data.

Configuration:
[edit] Updated motivation and description to reflect current PR.
[edit] Removed work in progress warning.
Types of changes
Checklist:
Signed-off-by.