Commits on Nov 21, 2019

  1. btrfs: make smaller extents more likely to go into bitmaps

    It's less than ideal for small extents to eat into our extent budget, so
    force extents <= 32KB into the bitmaps save for the first handful.
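
    A minimal sketch of the heuristic, assuming illustrative names and an
    arbitrary allowance for the "first handful" (neither is taken verbatim
    from the patch):

      /* Route small extents into bitmaps once an initial allowance of
       * extent entries has been used up. */
      #define SMALL_EXTENT_SIZE       (32 * 1024)  /* 32KB cutoff */
      #define SMALL_EXTENT_ALLOWANCE  10           /* hypothetical handful */

      static bool should_use_bitmap(u64 bytes, unsigned int nr_small_extents)
      {
              return bytes <= SMALL_EXTENT_SIZE &&
                     nr_small_extents >= SMALL_EXTENT_ALLOWANCE;
      }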
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  2. btrfs: increase the metadata allowance for the free_space_cache

    Currently, there is no way for the free space cache to recover from
    being serviced by purely bitmaps because the extent threshold is set to
    0 in recalculate_thresholds() when we surpass the metadata allowance.
    
    This adds a recovery mechanism by keeping large extents out of the
    bitmaps and increases the metadata upper bound to 64KB. The recovery
    mechanism bypasses this upper bound, thus making it a soft upper bound.
    But, with the bypass being 1MB or greater, it shouldn't add unbounded
    overhead.
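
    A rough sketch of how the soft bound works, with assumed constant and
    helper names (the real recalculate_thresholds() logic has more moving
    parts):

      #define FORCE_EXTENT_THRESHOLD  (1024 * 1024)  /* 1MB bypass */

      /* Extents at or above the bypass size always stay out of bitmaps,
       * even past the 64KB metadata allowance, so a cache serviced purely
       * by bitmaps can grow extents again and recover. */
      static bool use_bitmap(u64 bytes, bool over_metadata_allowance)
      {
              if (bytes >= FORCE_EXTENT_THRESHOLD)
                      return false;
              return over_metadata_allowance;
      }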
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  3. btrfs: add async discard header

    Give a brief overview for how async discard is implemented.
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  4. btrfs: keep track of discard reuse stats

    Keep track of how much we are discarding and how often we are reusing
    with async discard.
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  5. btrfs: only keep track of data extents for async discard

    As mentioned earlier, discarding data can be done either by issuing an
    explicit discard or implicitly by reusing the LBA. Metadata chunks see
    much more frequent reuse simply by virtue of being metadata. So instead
    of explicitly discarding metadata blocks, just leave them be and let the
    latter, implicit form of discarding handle them.
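
    A sketch of the filter this implies; the helper name is made up, though
    BTRFS_BLOCK_GROUP_DATA is the real flag:

      /* Only data block groups feed the async discard machinery; metadata
       * block groups are left to implicit discard via LBA reuse. */
      static bool bg_wants_async_discard(struct btrfs_block_group *block_group)
      {
              return block_group->flags & BTRFS_BLOCK_GROUP_DATA;
      }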
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  6. btrfs: have multiple discard lists

    Non-block-group-destruction discarding currently has only a single list
    with no minimum discard length. This can lead to caravaning more
    meaningful discards behind a heavily fragmented block group.

    This adds support for multiple lists with minimum discard lengths to
    prevent the caravan effect. We promote block groups back up when they
    exceed the BTRFS_ASYNC_DISCARD_MAX_FILTER size. Currently, we support
    only two lists, with filters of 1MB and 32KB respectively.
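
    A sketch of the list selection, assuming a made-up 32KB counterpart to
    the named constant (the list indexing here is purely illustrative):

      #define BTRFS_ASYNC_DISCARD_MAX_FILTER  (1024 * 1024)  /* 1MB */
      #define BTRFS_ASYNC_DISCARD_MIN_FILTER  (32 * 1024)    /* 32KB */

      /* Pick a discard list by the smallest extent worth issuing, so big,
       * meaningful discards never queue behind fragmented block groups. */
      static int discard_list_index(u64 smallest_discardable_bytes)
      {
              if (smallest_discardable_bytes >= BTRFS_ASYNC_DISCARD_MAX_FILTER)
                      return 0;   /* list with a 1MB minimum */
              return 1;           /* list with a 32KB minimum */
      }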
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  7. btrfs: make max async discard size tunable

    Expose max_discard_size as a tunable via sysfs.
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  8. btrfs: limit max discard size for async discard

    Throttle the maximum size of a discard so that we can provide an upper
    bound for the rate of async discard. While the block layer is able to
    split discards into appropriately sized discards, we want to account
    more accurately for the rate at which we are consuming NCQ slots, as
    well as limit the upper bound of work for a single discard.
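
    A sketch of the clamp, with an assumed default value (the sysfs tunable
    from the entry above would override it):

      #define BTRFS_ASYNC_DISCARD_DEFAULT_MAX_SIZE  (64 * 1024 * 1024)

      /* Issue at most max_discard_size per discard so NCQ slot usage and
       * per-discard latency stay bounded; the rest of the extent stays
       * queued for a later pass. */
      static u64 clamp_discard_size(u64 bytes, u64 max_discard_size)
      {
              return min(bytes, max_discard_size);
      }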
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  9. btrfs: add bps discard rate limit

    Provide an ability to rate limit based on mbps in addition to the iops
    delay calculated from number of discardable extents.
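
    A sketch of combining the two limits; the function and parameter names
    are illustrative, while div64_u64() and MSEC_PER_SEC are the usual
    kernel helpers:

      /* Take the longer of the iops-derived delay and the delay implied
       * by the configured bytes-per-second limit. */
      static u64 discard_delay_ms(u64 iops_delay_ms, u64 bytes, u64 bps_limit)
      {
              u64 bps_delay_ms = 0;

              if (bps_limit)
                      bps_delay_ms = div64_u64(bytes * MSEC_PER_SEC, bps_limit);

              return max(iops_delay_ms, bps_delay_ms);
      }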
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  10. btrfs: calculate discard delay based on number of extents

    Use the number of discardable extents to help guide our discard delay
    interval. This value is reevaluated every transaction commit.
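
    A sketch of the idea, with assumed constants for the target window and
    the clamp bounds:

      #define BTRFS_DISCARD_TARGET_MSEC  (6ULL * 60 * 60 * 1000)  /* window */
      #define BTRFS_DISCARD_MIN_DELAY    1ULL                     /* ms */
      #define BTRFS_DISCARD_MAX_DELAY    1000ULL                  /* ms */

      /* Spread the discardable extents evenly over the target window;
       * recomputed at each transaction commit. */
      static u64 discard_calc_delay(u64 nr_extents)
      {
              u64 delay = BTRFS_DISCARD_MAX_DELAY;

              if (nr_extents)
                      delay = div64_u64(BTRFS_DISCARD_TARGET_MSEC, nr_extents);

              return clamp_t(u64, delay, BTRFS_DISCARD_MIN_DELAY,
                             BTRFS_DISCARD_MAX_DELAY);
      }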
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  11. btrfs: keep track of discardable_bytes

    Keep track of this metric so that we can understand how far ahead of or
    behind the discarding rate we are.
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  12. btrfs: track discardable extents for async discard

    The number of discardable extents will serve as the rate limiting metric
    for how often we should discard. This keeps track of discardable extents
    in the free space caches by maintaining deltas and propagating them to
    the global count.
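
    A sketch of the delta propagation, with assumed field names on the free
    space ctl (the real patch threads this through the existing locking):

      /* Fold the locally accumulated delta into the fs-wide count, so hot
       * per-block-group updates don't hammer a global counter. */
      static void propagate_discardable_extents(struct btrfs_free_space_ctl *ctl,
                                                atomic_t *fs_wide_count)
      {
              s32 delta = ctl->discardable_extents_delta;  /* hypothetical */

              if (delta) {
                      atomic_add(delta, fs_wide_count);
                      ctl->discardable_extents_delta = 0;
              }
      }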
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  13. btrfs: add discard sysfs directory

    Set up the sysfs directory for discard stats + tunables.
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  14. btrfs: make UUID/debug have its own kobject

    Btrfs only allowed attributes to be exposed in debug/. Let other groups
    be created by making debug its own kobject.

    This also makes the per-fs debug options separate from the global
    features mount attributes. This seems to be needed as
    sysfs_create_files() requires const struct attribute * while
    sysfs_create_group() can take struct attribute *. This also seems nicer
    as, per filesystem, you'll probably use to_fs_info().
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  15. btrfs: add removal calls for sysfs debug/

    We probably should call sysfs_remove_group() on debug/.
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  16. btrfs: discard one region at a time in async discard

    The prior two patches added discarding via a background workqueue. This
    just piggybacked off of the fstrim code to trim the whole block group at
    once. Inevitably, this is worse performance-wise and will aggressively
    overtrim. But it was nice to plumb the other infrastructure to keep the
    patches easier to review.

    This adds the real goal of this series, which is discarding slowly (i.e.
    a slow, long-running fstrim). The discarding is split into two phases,
    extents and then bitmaps. The reason for this is twofold. First, the
    bitmap regions overlap the extent regions. Second, discarding the
    extents first gives the newly trimmed bitmaps the highest chance of
    coalescing when being re-added to the free space cache.
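
    A sketch of the two-phase walk; the enum values mirror the description,
    while the helper is a placeholder:

      enum btrfs_discard_phase {
              BTRFS_DISCARD_EXTENTS,   /* phase 1: plain free space extents */
              BTRFS_DISCARD_BITMAPS,   /* phase 2: bitmap-backed regions */
      };

      /* Each work item trims one region, then either stays in its phase or
       * advances, so one pass through a block group is extents first and
       * bitmaps second. */
      static enum btrfs_discard_phase next_phase(enum btrfs_discard_phase cur,
                                                 bool extents_exhausted)
      {
              if (cur == BTRFS_DISCARD_EXTENTS && extents_exhausted)
                      return BTRFS_DISCARD_BITMAPS;
              return cur;
      }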
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  17. btrfs: handle empty block_group removal

    block_group removal is a little tricky. It can race with the extent
    allocator, the cleaner thread, and balancing. The current path is for a
    block_group to be added to the unused_bgs list. Then, when the cleaner
    thread comes around, it starts a transaction and then proceeds with
    removing the block_group. Extents that are pinned are subsequently
    removed from the pinned trees and then eventually a discard is issued
    for the entire block_group.
    
    Async discard introduces another player into the game, the discard
    workqueue. While it has none of the racing issues, the new problem is
    ensuring we don't leave free space untrimmed prior to forgetting the
    block_group.  This is handled by placing fully free block_groups on a
    separate discard queue. This is necessary to maintain discarding order,
    as in the future we will slowly trim even fully free block_groups. The
    ordering helps us make progress on the same block_group rather than,
    say, the most recently freed block_group, or needing to search through
    the fully freed block groups at the beginning of a list and insert
    after them.

    The new order of events is that a fully freed block group gets placed
    on the unused discard queue first. Once it's processed, it is placed on
    the unused_bgs list and then the original sequence of events happens,
    just without the final whole-block_group discard.
    
    The mount flags can change when processing unused_bgs, so when flipping
    from DISCARD to DISCARD_ASYNC, the unused_bgs must be punted to the
    discard_list to be trimmed. If we flip off DISCARD_ASYNC, we punt
    free block groups on the discard_list to the unused_bg queue which will
    do the final discard for us.
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  18. btrfs: add the beginning of async discard, discard workqueue

    When discard is enabled, every time a pinned extent is released back to
    the block_group's free space cache, a discard is issued for the extent.
    This is an overeager approach when it comes to discarding and to helping
    the SSD maintain enough free space to prevent severe garbage collection
    situations.
    
    This adds the beginning of async discard. Instead of issuing a discard
    prior to returning an extent to the free space cache, the extent is just
    marked as untrimmed. The block_group is then added to an LRU which feeds
    into a workqueue that issues discards at a much slower rate. Full
    discarding of unused block groups is still done and will be addressed in
    a future patch in this series.
    
    For now, we don't persist the discard state of extents and bitmaps.
    Therefore, our failure recovery mode will be to consider extents
    untrimmed. This lets us handle failure and unmounting as one and the
    same.
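
    A sketch of the new unpin path under async discard, with assumed field
    names on the discard ctl (locking elided):

      /* Instead of discarding inline, mark the returned space untrimmed
       * and nudge the block group onto the LRU for the workqueue. */
      static void queue_async_discard(struct btrfs_discard_ctl *discard_ctl,
                                      struct btrfs_block_group *block_group)
      {
              list_move_tail(&block_group->discard_list,
                             &discard_ctl->discard_list);
              queue_delayed_work(discard_ctl->discard_workers,
                                 &discard_ctl->work, discard_ctl->delay);
      }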
    
    On a number of Facebook webservers, I collected data every minute
    accounting the time we spent in btrfs_finish_extent_commit() (col. 1)
    and in btrfs_commit_transaction() (col. 2). btrfs_finish_extent_commit()
    is where we discard extents synchronously before returning them to the
    free space cache.
    
    discard=sync:
                     p99 total per minute       p99 total per minute
          Drive   |   extent_commit() (ms)  |    commit_trans() (ms)
        ---------------------------------------------------------------
         Drive A  |           434           |          1170
         Drive B  |           880           |          2330
         Drive C  |          2943           |          3920
         Drive D  |          4763           |          5701
    
    discard=async:
                     p99 total per minute       p99 total per minute
          Drive   |   extent_commit() (ms)  |    commit_trans() (ms)
        --------------------------------------------------------------
         Drive A  |           134           |           956
         Drive B  |            64           |          1972
         Drive C  |            59           |          1032
         Drive D  |            62           |          1200
    
    While it's not great that the stats are cumulative over 1m, all of these
    servers are running the same workload and the delta between the two is
    substantial. We are spending significantly less time in
    btrfs_finish_extent_commit(), which is responsible for discarding.
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  19. btrfs: keep track of cleanliness of the bitmap

    There is a cap in btrfs on the number of free extents that a block group
    can have. When it surpasses that threshold, future extents are placed
    into bitmaps. Instead of keeping track of whether a certain bit is
    trimmed or not in a second bitmap, keep track of the relative state of
    the bitmap.

    With async discard, trimming bitmaps becomes a more frequent operation.
    As a trade-off for simplicity, we keep track of whether discarding a
    bitmap is in progress. If we fully scan a bitmap and trim as necessary,
    the bitmap is marked clean. This has some caveats, as the min block size
    may skip over regions deemed too small. But this should be a reasonable
    trade-off rather than keeping a second bitmap and making allocation
    paths more complex. The downside is we may overtrim, but ideally the min
    block size should prevent us from doing that too often and from getting
    stuck trimming pathological cases.
    
    BTRFS_TRIM_STATE_TRIMMING is added to indicate a bitmap is in the
    process of being trimmed. If additional free space is added to that
    bitmap, the bit is cleared. A bitmap will be marked
    BTRFS_TRIM_STATE_TRIMMED if the trimming code was able to reach the end
    of it and the former is still set.
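
    The states and transitions, sketched (the enum values come from the
    description; the helper names are illustrative):

      enum btrfs_trim_state {
              BTRFS_TRIM_STATE_UNTRIMMED,
              BTRFS_TRIM_STATE_TRIMMED,
              BTRFS_TRIM_STATE_TRIMMING,
      };

      /* Free space added mid-trim invalidates the pass. */
      static void on_bitmap_add(struct btrfs_free_space *bitmap)
      {
              if (bitmap->trim_state == BTRFS_TRIM_STATE_TRIMMING)
                      bitmap->trim_state = BTRFS_TRIM_STATE_UNTRIMMED;
      }

      /* Only an uninterrupted full scan ends in TRIMMED. */
      static void on_trim_complete(struct btrfs_free_space *bitmap)
      {
              if (bitmap->trim_state == BTRFS_TRIM_STATE_TRIMMING)
                      bitmap->trim_state = BTRFS_TRIM_STATE_TRIMMED;
      }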
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  20. btrfs: keep track of which extents have been discarded

    Async discard will use the free space cache as backing knowledge for
    which extents to discard. This patch plumbs knowledge about which
    extents need to be discarded into the free space cache from
    unpin_extent_range().
    
    An untrimmed extent can merge with everything as this is a new region.
    Absorbing trimmed extents is a tradeoff for greater coalescing, which
    makes life better for find_free_extent(). Additionally, it seems the
    size of a trim isn't as problematic as the trim IO itself.
    
    When reading in the free space cache from disk, if sync is set, mark all
    extents as trimmed. The current code ensures at transaction commit that
    all free space is trimmed when sync is set, so this reflects that.
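
    A sketch of the merge rule this describes, reusing the trim state enum
    sketched for the bitmap patch above (the helper name is assumed):

      /* A merged region must be re-discarded if either side still needs
       * it, so untrimmed wins; absorbing a trimmed neighbor is the price
       * paid for better coalescing. */
      static enum btrfs_trim_state merged_trim_state(enum btrfs_trim_state a,
                                                     enum btrfs_trim_state b)
      {
              if (a == BTRFS_TRIM_STATE_UNTRIMMED ||
                  b == BTRFS_TRIM_STATE_UNTRIMMED)
                      return BTRFS_TRIM_STATE_UNTRIMMED;
              return BTRFS_TRIM_STATE_TRIMMED;
      }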
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  21. btrfs: rename DISCARD opt to DISCARD_SYNC

    This series introduces async discard which will use the flag
    DISCARD_ASYNC, so rename the original flag to DISCARD_SYNC as it is
    synchronously done in transaction commit.
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
    dennisszhou authored and 0day robot committed Nov 21, 2019
  22. bitmap: genericize percpu bitmap region iterators

    Bitmaps are fairly popular for their space efficiency, but we don't have
    generic iterators available. Make percpu's bitmap region iterators
    available to everyone.
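
    Roughly what the generic iterator looks like, built on find_next_bit()
    and find_next_zero_bit() (sketched from the description, not quoted
    verbatim from the patch):

      static inline void bitmap_next_set_region(unsigned long *bitmap,
                                                unsigned int *rs,
                                                unsigned int *re,
                                                unsigned int end)
      {
              *rs = find_next_bit(bitmap, end, *rs);
              *re = find_next_zero_bit(bitmap, end, *rs + 1);
      }

      /* Walk [start, end) one contiguous set region [rs, re) at a time. */
      #define bitmap_for_each_set_region(bitmap, rs, re, start, end)       \
              for ((rs) = (start),                                         \
                   bitmap_next_set_region((bitmap), &(rs), &(re), (end));  \
                   (rs) < (re);                                            \
                   (rs) = (re) + 1,                                        \
                   bitmap_next_set_region((bitmap), &(rs), &(re), (end)))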
    
    Signed-off-by: Dennis Zhou <dennis@kernel.org>
    Reviewed-by: Josef Bacik <josef@toxicpanda.com>
    dennisszhou authored and 0day robot committed Nov 21, 2019

Commits on Nov 18, 2019

  1. btrfs: drop bdev argument from submit_extent_page

    After previous patches removed the bdev that was being passed around
    just to set it on the bio, it has become unused in submit_extent_page.
    So it now has "only" 13 parameters.
    
    Signed-off-by: David Sterba <dsterba@suse.com>
    kdave committed Nov 18, 2019
  2. btrfs: remove extent_map::bdev

    We can now remove the bdev from extent_map. Previous patches made sure
    that bio_set_dev is called correctly in all places and that we don't
    need to grab it from latest_bdev or pass it around inside the extent
    map.
    
    Signed-off-by: David Sterba <dsterba@suse.com>
    kdave committed Nov 18, 2019
  3. btrfs: drop bio_set_dev where not needed

    bio_set_dev sets a bdev on a bio and is not only setting a pointer but
    also changing some state bits if there was a different bdev set before.
    This is one thing that's not needed.
    
    Another thing is that setting a bdev at bio allocation time is too early
    and actually does not work with plain redundancy profiles, where each
    time we submit a bio to a device, the bdev is set correctly.
    
    In many places the bio bdev is set to latest_bdev that seems to serve as
    a stub pointer "just to put something to bio". But we don't have to do
    that.
    
    Where do we know which bdev to set:
    
    * for regular IO: submit_stripe_bio that's called by btrfs_map_bio
    
    * repair IO: repair_io_failure, read or write from specific device
    
    * super block write (using buffer_heads but uses raw bdev) and barriers
    
    * scrub: this does not use all regular IO paths as it needs to reach all
      copies, verify and fixup eventually, and for that all bdev management
      is independent
    
    * raid56: rbio_add_io_page, for the RMW write
    
    * integrity-checker: does its own low-level block tracking
    
    Signed-off-by: David Sterba <dsterba@suse.com>
    kdave committed Nov 18, 2019
  4. btrfs: get bdev directly from fs_devices in submit_extent_page

    This is a preparatory patch to remove the @bdev parameter from
    submit_extent_page. It can't be removed completely, because the cgroups
    need it for wbc when initializing the bio:
    
    wbc_init_bio
      bio_associate_blkg_from_css
        dereference bdev->bi_disk->queue
    
    The bdev pointer is the same as latest_bdev, thus no functional change.
    We can retrieve it from fs_devices that's reachable through several
    dereferences. The local variable shadows the parameter, but that's only
    temporary.
    
    Signed-off-by: David Sterba <dsterba@suse.com>
    kdave committed Nov 18, 2019
  5. btrfs: record all roots for rename exchange on a subvol

    Testing with the new fsstress support for subvolumes uncovered a pretty
    bad problem with rename exchange on subvolumes.  We're modifying two
    different subvolumes, but we only start the transaction on one of them,
    so the other one is not added to the dirty root list.  This is caught by
    btrfs_cow_block() with a warning because the root has not been updated,
    however if we do not modify this root again we'll end up pointing at an
    invalid root because the root item is never updated.
    
    Fix this by making sure we add the destination root to the trans list,
    the same as we do with normal renames.  This fixes the corruption.
    
    Fixes: cdd1fed ("btrfs: add support for RENAME_EXCHANGE and RENAME_WHITEOUT")
    CC: stable@vger.kernel.org # 4.9+
    Reviewed-by: Filipe Manana <fdmanana@suse.com>
    Signed-off-by: Josef Bacik <josef@toxicpanda.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    josefbacik authored and kdave committed Nov 18, 2019
  6. Btrfs: fix block group remaining RO forever after error during device…

    … replace
    
    When doing a device replace, while at scrub.c:scrub_enumerate_chunks(), we
    set the block group to RO mode and then wait for any ongoing writes into
    extents of the block group to complete. While doing that wait we overwrite
    the value of the variable 'ret' and can break out of the loop if an error
    happens without turning the block group back into RW mode. So what happens
    is the following:
    
    1) btrfs_inc_block_group_ro() returns 0, meaning it set the block group
       to RO mode (its ->ro field set to 1 or incremented to some value > 1);
    
    2) Then btrfs_wait_ordered_roots() returns a value > 0;
    
    3) Then if either joining or committing the transaction fails, we break
       out of the loop without calling btrfs_dec_block_group_ro(), leaving
       the block group in RO mode forever.
    
    To fix this, just remove the code that waits for ongoing writes to extents
    of the block group, since it's not needed because in the initial setup
    phase of a device replace operation, before starting to find all chunks
    and their extents, we set the target device for replace while holding
    fs_info->dev_replace->rwsem, which ensures that after releasing that
    semaphore, any writes into the source device are made to the target device
    as well (__btrfs_map_block() guarantees that). So while at
    scrub_enumerate_chunks() we only need to worry about finding and copying
    extents (from the source device to the target device) that were written
    before we started the device replace operation.
    
    Fixes: f0e9b7d ("Btrfs: fix race setting block group readonly during device replace")
    Signed-off-by: Filipe Manana <fdmanana@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    fdmanana authored and kdave committed Nov 18, 2019
  7. btrfs: scrub: Don't check free space before marking a block group RO

    [BUG]
    When running btrfs/072 with only one online CPU, it has a pretty high
    chance to fail:
    
      btrfs/072 12s ... _check_dmesg: something found in dmesg (see xfstests-dev/results//btrfs/072.dmesg)
      - output mismatch (see xfstests-dev/results//btrfs/072.out.bad)
          --- tests/btrfs/072.out     2019-10-22 15:18:14.008965340 +0800
          +++ /xfstests-dev/results//btrfs/072.out.bad      2019-11-14 15:56:45.877152240 +0800
          @@ -1,2 +1,3 @@
           QA output created by 072
           Silence is golden
          +Scrub find errors in "-m dup -d single" test
          ...
    
    And with the following call trace:
    
      BTRFS info (device dm-5): scrub: started on devid 1
      ------------[ cut here ]------------
      BTRFS: Transaction aborted (error -27)
      WARNING: CPU: 0 PID: 55087 at fs/btrfs/block-group.c:1890 btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
      CPU: 0 PID: 55087 Comm: btrfs Tainted: G        W  O      5.4.0-rc1-custom+ torvalds#13
      Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
      RIP: 0010:btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
      Call Trace:
       __btrfs_end_transaction+0xdb/0x310 [btrfs]
       btrfs_end_transaction+0x10/0x20 [btrfs]
       btrfs_inc_block_group_ro+0x1c9/0x210 [btrfs]
       scrub_enumerate_chunks+0x264/0x940 [btrfs]
       btrfs_scrub_dev+0x45c/0x8f0 [btrfs]
       btrfs_ioctl+0x31a1/0x3fb0 [btrfs]
       do_vfs_ioctl+0x636/0xaa0
       ksys_ioctl+0x67/0x90
       __x64_sys_ioctl+0x43/0x50
       do_syscall_64+0x79/0xe0
       entry_SYSCALL_64_after_hwframe+0x49/0xbe
      ---[ end trace 166c865cec7688e7 ]---
    
    [CAUSE]
    The error number -27 is -EFBIG, returned from the following call chain:
    btrfs_end_transaction()
    |- __btrfs_end_transaction()
       |- btrfs_create_pending_block_groups()
          |- btrfs_finish_chunk_alloc()
             |- btrfs_add_system_chunk()
    
    This happens because we have used up all space of
    btrfs_super_block::sys_chunk_array.
    
    The root cause is, we have the following bad loop of creating tons of
    system chunks:
    
    1. The only SYSTEM chunk is being scrubbed
       It's very common to have only one SYSTEM chunk.
    2. A new SYSTEM bg will be allocated
       btrfs_inc_block_group_ro() checks if we have enough space after
       marking the current bg RO. If not, it allocates a new chunk.
    3. The new SYSTEM bg is still empty, so it will be reclaimed
       During the reclaim, we will mark it RO again.
    4. That newly allocated empty SYSTEM bg gets scrubbed
       We go back to step 2, as the bg is already marked RO but not yet
       cleaned up.
    
    If the cleaner kthread doesn't get executed fast enough (e.g. only one
    CPU), then we will get more and more empty SYSTEM chunks, using up all
    the space of btrfs_super_block::sys_chunk_array.
    
    [FIX]
    Since scrub/dev-replace doesn't always need to allocate new extents,
    especially chunk tree extents, we don't really need to do chunk
    pre-allocation.

    To break the above spiral, we introduce a new parameter to
    btrfs_inc_block_group_ro(), @do_chunk_alloc, which indicates whether we
    need extra chunk pre-allocation.

    For relocation, we pass @do_chunk_alloc=true, while for scrub, we pass
    @do_chunk_alloc=false.
    This should keep unnecessary empty chunks from popping up for scrub.

    Also, since there are now two parameters for btrfs_inc_block_group_ro(),
    add more comments for it.
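
    The changed calling convention, sketched (the struct naming follows the
    rename elsewhere in this series; variable names are illustrative):

      int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
                                   bool do_chunk_alloc);

      /* relocation: may need a fresh chunk to migrate data into */
      ret = btrfs_inc_block_group_ro(rc->block_group, true);

      /* scrub: only reads and copies, no chunk pre-allocation needed */
      ret = btrfs_inc_block_group_ro(cache, false);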
    
    Reviewed-by: Filipe Manana <fdmanana@suse.com>
    Signed-off-by: Qu Wenruo <wqu@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    adam900710 authored and kdave committed Nov 18, 2019
  8. btrfs: change btrfs_fs_devices::rotating to bool

    struct btrfs_fs_devices::rotating currently is declared as an integer
    variable but only used as a boolean.
    
    Change the variable definition to bool and update the code touching it
    to set 'true' and 'false'.
    
    Reviewed-by: Qu Wenruo <wqu@suse.com>
    Reviewed-by: Anand Jain <anand.jain@oracle.com>
    Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
    Reviewed-by: David Sterba <dsterba@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    Johannes Thumshirn authored and kdave committed Nov 18, 2019
  9. btrfs: change btrfs_fs_devices::seeding to bool

    struct btrfs_fs_devices::seeding currently is declared as an integer
    variable but only used as a boolean.
    
    Change the variable definition to bool and update the code touching it
    to set 'true' and 'false'.
    
    Reviewed-by: Qu Wenruo <wqu@suse.com>
    Reviewed-by: Anand Jain <anand.jain@oracle.com>
    Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
    Reviewed-by: David Sterba <dsterba@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    Johannes Thumshirn authored and kdave committed Nov 18, 2019
  10. btrfs: rename btrfs_block_group_cache

    The type name is misleading, a single entry is named 'cache' while this
    normally means a collection of objects. Rename that everywhere. Also the
    identifier was quite long, making function prototypes harder to format.
    
    Suggested-by: Nikolay Borisov <nborisov@suse.com>
    Reviewed-by: Qu Wenruo <wqu@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    kdave committed Nov 18, 2019
  11. btrfs: block-group: Reuse the item key from caller of read_one_block_…

    …group()
    
    For read_one_block_group(), its only caller has already got the item key
    needed to search for the next block group item.

    So we can use that key directly without doing our own conversion on the
    stack.

    Also, since the key used in btrfs_read_block_groups() is vital for the
    block group item search, add the 'const' keyword to that parameter to
    prevent read_one_block_group() from modifying it.
    
    Signed-off-by: Qu Wenruo <wqu@suse.com>
    Reviewed-by: David Sterba <dsterba@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    adam900710 authored and kdave committed Nov 18, 2019
  12. btrfs: block-group: Refactor btrfs_read_block_groups()

    Refactor the work inside the loop of btrfs_read_block_groups() into a
    separate function, read_one_block_group().

    This allows read_one_block_group() to be reused for the later BG_TREE
    feature.

    The refactor includes the following extra fix:
    - Use btrfs_fs_incompat() to replace an open-coded feature check
    
    Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
    Reviewed-by: Anand Jain <anand.jain@oracle.com>
    Signed-off-by: Qu Wenruo <wqu@suse.com>
    Reviewed-by: David Sterba <dsterba@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    adam900710 authored and kdave committed Nov 18, 2019