
Sending/receiving incremental snapshots failing after bees #79

Closed
Massimo-B opened this issue Oct 10, 2018 · 28 comments

@Massimo-B commented Oct 10, 2018

Hi, I can't provide a precise error description. I'd like to discuss the impact of bees on sending/receiving incremental snapshots.

I have 3 btrfs filesystems: one main filesystem on bcache and 2 USB btrfs filesystems as "backups" receiving incremental snapshots via btrbk. (Please don't question the suitability of one btrfs as a backup for another btrfs... you need to trust btrfs a lot to do that.)

In the best case all received snapshots are incremental, so a lot is already de-duplicated. But one of the 2 backup filesystems is a mobile device that I use for backups of different machines and subvolumes. Each subvolume line is incremental, but de-duplication is still beneficial across machines, as most of them have almost identical installations.

After bees had finished on all 3 filesystems and I started another incremental backup, I expected the difference to be big because of the many changed extents on the source. But there are many errors on receiving:

ERROR: send ioctl failed with -5: Input/output error
ERROR: unexpected EOF in stream

And in the syslog I see many lines like

BTRFS error (device dm-0): Send: inconsistent snapshot, found updated extent for inode 38464 without updated inode item, send root is 1977, parent root is 1970

Here is the complete btrbk stdout:

# btrbk --progress run default
Creating backup: /mnt/usb/data/snapshots/root/root.20181009T072639+0200
 611KiB 0:00:02 [ 289KiB/s] [ 289KiB/s]
WARNING: [send/receive] (send=/mnt/btrfs-top-lvl/snapshots/root/root.20181009T072639+0200, receive=/mnt/usb/data/snapshots/root) At subvol /mnt/btrfs-top-lvl/snapshots/root/root.20181009T072639+0200
WARNING: [send/receive] (send=/mnt/btrfs-top-lvl/snapshots/root/root.20181009T072639+0200, receive=/mnt/usb/data/snapshots/root) ERROR: send ioctl failed with -5: Input/output error
WARNING: [send/receive] (send=/mnt/btrfs-top-lvl/snapshots/root/root.20181009T072639+0200, receive=/mnt/usb/data/snapshots/root) ERROR: unexpected EOF in stream
WARNING: [send/receive] (send=/mnt/btrfs-top-lvl/snapshots/root/root.20181009T072639+0200, receive=/mnt/usb/data/snapshots/root) At snapshot root.20181009T072639+0200
ERROR: Failed to send/receive btrfs subvolume: /mnt/btrfs-top-lvl/snapshots/root/root.20181009T072639+0200 [/mnt/btrfs-top-lvl/snapshots/root/root.20181004T112940+0200] -> /mnt/usb/data/snapshots/root
WARNING: Deleted partially received (garbled) subvolume: /mnt/usb/data/snapshots/root/root.20181009T072639+0200
ERROR: Error while resuming backups, aborting
Creating backup: /mnt/usb/mobiledata/snapshots/bm73/root/root.20181009T072639+0200
^C
ERROR: Cought SIGINT, dumping transaction log:
localtime type status target_url source_url parent_url message
2018-10-09T13:09:46+0200 startup v0.27.0-dev - - - # btrbk command line client, version 0.27.0-dev
2018-10-09T13:09:46+0200 snapshot starting /mnt/btrfs-top-lvl/snapshots/root/root.20181009T130946+0200 /mnt/btrfs-top-lvl/root - -
2018-10-09T13:09:47+0200 snapshot success /mnt/btrfs-top-lvl/snapshots/root/root.20181009T130946+0200 /mnt/btrfs-top-lvl/root - -
2018-10-09T13:09:47+0200 snapshot starting /mnt/btrfs-top-lvl/snapshots/home/home.20181009T130946+0200 /mnt/btrfs-top-lvl/home - -
2018-10-09T13:09:48+0200 snapshot success /mnt/btrfs-top-lvl/snapshots/home/home.20181009T130946+0200 /mnt/btrfs-top-lvl/home - -
2018-10-09T13:09:48+0200 send-receive starting /mnt/usb/data/snapshots/root/root.20181009T072639+0200 /mnt/btrfs-top-lvl/snapshots/root/root.20181009T072639+0200 /mnt/btrfs-top-lvl/snapshots/root/root.20181004T112940+0200 -
2018-10-09T13:11:23+0200 send-receive ERROR /mnt/usb/data/snapshots/root/root.20181009T072639+0200 /mnt/btrfs-top-lvl/snapshots/root/root.20181009T072639+0200 /mnt/btrfs-top-lvl/snapshots/root/root.20181004T112940+0200 -
2018-10-09T13:11:23+0200 delete_garbled starting /mnt/usb/data/snapshots/root/root.20181009T072639+0200 - - -
2018-10-09T13:11:39+0200 delete_garbled success /mnt/usb/data/snapshots/root/root.20181009T072639+0200 - - -
2018-10-09T13:11:39+0200 abort_target ABORT /mnt/usb/data/snapshots/root - - # Failed to send/receive subvolume
2018-10-09T13:11:39+0200 send-receive starting /mnt/usb/mobiledata/snapshots/bm73/root/root.20181009T072639+0200 /mnt/btrfs-top-lvl/snapshots/root/root.20181009T072639+0200 /mnt/btrfs-top-lvl/snapshots/root/root.20181005T130053+0200 -
2018-10-09T13:11:40+0200 signal SIGINT - - - -
mbuffer: warning: error during output to <stdout>: canceled

I have now started another run with "incremental no", which works but transfers all snapshots as full copies; afterwards bees will have a big task de-duplicating them again. I did not have the courage to run bees in the background during send/receive. I will check whether subsequent runs of btrbk, bees, btrbk, bees lead to stable incremental snapshotting.

@Zygo (Owner) commented Oct 11, 2018

This looks like a duplicate of issue #65.

One thing that isn't clear to me: does this send error occur if bees is running during the send, or has run at some time between two sends? If it's the first one, it's probably one of a cluster of send bugs that needs to be fixed in the kernel. If it's the second one, probably the only thing we can do is make bees aggressively ignore subvols used for incremental send somehow (maybe look for the UUID properties? How could we detect such things?)

@Massimo-B (Author) commented Oct 11, 2018

I sent before bees, then ran bees, and then sent another one incrementally without bees running.
I have now done non-incremental transfers, which transferred tons of GiB for each snapshot instead. I'm going to re-test the incremental send after bees has finished with the new GiB on the target btrfs.

Ignoring snapshots would be a big disadvantage for me, as I only have snapshots on those targets. Let's say I snapshot the subvols "root", "home" and "data" on machines A, B and C. Then the home snapshots of machine A are already incremental or deduplicated, while within a single snapshot there could still be duplicates. But home of A and home of B are almost identical yet not deduplicated at all, as btrbk can only reference the incremental parent within the A-line or B-line of snapshots.

@Zygo (Owner) commented Oct 11, 2018

Snapshots on the receiver side don't look like they would be a problem--as far as I can tell from the error messages, it is the sending side that has the problems. The receiver could dedupe all the snapshots it wants.

If the sender-side snapshots are rotated then bees can ignore them, but still make changes to non-readonly snapshots. That would eventually free space when the snapshots are rotated out (though it would consume space, possibly significant amounts of space, in the meantime).

In other news, the error message comes just after this comment:

                         * We may have found an extent item that has changed
                         * only its disk_bytenr field and the corresponding
                         * inode item was not updated. This case happens due to
                         * very specific timings during relocation when a leaf
                         * that contains file extent items is COWed while
                         * relocation is ongoing and its in the stage where it
                         * updates data pointers. So when this happens we can
                         * safely ignore it since we know it's the same extent,
                         * but just at different logical and physical locations
                         * (when an extent is fully replaced with a new one, we
                         * know the generation number must have changed too,
                         * since snapshot creation implies committing the current
                         * transaction, and the inode item must have been updated
                         * as well).
                         * This replacement of the disk_bytenr happens at
                         * relocation.c:replace_file_extents() through
                         * relocation.c:btrfs_reloc_cow_block().
                         */

This comment is straight-up wrong: during dedupe, the generation number of the extent data may or may not change. You get extent data in a subvol page with a new transid, but the transid in the extent data matches the other extent item's transid (which may be older or newer than the original snapshot). For sends it shouldn't matter, because the receiver has the same data in its copy of the old extent.

On the other hand, I have no idea how the send code could possibly reach the point where it produces that log message, so maybe something very much wrong happened long before we got here?

@Zygo (Owner) commented Oct 11, 2018

On the other hand, I have no idea how the send code could possibly reach

Oh wait yes I do, I read the caller's code wrong...

OK, my guess is we can delete that comment and the 'if' statement that follows, but we also have to go back to the previous inode record and emit something in the send stream to prepare the receiver for new extents. I don't know how send works well enough to be able to make that change.

@Zygo (Owner) commented Oct 11, 2018

Or...maybe dedupe can increment the inode's generation number (I thought it already did that?) which would make send do the right thing.

@Zygo (Owner) commented Oct 11, 2018

Nope, dedupe does increment the inode's generation number (and nothing else in the dedupe case): btrfs_extent_same_range calls btrfs_clone calls clone_finish_inode_update. So that's not the way to fix it.

@kakra (Contributor) commented Oct 11, 2018

Let's conclude then: since the problem is on the sender side only, we could simply start by letting bees ignore read-only subvols, right? A subvol to be sent is usually a read-only snapshot, and it theoretically should have no dedupe candidates anyway (at least not shortly after creation). After the next send, the older "parent" could be rotated out, eliminating the need to scan it for dupes at all.

Then the solution is to make sure that btrfs-send only operates on non-mutating data (that is, only on read-only snapshots, which is probably already how btrbk works but that's guessing without having ever looked at the source).

BTW: Whatever I wrote, what I meant is not using a read-only subvol as a dedup dst... It's still fine the other way (using it as a dedup src), right?

@Zygo (Owner) commented Oct 11, 2018

As far as I know, if we don't modify a send parent snapshot, send will work OK. The tricky part at the moment is finding a way to avoid doing that in bees.

Current-bees can optionally "fail" to open read-only subvols (it would be the wrong thing to do on the receive side but the right thing to do on the send side, and there's no reliable automatic way to detect that, so we need an option). This would mean that a read-only subvol isn't scanned, isn't a dedup src, and isn't a dedup dst. Any references to the data through another read-write subvol would be scanned and could be a dedup src or a dedup dst, but any dedupe that occurs would not propagate into the read-only subvol.
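
(For what it's worth, detecting a read-only subvol is cheap. Below is a minimal C sketch of the kind of check such an option could use, via the BTRFS_IOC_SUBVOL_GETFLAGS ioctl. This is illustrative only, not bees' actual code.)

```c
/* Minimal sketch: report whether a subvolume is read-only.
 * Illustrative only -- not taken from bees. Assumes `path` is the
 * root directory of a btrfs subvolume.
 */
#include <fcntl.h>
#include <linux/btrfs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int subvol_is_readonly(const char *path)
{
        __u64 flags = 0;
        int fd = open(path, O_RDONLY | O_DIRECTORY);
        if (fd < 0)
                return -1;
        if (ioctl(fd, BTRFS_IOC_SUBVOL_GETFLAGS, &flags) < 0) {
                close(fd);
                return -1;
        }
        close(fd);
        return (flags & BTRFS_SUBVOL_RDONLY) ? 1 : 0;
}

int main(int argc, char **argv)
{
        int ro = subvol_is_readonly(argc > 1 ? argv[1] : ".");
        if (ro < 0)
                perror("subvol_is_readonly");
        else
                printf("read-only: %s\n", ro ? "yes" : "no");
        return 0;
}
```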

Current-bees does dedup too early and doesn't consider the implications of disposing--or failing to dispose--of all the references to an extent (such as whether some of the extent references might be immutable, or whether the dedup space saving is worth the overhead). That makes it difficult to implement any rules that concern what happens with all references to an extent. The current code makes a decision about the first duplicate reference to an extent it finds, and then tries to fix the broken extent it creates after the fact.

Future-bees analyzes all references to an extent at once, then makes a plan to dispose of the entire extent based on the analysis. So we will get more cases to consider, like "some of the references to this extent are read-only" and "some of the blocks in this extent are unreachable if and only if the read-only references are not considered." We'll know that before we start deduping an extent, so we can make better decisions (maybe even decide to do nothing at all). The analyzer can recommend an extent be used as dedup src or dedup dst, or replaced with a copy--and order potential extent disposal plans by assorted metrics, e.g. compression preferences, defrag rules, block reachability, or whether the extent refs are mutable.

With the extent analyzer it's easy to implement bees config options like "reject all the plans that involve using a read-only extent ref as dedup dst" and even throw in rules like "...and that save less than 8K of space in an extent" or "...and that replace a zstd-compressed extent with a zlib-compressed one." The analyzer simply keeps producing plans in decreasing preference order (which could also be configurable) until it hits one that its configuration filters allow it to execute.
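
To make the filter idea concrete, here is a purely hypothetical C sketch of the "plan + filter" scheme; none of these types or functions exist in bees, they just model the example rules above:

```c
/* Purely hypothetical illustration of the future-bees "plan + filter"
 * idea described above. Nothing here is real bees code; it only models
 * the example rules: no RO extent ref as dedupe dst, and skip plans
 * that save less than 8 KiB.
 */
#include <stdbool.h>
#include <stddef.h>

enum plan_kind {
        PLAN_DEDUPE,    /* replace refs to this extent with refs to another */
        PLAN_COPY,      /* copy unique blocks out, then dedupe the rest */
        PLAN_NOOP,      /* leave the extent alone */
};

struct extent_plan {
        enum plan_kind kind;
        unsigned long long bytes_saved;   /* net space freed if executed */
        bool dst_ref_is_readonly;         /* plan would modify a RO subvol ref */
};

/* Configurable filter: reject plans that touch RO refs or save too little. */
bool plan_allowed(const struct extent_plan *p)
{
        if (p->kind == PLAN_NOOP)
                return true;
        if (p->dst_ref_is_readonly)
                return false;
        if (p->bytes_saved < 8 * 1024)
                return false;
        return true;
}

/* The analyzer would emit plans in decreasing preference order and
 * execute the first one its filters allow (possibly PLAN_NOOP). */
const struct extent_plan *pick_plan(const struct extent_plan *plans, size_t n)
{
        for (size_t i = 0; i < n; i++)
                if (plan_allowed(&plans[i]))
                        return &plans[i];
        return NULL;
}
```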

@kakra (Contributor) commented Oct 11, 2018

I already imagined that current-bees has this problem and cannot make that decision... But I get what the grand plan is, and it sounds good to me. An implementation of a decide-before-dedupe algorithm brings great potential for optimizations, I think, because we could also check the cost factor of introduced fragmentation...

@Massimo-B (Author) commented Oct 31, 2018

The current workflow for snapshots is not acceptable. Incremental snapshot send/receive is not possible. First it takes a day to transfer the full snapshots to my 2 backup devices, blowing up the used space there, and then it takes another week or two for 3 bees instances to finish and shrink everything back to deduped size.

Reading this very interesting discussion, it could be helpful to add options for such an analyzer in future-bees, ignoring read-only snapshots or something similar. I hope that would not skip too many duplicates, as the snapshots should already be deduplicated by design, as kakra said.

However, I have had no failures in the meantime, just the very long send/receive and the new bees run after it, which keeps running around the clock at high load (still usable thanks to --loadavg-target).

@Massimo-B (Author) commented Nov 8, 2018

Currently I'm running bees only on the targets receiving incremental snapshots. You're right, bees on the receiving side does not break the incremental snapshots; it seems to only be an issue on the sender side.

However, when receiving incremental snapshots I would expect almost no duplicates compared to the other snapshots, though of course there can be some. Here I see no way to improve; all new incremental writes need to be scanned. Did I get you right that you plan to add an option that makes bees leave (ignore) RO snapshots untouched, and that I would need to use that switch on the sending side only? On the receiving side the switch wouldn't be useful, as there are only RO snapshots there, which still need to be scanned.

@Zygo (Owner) commented Nov 8, 2018

The short-term plan is to add a switch that pretends all RO snapshots don't exist. bees would only dedupe data in RW subvols. The switch is to be turned on if you're using btrfs send incrementally anywhere on the filesystem, off in all other cases.

@Massimo-B (Author) commented Nov 21, 2018

bees confused the incremental btrfs send when it touched the RO snapshots for the first time, but after that it would never touch them again. So after the initial btrfs send confusion I had no more conflicts while running bees and incremental sends. That made me wonder whether I actually need that switch at all: for the very first bees run I would like to get all RO snapshots deduped as well, and later, after the huge full-copy sends, I had no issues anymore.

But: I'm not sure whether new RO snapshots could potentially be touched by bees, since they are always reflinks to RW data. Maybe if snapshots taken before the next bees run keep data that has meanwhile disappeared, it could happen again that bees touches RO snapshots that have already been sent to the target.

So this workflow could be more reasonable:

  • Full bees run on sources (sender) and targets (receiver)
  • Full copy send/receive
  • Small bees run skipping RO snapshots on the sources
  • Full bees run on the targets as usual
  • Going on with incremental copy send/receive

@Zygo (Owner) commented Nov 21, 2018

I implemented a --workaround-btrfs-send switch which does the following:

  • silently discard dedupe requests with dst on a RO subvol
  • skip crawls on RO subvols

This still allows RO subvols to be dedupe src, as long as the data was scanned through a RW subvol. You'll still see some dedupes in the logs using RO snapshots as src, but never dst.

With the workaround, the generation number of RO subvols never changes (visible with e.g. btrfs sub list -a) so it is compatible with btrfs send.

Without the workaround, "old" RO subvols can still be changed if both conditions are true:

  • The RO subvol shares an extent with a RW subvol
  • Some blocks in the shared extent are duplicate and some are not duplicate

In this case, bees will normally replace all references to the shared extent, including RO subvol references, so this can still break incremental send. The --workaround-btrfs-send option prevents the replacement of the RO subvol references.

@Zygo (Owner) commented Nov 21, 2018

Here is the impact of the workaround on a small test filesystem with 3 subvols: a primary, 1 rw snapshot and 1 ro snapshot. The red line is with workaround, the green line is without. At the end of the test, the ro and rw snapshots are deleted, leaving only the original subvol.

[graph: 00-all-df-summary]

Note that the workaround dramatically increases the amount of temporary space required; however, it also skips the entire scan of the ro subvol. When the RO subvol is deleted, all the temporary space comes back at once.

@Massimo-B (Author) commented Nov 22, 2018

I implemented a --workaround-btrfs-send switch which does the following:

Thanks, a big name for that option. If you plan further filter rules as you mentioned before, some --filter=rosub or something would be nice.

* skip crawls on RO subvols

What does "crawl" mean here? Would new RO subvols still be read and used as a src?

Without the workaround, "old" RO subvols can still be changed if both conditions are true:

* The RO subvol shares an extent with a RW subvol

* Some blocks in the shared extent are duplicate and some are not duplicate

In this case, bees will normally replace all references to the shared extent, including RO subvol references, ...

I don't understand the second condition. If the extent is shared, how can some blocks be different? I'm missing some basic btrfs knowledge...

@Massimo-B (Author) commented Nov 22, 2018

Note that the workaround dramatically increases the amount of temporary space required; however, it also skips the entire scan of the ro subvol. When the RO subvol is deleted, all the temporary space comes back at once.

Without the workaround, the freed space crosses the 0-line after a while. This is what I experienced: bees first increases the usage a bit and later shrinks it. However, I'm using --scan-mode 2, which increases the temporary usage a bit.
Now with the workaround you mean that, while the snapshot is kept, the usage will increase a lot and will not be freed until subvolume deletion? Why is that? Why is temporary usage increasing at all?

Reading the how-it-works doc, I found:

...attempt to match all other blocks in the newer extent with blocks in the older extent...If this is not possible, then bees will create a temporary copy of the unmatched data in the new extent so that the entire new extent can be removed by deduplication. This must be done because btrfs cannot partially overwrite extents--the entire extent must be replaced.

Somewhere in there is the explanation. So after the temporary copy, it cannot deduplicate that data when skipping the RO subvols? Then there is no benefit in using bees with RO snapshots if the usage increases instead of shrinking?

@Zygo (Owner) commented Nov 22, 2018

The benefit is delayed with RO snapshots until after the RO snapshot is deleted. Normal usage of btrfs incremental send deletes the RO snapshots as soon as they aren't needed any more. This is the vertical line at the end of the graph lines using the workaround (-a):

[graph: 00-all-df-summary]

When the RO snapshot is deleted, all the space comes back at once.

If the workaround is not used, the space comes back gradually as each subvol is scanned (except for -m2, where it appears some space is lost forever).

@Zygo (Owner) commented Nov 22, 2018

If the extent is shared, how can some blocks be different?

I explained that before, but maybe I can try again and be a little clearer (if it is, maybe we can move this text into the how-it-works page):

Consider the case where a scanned extent contains some duplicate blocks and some unique blocks. bees wants to remove the duplicate blocks by replacing references to the duplicate blocks in the scanned extent with references to identical blocks in some extent stored in the hash table.

btrfs will keep the entire extent allocated until all references to any block in the extent are removed, so it is not sufficient to replace only the duplicate blocks. The unique blocks can't be left behind. bees will create a temporary extent with a copy of the unique blocks, then use the temporary extent as dedup src for the unique blocks. This is the "copy" message that appears in the logs, usually followed by a series of "dedup" messages replacing all the references to the unique data.

Ideally bees would handle all the duplicate block references at the same time too, but this requires features that are not available in kernels before 4.14, and also a rewrite of the bees scan code to avoid duplicating a lot of work. bees does currently do a lot of duplicate work for the unique blocks, but there are many fewer mixed duplicate+unique extents than fully-duplicate extents, so it doesn't hurt performance as much.

If all this is successful, this combination of dedupe and copy removes the entire scanned extent, and space is saved.

The workaround breaks the "remove unique blocks" part of this process. The scanned extents accumulate because bees isn't allowed to remove references if the reference is in a RO subvol. bees continues to process all the RW references as normal, so the scanned extent is fully deduped in all RW subvols. When the RO subvols are deleted, all the unique block references disappear, and space for both the temporary copies and deduped duplicate blocks becomes available.
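
Aside, for readers unfamiliar with the mechanism: each "dedup" step is ultimately a range-dedupe ioctl that asks the kernel to replace a file range with a shared reference to identical bytes elsewhere, and the kernel verifies the two ranges match before doing so. A minimal stand-alone sketch using the generic FIDEDUPERANGE interface (not bees' actual code, which drives the same kernel facility through its own wrappers) looks like this:

```c
/* Minimal sketch of a single range dedupe: ask the kernel to replace
 * the bytes at dst_off in dst_fd with a shared reference to identical
 * bytes at src_off in src_fd. The kernel compares the ranges itself.
 * Illustrative only -- not bees' actual code.
 */
#include <linux/fs.h>
#include <stdlib.h>
#include <sys/ioctl.h>

int dedupe_range(int src_fd, __u64 src_off,
                 int dst_fd, __u64 dst_off, __u64 len)
{
        struct file_dedupe_range *args;
        int ret = -1;

        args = calloc(1, sizeof(*args) + sizeof(struct file_dedupe_range_info));
        if (!args)
                return -1;

        args->src_offset = src_off;
        args->src_length = len;
        args->dest_count = 1;
        args->info[0].dest_fd = dst_fd;
        args->info[0].dest_offset = dst_off;

        if (ioctl(src_fd, FIDEDUPERANGE, args) == 0 &&
            args->info[0].status == FILE_DEDUPE_RANGE_SAME)
                ret = 0;    /* dst range now shares the src extent's blocks */

        free(args);
        return ret;
}
```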

@Zygo (Owner) commented Nov 22, 2018

Somewhere there is the explanation. So after the temporary copy, it cannot deduplicate that when skipping the RO subs?

It cannot deduplicate anything referenced by a RO subvol; on top of the dedupes that now save nothing, the temporary copies consume additional space.

Then there is no benefit of using bees with ro snapshots if the usage is increasing instead of shrinking?

This is why I call this thing a "workaround" instead of a "feature." The real fix is to make the kernel's btrfs send code not lose its mind when dedupe touches a parent snapshot. Anything less is going to suck.

@Zygo (Owner) commented Nov 22, 2018

If you plan further filter rules as you mentioned before, some --filter=rosub or something would be nice.

The filtering engine will probably split this functionality across a few different features, e.g. separately avoiding changes to RO subvols, and turning off copies for extents referenced by RO subvols, so the freed space stays at zero until the RO subvol is deleted, then you only get full-extent dedupes like what you get from dduper or duperemove. The --workaround-btrfs-send option would become a shorthand for enabling a similar-behaving combination of filters.

@automorphism88 commented Jun 8, 2019

It looks like some send/dedupe-related fixes are in kernel 5.2: http://lkml.iu.edu/hypermail/linux/kernel/1905.0/03943.html

@Zygo (Owner) commented Jun 8, 2019

The fix in torvalds/linux@62d54f3 prevents the send from failing (or worse) if dedupe is running at the same time.

bees will get an EAGAIN error from dedupe if there is a send in progress, but bees doesn't yet do any of the "rescheduling" proposed in the mailing list discussion behind the patch. This generates a lot of noise in the log, and bees currently just skips the extents. If the snapshot is deleted after the btrfs send, then we didn't need to dedupe it anyway; however, if the snapshot is not deleted after the btrfs send, the duplicate extent references that are skipped during send will not ever be deduped.

Ideally bees would pause the subvol crawler when we see EAGAIN in dedupe, and unpause when we restart crawlers (at 10-transid intervals). If the send is still in progress, the crawler will be immediately paused again. Presumably at some point the send will be done, and the crawler can finish deduping the subvol.
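
A hypothetical sketch of that pause-and-retry flow, with made-up names purely to illustrate the control flow (nothing below exists in bees):

```c
/* Hypothetical sketch of the "pause crawler on EAGAIN" idea described
 * above. None of these names exist in bees; issue_dedupe() stands in
 * for the range-dedupe ioctl, which (with the 62d54f3 fix) fails with
 * EAGAIN while a send is in progress on the subvol.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct crawler {
        bool paused;    /* cleared again when crawlers restart (10-transid poll) */
};

/* Stub: pretend a send is always in progress so the pause path runs. */
static int issue_dedupe(void)
{
        errno = EAGAIN;
        return -1;
}

/* Returns true if the extent was deduped; pauses the crawler otherwise. */
static bool dedupe_or_pause(struct crawler *c)
{
        if (issue_dedupe() == 0)
                return true;
        if (errno == EAGAIN)
                c->paused = true;   /* stop crawling this subvol; retry after
                                       the next crawl restart */
        return false;
}

int main(void)
{
        struct crawler c = { .paused = false };
        if (!dedupe_or_pause(&c) && c.paused)
                printf("send in progress; crawler paused until next restart\n");
        return 0;
}
```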

@Zygo (Owner) commented Jun 8, 2019

There is still some documentation work left before this can be closed:

Kernel bugs doc page:

  • Recommend kernel 5.2 to avoid btrfs send issue (after #113 is closed)
  • Use --workaround-btrfs-send option with bees on kernels before 5.2

Also we might want to keep the --workaround-btrfs-send behavior but give it a more permanent name like --ignore-ro-subvols or just --ignore-ro. It turns out to be quite useful to speed things up if you have many readonly subvols lying around (e.g. daily snapshots) with no pressing need to dedupe them (i.e. because they were already mostly deduped when they were created).

@automorphism88 commented Jun 8, 2019

The fix in torvalds/linux@62d54f3 prevents the send from failing (or worse) if dedupe is running at the same time.

What about incremental sends where the parent snapshot has been deduped? Is that still unsafe as described in the documentation for --workaround-btrfs-send regardless of whether bees and send are running simultaneously?

@Zygo (Owner) commented Jun 9, 2019

As far as I know, the only issue occurs when bees (or any dedupe agent) is running at the same time as send, and only when dedupe is used to modify a subvol involved in the send (i.e. either the subvol being sent, or the parent subvol for incremental send). i.e. all of these should work:

  • alternating bees and send but not running both at the same time
  • running send at the same time as bees with --workaround-btrfs-send on any kernel (assuming that kernel does not have other bugs)
  • running send at the same time as bees without --workaround-btrfs-send on kernel 5.2 and later

Note that the in-kernel btrfs send code has had a bug fix every 6 weeks on average for the past 2 years, and there is no reason to expect torvalds/linux@62d54f3 to be the last one (indeed, there are two send bug fixes already that follow it). btrfs send may still have problems that have nothing to do with bees (though bees might trigger these bugs more often). For the next year at least, it would be a good idea to verify each received subvol's data against the sent subvol, and report bugs if you find data differences or experience issues like crashes or lockups. I did this with rsync backups, and found 3 btrfs kernel bugs in 4 years--and 2 of those bugs were 10 years old. btrfs send currently has >5x the rsync bug fix rate, and you should set your correctness expectations accordingly.

If there's a real issue with bees running on an incremental snapshot parent between sends then we should open a new issue for that case, as torvalds/linux@62d54f3 would not address any such case, and this issue is long and complex enough already.

If there's no issue, but we have docs that say there's an issue, then we should fix the docs.

@automorphism88 commented Jun 13, 2019

As far as I know, the only issue occurs when bees (or any dedupe agent) is running at the same time as send, and only when dedupe is used to modify a subvol involved in the send (i.e. either the subvol being sent, or the parent subvol for incremental send). i.e. all of these should work:

* alternating bees and send but not running both at the same time

This is indeed causing the send to fail. Opened a separate issue at #115 as requested.

@Zygo (Owner) commented Jun 13, 2019

OK, so the docs are updated and a separate issue for the "other" btrfs send bug has been opened. I will close this one.

@Zygo closed this Jun 13, 2019
