
Downloader for iroh-bytes #1334

Closed · Frando opened this issue Aug 7, 2023 · 6 comments
Labels: c-iroh, c-iroh-bytes, feat (New feature or request)

@Frando (Member) commented Aug 7, 2023

The iroh-sync PR adds a downloader. In its current state it is unfinished: it cannot do retries and does not have a proper queue.

Features we need from the downloader, from a high-level iroh-sync point of view:

  • Enqueue downloads as `(Hash, PeerId)` pairs
  • Get notified when a download is finished (through a future)

This high-level API already exists in the PR, with public methods `push(hash, peer_id)` and `async finished(hash: Hash)` (a rough sketch of this surface follows the list below). The actor underneath is still missing many features though. We want:

  • Retries with incremental backoff
  • Limits on parallel downloads and connection attempts (e.g. max 5 concurrent dials)
  • Being able to queue a hash with a list of peers in an order with different timeouts (e.g. try from peer 1 immediately, from peer 2 in 100ms, and from peer 3 in 1 second) - aborting/skipping of course if a download succeeded
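A minimal sketch of what that high-level surface could look like, assuming hypothetical types and channel plumbing (this is not the actual iroh-bytes API):

```rust
use std::time::Duration;

// Placeholder types; the real ones come from iroh-bytes / iroh-net.
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct Hash(pub [u8; 32]);
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct PeerId(pub [u8; 32]);

/// A peer to try, after an optional delay
/// (e.g. peer 1 immediately, peer 2 in 100ms, peer 3 in 1s).
pub struct PeerHint {
    pub peer: PeerId,
    pub delay: Duration,
}

pub struct Downloader {
    // channel to the actor task, elided
}

impl Downloader {
    /// Enqueue a download of `hash` from `peers`, in order, respecting their delays.
    /// The actor behind this applies dedup, retries with backoff and dial limits.
    pub fn push(&self, hash: Hash, peers: Vec<PeerHint>) {
        todo!("send a message to the actor")
    }

    /// Resolves once `hash` is fully downloaded (or failed permanently).
    pub async fn finished(&self, hash: Hash) -> Result<(), String> {
        todo!("await a notification from the actor")
    }
}
```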

--

Copying some notes from the now-merged downloader improvements PR:

The Downloader currently acts immediately on new requests and doesn't do any retries. It also does not have a proper queue internally.

We want to be able to queue downloads with retries and different timeouts. My idea was to add a time-based queue, maybe using the `TimerMap` from `iroh_gossip::util`, and to change the API to expose `download(hashes: Vec<Hash>, peer: Peer, delay: Option<Instant>)`. The actor would maintain a timer queue of pending requests and their delays, poll it for the next operation, check whether that operation is still needed, and if so enqueue the future that actually does the download. The actor should also keep track of running downloads and have a (configurable) limit on how many peers are being dialed or connected to in parallel. Code-wise, I thought to add at least a `DownloadState` for each pending download, and then, similar to the current code, keep indexes into those by `PeerId` and `Hash` (and `Instant`, if using something like the `TimerMap`).
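As an illustration only, a bare-bones time-based queue in the spirit of the `TimerMap` idea, using `tokio` and a `BTreeMap` keyed by deadline (the pending-download record is a stand-in):

```rust
use std::collections::BTreeMap;
use tokio::time::{sleep_until, Instant};

// Stand-in for the per-download state (hash, peer, attempt counter, …).
struct Pending;

/// Entries keyed by the instant at which they become due; the actor polls `next()`
/// in its select loop and only then decides whether the download is still needed.
#[derive(Default)]
struct TimerQueue {
    due: BTreeMap<Instant, Vec<Pending>>,
}

impl TimerQueue {
    fn insert(&mut self, at: Instant, item: Pending) {
        self.due.entry(at).or_default().push(item);
    }

    /// Sleep until the earliest deadline, then pop everything scheduled for it.
    async fn next(&mut self) -> Option<Vec<Pending>> {
        let (&at, _) = self.due.iter().next()?;
        sleep_until(at).await;
        self.due.remove(&at)
    }
}
```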

With this, we can then:

  • On a new entry message via gossip, enqueue a download from:
    • the peer we received it from, with a very small delay
    • our other currently active gossip neighbors, with a larger delay
    • the author, with a larger delay plus a random offset (*note: for this we need to add the peer info (address & region), with a signature, to the gossip message in addition to the SignedEntry)
  • Queue more retries if all fail, up to some limit

Later on, we'd also want to add an explicit command to retry all missing hashes in a document from a peer, and optionally a mode that would do that automatically if we get new neighbors. This will initially be quite inefficient for large docs, because we have no `want/have` exchange in iroh-bytes. So maybe defer this a bit.

Edit
Another thought on the API: along with peer and delay, we'd likely also want to pass an `enum RetryStrategy { None, Auto }` or similar, with `Auto` being some incremental backoff with a maximum number of retries.
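For illustration, one possible shape of that enum plus an incremental (here: exponential, capped) backoff; the names and constants are made up:

```rust
use std::time::Duration;

enum RetryStrategy {
    /// Give up after the first failure.
    None,
    /// Retry with incremental backoff, up to `max_retries` attempts.
    Auto { max_retries: u32 },
}

/// Backoff before the n-th retry: 100ms, 200ms, 400ms, … capped at 10s.
fn backoff(attempt: u32) -> Duration {
    let ms = 100u64.saturating_mul(1 << attempt.min(16));
    Duration::from_millis(ms.min(10_000))
}
```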

Edit 2
Currently the `quinn::Connection`s are dropped, and thus cancelled, if there is no next download pending immediately after a download finished. We'll likely want to keep them around for a configurable (with a default) amount of time, and drop them only if no operation happened in that window. Redialing is quite cheap but not free (one roundtrip in the current setup).
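A sketch of that idea, assuming a hypothetical cache keyed by peer where dropping the entry is what closes the connection (the `Conn` type parameter stands in for `quinn::Connection`):

```rust
use std::collections::HashMap;
use tokio::time::{Duration, Instant};

type PeerId = [u8; 32]; // stand-in

struct IdleConnections<Conn> {
    keep_alive: Duration,
    idle: HashMap<PeerId, (Conn, Instant)>, // connection + when it became idle
}

impl<Conn> IdleConnections<Conn> {
    /// Park a connection after a download finished instead of dropping it right away.
    fn park(&mut self, peer: PeerId, conn: Conn) {
        self.idle.insert(peer, (conn, Instant::now()));
    }

    /// Reuse a parked connection if there is one, saving the redial roundtrip.
    fn take(&mut self, peer: &PeerId) -> Option<Conn> {
        self.idle.remove(peer).map(|(conn, _)| conn)
    }

    /// Drop (and thereby close) connections that have been idle longer than `keep_alive`.
    fn expire(&mut self) {
        let keep = self.keep_alive;
        self.idle.retain(|_, (_, since)| since.elapsed() < keep);
    }
}
```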

@rklaehn (Contributor) commented Aug 8, 2023

A few thoughts on this topic:

I think the downloader API should be separated into declarations of intent and hints.

A declaration of intent would be e.g.

  • I want to have <hash>.
  • I want to have ranges <ranges> of <hash>
  • I want to have collection <hash> and its children

For a declaration of intent you get a u64 handle that allows you to control/delete the intent.

Then you have hints. Hints are something like:

  • you might be able to get <hash> from <peer>, says <source>, or some more complex hint

It is then up to the downloader to make use of the hints, ask some discovery mechanism etc.

The downloader keeps track of the intents and makes sure that there are no duplicates, e.g. if two places want the same hash we don't want two download tasks.

An extension of the intent/hint scheme would be something like topics. E.g. instead of saying "for these 10000 hashes you should try these 100 peers", have a "download topic" that in the case of doc sync might correspond to a document. Then you can say "I want to download these 10000 hashes for topic <x>", and "these 100 peers are participating in topic <x>".
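To make this concrete, a rough sketch of the intent/hint split; all names are illustrative, and `BlobHash` stands in for the iroh-bytes `Hash` type:

```rust
type BlobHash = [u8; 32];
type PeerId = [u8; 32];
type TopicId = [u8; 32];

/// Handle returned for a declaration of intent; lets the caller cancel/delete it later.
#[derive(Clone, Copy, PartialEq, Eq)]
struct IntentId(u64);

enum Intent {
    /// I want to have <hash>.
    Blob(BlobHash),
    /// I want to have the given chunk ranges of <hash>.
    Ranges(BlobHash, Vec<std::ops::Range<u64>>),
    /// I want to have collection <hash> and its children.
    Collection(BlobHash),
}

enum Hint {
    /// You might be able to get <hash> from <peer>, says <source>.
    PeerHasHash { hash: BlobHash, peer: PeerId, source: PeerId },
    /// These peers are participating in topic <x> (e.g. a document).
    TopicPeers { topic: TopicId, peers: Vec<PeerId> },
    /// These hashes belong to topic <x>.
    TopicHashes { topic: TopicId, hashes: Vec<BlobHash> },
}

struct Downloader { /* dedup map from hash to running task, elided */ }

impl Downloader {
    /// Register an intent; two intents for the same hash share one download task.
    fn add_intent(&mut self, _intent: Intent) -> IntentId {
        IntentId(0) // stub
    }
    fn cancel_intent(&mut self, _id: IntentId) {}
    /// Hints only inform candidate selection; the downloader decides how to use them.
    fn add_hint(&mut self, _hint: Hint) {}
}
```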

A big question: should the downloader support collections? I would say yes, otherwise the downloader will be useless for many non document sync related iroh tasks.

@rklaehn added the c-iroh-bytes, c-iroh, and feat (New feature or request) labels on Aug 8, 2023
@divagant-martian (Contributor) commented

It seems clear to me how to implement the parts that @Frando mentions; @rklaehn's, not so much.

Could you guys help me here? (sorry, I'm still a noob in these parts of the code):

  • Given a hash that is a collection, how do I retrieve the hashes that belong to the collection? Last time I checked, decoding the collection gives a list (set?) of hashes. Please correct me if I'm wrong or if this has changed.
  • Given a hash, how to map a range to a list of hashes?

Thank you both!

@Frando (Member, Author) commented Aug 9, 2023

> Given a hash that is a collection, how do I retrieve the hashes that belong to the collection? Last time I checked, decoding the collection gives a list (set?) of hashes. Please correct me if I'm wrong or if this has changed.

You need the CollectionParser that is part of an iroh node. That likely means that the Downloader will have to be generic over `C: CollectionParser`, and the collection parser will have to be passed into the downloader. You can then call `collection_parser.parse()` to obtain a list of hashes from a collection. The CollectionParser can be set by the application to support other kinds of collections than the default iroh collections.
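A rough sketch of that shape, going only by the description above (this is not the actual `CollectionParser` trait signature from iroh):

```rust
type Hash = [u8; 32]; // stand-in for the iroh-bytes Hash type

// Hypothetical trait: turn the raw bytes of a collection blob into its child hashes.
trait CollectionParser {
    fn parse(&self, collection_bytes: &[u8]) -> Result<Vec<Hash>, String>;
}

// The downloader is generic over the parser so applications can plug in their own
// collection formats instead of the default iroh collections.
struct Downloader<C: CollectionParser> {
    parser: C,
}

impl<C: CollectionParser> Downloader<C> {
    /// After fetching the collection blob itself, enqueue all of its children.
    fn enqueue_children(&mut self, collection_bytes: &[u8]) -> Result<(), String> {
        for child in self.parser.parse(collection_bytes)? {
            // self.enqueue(child, /* peers */) — elided
            let _ = child;
        }
        Ok(())
    }
}
```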

> Given a hash, how to map a range to a list of hashes?

Not sure I got the question right, and @rklaehn knows much more than me about hashes and ranges. I think a range always refers to part(s) of a single blob identified by a single hash. So you can say "gimme these ranges of this blob identified by hash". I do not know how this relates to collections and if we have a concept of "ranges of a collection".

@rklaehn (Contributor) commented Aug 9, 2023

> It seems clear to me how to implement the parts that @Frando mentions; @rklaehn's, not so much.
>
> Could you guys help me here? (sorry, I'm still a noob in these parts of the code):
>
>   • Given a hash that is a collection, how do I retrieve the hashes that belong to the collection? Last time I checked, decoding the collection gives a list (set?) of hashes. Please correct me if I'm wrong or if this has changed.

What @Frando said

>   • Given a hash, how to map a range to a list of hashes?

The protocol for individual blobs goes like this: you ask for a hash and a set of ranges of chunks (1 chunk = 1024 bytes). Then you get back data only for what you have asked for. The default is to just ask for a hash and all of its chunks (`RangeSet2::all()`), and then you get all the data for the blob.

For collections it is more complex. You can specify ranges for the collection itself and for each of its children. E.g. you have downloaded a collection and half of its children and were interrupted. You then figure out what is missing and ask for exactly that.
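Purely as an arithmetic illustration (not the iroh-bytes request API): mapping a byte range onto the chunk range you would ask for, with 1 chunk = 1024 bytes as described above:

```rust
const CHUNK_SIZE: u64 = 1024;

/// Chunk indices (start inclusive, end exclusive) covering the byte range `start..end`.
fn byte_range_to_chunks(start: u64, end: u64) -> (u64, u64) {
    let first = start / CHUNK_SIZE;
    let last = (end + CHUNK_SIZE - 1) / CHUNK_SIZE; // ceiling division
    (first, last)
}

fn main() {
    // Bytes 1000..5000 touch chunks 0..5: chunks 0 and 4 partially, chunks 1-3 fully.
    assert_eq!(byte_range_to_chunks(1000, 5000), (0, 5));
}
```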


@rklaehn (Contributor) commented Aug 10, 2023

Here is an issue related to the downloader. Probably not initially, but eventually we want to support this: #839

@b5 b5 added this to the v0.6.0 - Sync milestone Aug 16, 2023
github-merge-queue bot pushed a commit that referenced this issue Aug 24, 2023
## Description

This PR adds `iroh-sync`, a document synchronization protocol, to iroh,
and integrates it with `iroh-net`, `iroh-gossip` and `iroh-bytes`.

* At the core is the `iroh-sync` crate, with a set reconciliation
algorithm implemented by @dignifiedquire. See [the old iroh-sync
repo](https://github.com/n0-computer/iroh-sync/) for the prehistory and
#1216 for the initial PR (fully included in this PR, and by now
outdated)
* Iroh sync is integrated in the iroh node, with iroh-gossip, in the RPC
interface, and the CLI.
* `LiveSync` is the handle to an actor that integrates sync with
[gossip](#1149) to broadcast and receive document updates from peers.
For each open document a gossip swarm is joined with a `TopicId` derived
from the doc namespace.
* mod `download` contains the new downloader. It will be improved in
#1344 .
* mod `client` is the new high-level RPC client. It currently only has
methods for dealing with docs and sync; other things should be added
once this is merged. CLI commands for sync are in `commands/sync.rs`.
Will be much better with #1356 .
* `examples/sync.rs` has a REPL to modify and sync docs. It does a full
setup without using the iroh console. Also includes code to sync
directories, and a hammer command for load testing.
* The PR also introduces `iroh::client::Iroh`, a wrapper around the RPC
client, and `iroh::client::Doc`, a wrapper around the RPC client for a
single document.

## Notes & open questions

#### Should likely happen before merge:

* [x] Make `iroh_sync::Store::list_authors` and `list_replicas` in
`iroh-sync` return iterators *(fixed in #1366)*
* [ ] Add `iroh_sync::Store::close_replica`
* [x] `ContentStatus` in `on_insert` callback is reported as `Ready` if
the content is still `baomap::PartialEntry` (in-process download) *fixed
in a8e8093*


#### Can happen after merge, but before `0.6` release

* [ ] Implement `AuthorImport` and `AuthorShare` RPC & CLI commands
* [ ] sync store `list_namespaces` and `list_authors` collect
internally; return an iterator instead
* [ ] Fix cross-compiles to arm/android. See
cross-rs/cross#1311
* [ ] Ensure that fingerprint calculation is efficient and/or cached for
large documents. Currently calculating the initial fingerprint iterates
once over all entries in a document.
* [ ] Make content downloads more reliable
* [ ] Add some way to download content from peers independent of the
first insertion event for a remote entry. The downloader with retries is
tracked in #1334 and #1344, but independent of that, we currently only
ever try to queue a download when the `on_insert` callback triggers,
which is only once. There should be a way, even if manual for now, to
try to download missing content in a replica from peers.
* [ ] during `iroh-sync` sync, include info on whether content is available
for each entry
* [ ] Add basic peer management and persistence. Currently live sync
will not do anything after a node restart.
* [ ] Persist the address book of peers for a document, to reconnect when
restarting the node
* [ ] Implement `PeerAdd` and `PeerList` RPC & CLI commands. The latter
needs changes in `iroh-net` to expose information about currently-connected
peers and their peer info.
* [ ] Make read-only replicas possible
* [ ] Improve reliability of replica sync:
  * Sync is triggered on each `NeighborUp` event from gossip; check that
we don't sync too much.
  * Maybe include peer info in gossip messages, to queue syncs with those
peers (but not all at once).
  * Track and exchange the timestamp of the last full sync per peer, to
know if you missed gossiped messages and react accordingly.
  * Add more tests with peers coming and leaving.

#### Open questions

* [ ] `iroh_sync::EntrySignature`: should the signatures include a
namespace prefix?
* [ ] do we want the 1:1 mapping of `NamespaceId` and gossip `TopicId`,
or would the topic id as a hash be better?

#### Other TODOs collected from the code

* [ ] Port `hammer` and `fs` commands from REPL example to iroh cli
* [ ] docs/naming: settle terminology about keypairs,
private/secret/signing keys, public keys/identifiers and make docs and
symbols consistent
* [ ] Make `bytes_get` streaming in the RPC interface
* [ ] Allow to unset the subscription on a replica
* [ ] `iroh-sync` shouldn't depend on `iroh-bytes` only for `Hash` type
-> #1354
* [ ] Move `sync::live::PeerSource` to iroh-net or even better ->
#1354
* [ ] `StoreInstance::put` propagate error and verify timestamp is
reasonable.
* [ ] `StoreInstance::get_range` implement inverted range
* [ ] `iroh_sync`: Remove some items only used in tests (marked with
#[cfg(test)])
* [ ] `iroh_sync` fs store: verify get method fetches all keys with this
namespace
* [ ] `ranger::SimpleStore::get_range`: optimize
* [ ] `ranger::Peer` avoid allocs?
* [ ] `fs::StoreInstance::get_fingerprint` optimize
* [ ] `SyncEngine::doc_subscribe` remove unwrap, handle error


## Change checklist

- [x] Self-review.
- [x] Documentation updates if relevant.
- [ ] Tests if relevant.

---------

Co-authored-by: dignifiedquire <me@dignifiedquire.com>
Co-authored-by: Asmir Avdicevic <asmir.avdicevic64@gmail.com>
Co-authored-by: Kasey <klhuizinga@gmail.com>
@b5 b5 modified the milestones: v0.6.0-alpha: Sync!, v0.6.0-alpha2 Aug 28, 2023
@b5 b5 modified the milestones: v0.6.0-alpha2, v0.6.0 Sep 5, 2023
github-merge-queue bot pushed a commit that referenced this issue Sep 13, 2023
## Description

Adds the `Downloader` as specified in #1334, plus input from some back-channel conversations.
Features include (a rough sketch of the concurrency limits follows this list):
- Support collections
- Add delays to downloads
- Add download retries with an incremental backoff 
- Keeping peers for a bit longer than necessary in hopes they will be
useful again
- Having the concept of intents and deduplicating downloads efforts
- Cancelling download intents
- Limiting how many concurrent requests are done in total
- Limiting how many concurrent requests are done per peer
- Limiting the number of open connections in total
- Basic error management in the form of deciding whether a peer should
be dropped, the request should be dropped, or the request should be
retried
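To illustrate the kind of knobs the list above implies, a hedged sketch of a limits config; the field names and defaults here are made up, not necessarily those of the actual `Downloader`:

```rust
#[derive(Debug, Clone)]
struct ConcurrencyLimits {
    /// Maximum number of requests running concurrently, across all peers.
    max_concurrent_requests: usize,
    /// Maximum number of requests running concurrently against a single peer.
    max_concurrent_requests_per_peer: usize,
    /// Maximum number of connections held open in total.
    max_open_connections: usize,
}

impl Default for ConcurrencyLimits {
    fn default() -> Self {
        Self {
            max_concurrent_requests: 50,
            max_concurrent_requests_per_peer: 4,
            max_open_connections: 25,
        }
    }
}
```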

## Notes & open questions
### TODOs
- A remaining TODO in the code is whether something special should be
done when dropping quic connections
- Should downloads have a timeout?
- ~<sup>I know I've said this a hundred times with a hundred different
things but would love to test this as well under stress scenarios and a
large number of peers. don't hate me</sup>~
In reality, after abstracting away all the IO, most scenarios can be
simulated easily. What would remain for a _much_ later time, when the
need and opportunity for real-world test scenarios arises, is tuning
the concurrency parameters.

### Future work
#### Downloading Ranges
There was a requirement to download a Hash, a range of a Hash, a
collection, and (not mentioned but potentially implied) ranges of
collections. There is no support for ranges right now because of the
heavy duplication of the `get` code that was needed to take advantage
of the proper errors added in #1362. In principle, adding ranges should
be really easy: it's an extension of `DownloadKind` and would simply
mean calculating the missing ranges based not on the difference between
what we have and the whole range, but on the given range. I would
prefer to find a way to deduplicate the `get` code before doing this
extension.
Also, as far as I can tell, there is no need for this yet.
#### Prioritizing candidates per role: `Provider` and `Candidate`
A nice extension, as discussed at some point, is to differentiate
candidates we know have the data from those that _might_ have the data.
This has the added benefit that, when a peer is available to perform
another download under the concurrency limits, a hash we know they have
can be downloaded right away instead of waiting for the delay. Right now
this doesn't make sense because we would likely attempt a download
before the peer has retrieved the data themselves. To implement it, we
would first need to add notifications into gossip announcing fully
downloaded hashes as available.
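A small sketch of that distinction and how it could feed into the delay logic (names and delays are illustrative):

```rust
use std::time::Duration;

enum PeerRole {
    /// We know this peer has the data (e.g. it announced having it).
    Provider,
    /// This peer might have the data.
    Candidate,
}

/// Providers can be tried immediately; candidates wait for the usual delay.
fn dial_delay(role: &PeerRole) -> Duration {
    match role {
        PeerRole::Provider => Duration::ZERO,
        PeerRole::Candidate => Duration::from_millis(500),
    }
}
```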
#### Leveraging the info from gossip
When declaring that a hash `X` should be downloaded, it's also an option
to query gossip for peers that are subscribed to the topic to which `X`
belongs, and use them as candidates. This could be done by connecting the
`ProviderMap` to `gossip`. For now I don't see the need to do this.

### Open questions about Future work
- In line with the work described above, the registry only allows
querying for peer candidates for a hash, since that's as good as it gets
in terms of what we know about a remote right now. It's not clear to me
whether we would want to change this to have better availability
information, with #1413 in progress.
- More future work: downloading a large data set/blob from multiple
peers would most likely require a three-step process:
  1. understanding the data layout/size.
  2. splitting the download.
  3. actually performing the separate downloads.
Generally curious how this will end. My question here is whether we
should do this for every download, or just on data that we expect to be
big. Is there any way to obtain such a hint without relying on a query
every single time?
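A toy example of step 2 only, under the assumption that step 1 told us the blob's total chunk count (everything here is illustrative):

```rust
/// Split `total_chunks` of one large blob into contiguous chunk ranges,
/// one per peer, so the separate downloads can run in parallel.
fn split_chunks(total_chunks: u64, peers: u64) -> Vec<std::ops::Range<u64>> {
    let per_peer = (total_chunks + peers - 1) / peers; // ceiling division
    (0..peers)
        .map(|i| (i * per_peer)..((i + 1) * per_peer).min(total_chunks))
        .filter(|r| !r.is_empty())
        .collect()
}

fn main() {
    // 10 chunks across 3 peers -> 0..4, 4..8, 8..10
    assert_eq!(split_chunks(10, 3), vec![0..4, 4..8, 8..10]);
}
```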


## Change checklist

- [x] Self-review.
- [x] Documentation updates if relevant.
- [x] Tests if relevant.
@b5 (Member) commented Sep 14, 2023

@Frando & @divagant-martian both agree that #1420 closes this.

@b5 b5 closed this as completed Sep 14, 2023