37 changes: 10 additions & 27 deletions src/app/docs/concepts/discovery/page.mdx
@@ -16,7 +16,7 @@ There are four different implementations of the discovery service in iroh, all o
| --- | --- | --- |
| 1 | [DNS](#dns-discovery) | uses a custom Domain Name System server |
| 2 | [Local](#local-discovery) | uses an mDNS-like system to find nodes on the local network |
| 3 | [Pkarr](#pkarr-discovery) | uses Pkarr Servers over HTTP |
| 4 | [DHT](#dht-discovery) | uses the BitTorrent Mainline DHT |

By default, iroh uses the DNS discovery system to resolve NodeIds to addresses. It can be configured to use any of the other discovery systems.
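
To make that concrete, here is a minimal sketch (an illustration only, assuming iroh 0.32 with the `discovery-local-network` and `discovery-pkarr-dht` cargo features enabled) that stacks several of these mechanisms on a single endpoint, using the builder methods introduced in the hunks below:

```rust
use iroh::Endpoint;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // n0's DNS discovery is the default; the other mechanisms are
    // opt-in and gated behind the cargo features named above.
    let endpoint = Endpoint::builder()
        .discovery_n0()            // DNS discovery via n0's servers
        .discovery_local_network() // mDNS-like local-network discovery
        .discovery_dht()           // BitTorrent Mainline DHT
        .bind()
        .await?;
    println!("node id: {:?}", endpoint.node_id());
    Ok(())
}
```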
@@ -35,26 +35,18 @@ Local Discovery is _not_ enabled by default, and must be enabled by the user. Yo

```toml
[dependencies]
iroh = { version = "0.1", features = ["discovery-local-network"] }
# Make sure to use the most recent version here instead of nn. (at the time of writing: 0.32)
iroh = { version = "0.nn", features = ["discovery-local-network"] }
```

Then configure your endpoint to use local discovery concurrently with DNS discovery:

```rust
use iroh::{
discovery::{dns::DnsDiscovery, LocalSwarmDiscovery, pkarr::PkarrPublisher, ConcurrentDiscovery},
Endpoint, SecretKey,
};

let secret_key = SecretKey::generate(rand::rngs::OsRng);
let discovery = ConcurrentDiscovery::from_services(vec![
Box::new(DnsDiscovery::n0_dns()),
Box::new(LocalSwarmDiscovery::new(secret_key.public())?),
]);
use iroh::Endpoint;

let ep = Endpoint::builder()
.secret_key(secret_key)
.discovery(Box::new(discovery))
.discovery_n0()
.discovery_local_network()
.bind()
.await?;
```
@@ -69,27 +61,18 @@ DHT Discovery is _not_ enabled by default, and must be enabled by the user. You'

```toml
[dependencies]
# Make sure to use the most recent version here instead of nn.
# Make sure to use the most recent version here instead of nn. (at the time of writing: 0.32)
iroh = { version = "0.nn", features = ["discovery-pkarr-dht"] }
```

Then configure your endpoint to use DHT discovery concurrently with DNS discovery:

```rust
use iroh::{
discovery::{dns::DnsDiscovery, pkarr::dht::DhtDiscovery, ConcurrentDiscovery},
Endpoint, SecretKey,
};

let secret_key = SecretKey::generate(rand::rngs::OsRng);
let discovery = ConcurrentDiscovery::from_services(vec![
Box::new(DnsDiscovery::n0_dns()),
Box::new(DhtDiscovery::new(secret_key.public())?),
]);
use iroh::Endpoint;

let ep = Endpoint::builder()
.secret_key(secret_key)
.discovery(Box::new(discovery))
.discovery_n0()
.discovery_dht()
.bind()
.await?;
```
11 changes: 7 additions & 4 deletions src/app/docs/concepts/router/page.mdx
@@ -21,14 +21,12 @@ async fn main() -> Result<()> {
let endpoint = Endpoint::builder().discovery_n0().bind().await?;

// configure the blobs protocol to run in-memory
let lp = LocalPool::default();
let blobs = Blobs::memory()
.build(lp.handle(), &endpoint);
let blobs = Blobs::memory().build(&endpoint);

// Build our router and add the blobs protocol,
// identified by its ALPN. Spawn the router to start listening.
let router = Router::builder(endpoint)
.accept(iroh_blobs::ALPN, blobs.clone())
.accept(iroh_blobs::ALPN, blobs)
.spawn()
.await?;

@@ -39,6 +37,11 @@ async fn main() -> Result<()> {
// Wait for exit
tokio::signal::ctrl_c().await?;

// Gracefully close the endpoint & protocols.
// This makes sure that remote nodes are notified about possibly still open connections
// and any data is written to disk fully (or any other shutdown procedure for protocols).
router.shutdown().await?;

Ok(())
}
```
10 changes: 0 additions & 10 deletions src/app/docs/concepts/tickets/page.mdx
@@ -40,16 +40,6 @@ It's worth pointing out this setup is considerably better than full peer-2-peer sys

When you create a document ticket, you're creating a secret that allows someone to read or write to a document. This means that you should be careful about sharing document tickets with people you don't trust. What's more, someone who has a document ticket can use it to create new tickets for the same document. This means that if you share a document ticket with someone, they can use it to create new tickets for the same document, and share those tickets with others.

## Creating Tickets

| Type | Command |
| --- | --- |
| `node` | [`dumbpipe listen`](https://dumbpipe.dev) |
| `blob` | `iroh blob share` |
| `doc` | [`iroh doc share`](/docs/api/doc-share) |

By default, tickets only include the NodeID. If you still want to add relay and direct addresses to the ticket, you can pass `--addr-options RelayAndAddresses` to the ticket generation commands.

## Tickets in Apps
Using tickets in your app comes down to what you're trying to accomplish. For short-lived sessions where both devices are online at the same time, tickets are an incredibly powerful way to bootstrap connections, and require no additional servers for coordination.
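
As a sketch of that bootstrapping (assuming the `BlobTicket` type from `iroh-blobs`, as used in the quickstart changes further down), parsing a ticket someone sent you recovers everything needed to dial and fetch:

```rust
use iroh_blobs::ticket::BlobTicket;

fn inspect_ticket(ticket_str: &str) -> anyhow::Result<()> {
    // A ticket is a serialized bundle of dialing info plus content info.
    let ticket: BlobTicket = ticket_str.parse()?;
    println!("node id: {}", ticket.node_addr().node_id);
    println!("content hash: {}", ticket.hash());
    Ok(())
}
```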

79 changes: 45 additions & 34 deletions src/app/docs/quickstart/page.mdx
@@ -75,9 +75,9 @@ In that case, you'll need to make sure you're not only dialing by `NodeId`, but
</Note>


## Using an existing protocol: [iroh-blobs](/proto/iroh-blobs)
## Using an existing protocol: iroh-blobs

Instead of writing our own protocol from scratch, let's use iroh-blobs, which already does what we want:
Instead of writing our own protocol from scratch, let's use [iroh-blobs](/proto/iroh-blobs), which already does what we want:
It loads files from your file system and provides a protocol for seekable, resumable downloads of these files.

```rust
@@ -88,8 +88,7 @@ async fn main() -> anyhow::Result<()> {
let endpoint = Endpoint::builder().discovery_n0().bind().await?;

// We initialize the Blobs protocol in-memory
let local_pool = LocalPool::default();
let blobs = Blobs::memory().build(&local_pool, &endpoint);
let blobs = Blobs::memory().build(&endpoint);

// ...

@@ -118,8 +117,7 @@ async fn main() -> anyhow::Result<()> {
let endpoint = Endpoint::builder().discovery_n0().bind().await?;

// We initialize the Blobs protocol in-memory
let local_pool = LocalPool::default();
let blobs = Blobs::memory().build(&local_pool, &endpoint);
let blobs = Blobs::memory().build(&endpoint);

// Now we build a router that accepts blobs connections & routes them
// to the blobs protocol.
@@ -133,19 +131,18 @@ async fn main() -> anyhow::Result<()> {
// Gracefully shut down the router
println!("Shutting down.");
router.shutdown().await?;
local_pool.shutdown().await;

Ok(())
}
```

I've also taken the liberty to make sure that we're gracefully shutting down the `Router` and all its protocols with it, as well as the `LocalPool` that the iroh-blobs library needs to operate.
I've also taken the liberty to make sure that we're gracefully shutting down the `Router` and all its protocols with it; in this case that's only iroh-blobs.


## Doing something

So far, this code works, but doesn't actually do anything besides spinning up a node and immediately shutting down.
Even if we put in a `tokio::time::timeout` or `tokio::signal::ctrl_c().await` in there, it *would* actually respond to network requests for the blobs protocol, but even that is practically useless as we've stored no blobs to respond with.
If we put in a `tokio::time::timeout` or `tokio::signal::ctrl_c().await` in there, although it *would* actually respond to network requests for the blobs protocol, these responses are practically useless as we've stored no blobs to respond with.

Here's our plan for turning this into a CLI that actually does what we set out to build:
1. We'll grab a [`Blobs::client`](https://docs.rs/iroh-blobs/latest/iroh_blobs/net_protocol/struct.Blobs.html#method.client) to interact with the iroh-blobs node we're running locally.
@@ -162,18 +159,23 @@ Here's our plan for turning this into a CLI that actually does what we set out t
Phew okay! Here's how we'll grab an iroh-blobs client and look at the CLI arguments:

```rust
let blobs = blobs.client();
// We use a blobs client to interact with the blobs protocol we're running locally:
let blobs_client = blobs.client();

let args = std::env::args().collect::<Vec<_>>();
match &args.iter().map(String::as_str).collect::<Vec<_>>()[..] {
[_cmd, "send", path] => {
// Grab all passed-in arguments; the first one is the binary itself, so we skip it.
let args: Vec<String> = std::env::args().skip(1).collect();
// Convert to &str, so we can pattern-match easily:
let arg_refs: Vec<&str> = args.iter().map(String::as_str).collect();

match arg_refs.as_slice() {
["send", filename] => {
todo!();
}
[_cmd, "receive", ticket, path] => {
["receive", ticket, filename] => {
todo!();
}
_ => {
println!("Couldn't parse command line arguments.");
println!("Couldn't parse command line arguments: {args:?}");
println!("Usage:");
println!(" # to send:");
println!(" cargo run --example transfer -- send [FILE]");
@@ -189,23 +191,26 @@ Now all we need to do is fill in the `todo!()`s one-by-one:

### Getting ready to send

If we want to make a file available over the network with iroh-blobs, we first need to index this file.
If we want to make a file available over the network with iroh-blobs, we first need to hash this file.

<Note>
What does this step do?

It hashes the file using BLAKE3 and stores a so-called "outboard" for that file.
This outboard file contains information about hashes for parts of this file.
It hashes the file using [BLAKE3](https://en.wikipedia.org/wiki/BLAKE_(hash_function)) and stores a so-called ["outboard"](https://github.com/oconnor663/bao?tab=readme-ov-file#outboard-mode) for that file.
This outboard file contains information about hashes of parts of this file.
All of this enables some extra features with iroh-blobs like automatically verifying the integrity of the file *during* streaming, verified range downloads and download resumption.
</Note>

```rust
let abs_path = PathBuf::from_str(path)?.canonicalize()?;
let filename: PathBuf = filename.parse()?;
let abs_path = std::path::absolute(&filename)?;

println!("Indexing file.");
println!("Hashing file.");

let blob = blobs
.add_from_path(abs_path, true, SetTagOption::Auto, WrapOption::NoWrap)
// keep the file in place and link it, instead of copying it into the in-memory blobs database
let in_place = true;
let blob = blobs_client
.add_from_path(abs_path, in_place, SetTagOption::Auto, WrapOption::NoWrap)
.await?
.finish()
.await?;
@@ -221,7 +226,7 @@ This ticket contains the `NodeId` of our `Endpoint` as well as the file's BLAKE3
let node_id = router.endpoint().node_id();
let ticket = BlobTicket::new(node_id.into(), blob.hash, blob.format)?;

println!("File analyzed. Fetch this file by running:");
println!("File hashed. Fetch this file by running:");
println!("cargo run --example transfer -- receive {ticket} {path}");

tokio::signal::ctrl_c().await?;
@@ -236,12 +241,13 @@ On the connection side, we got the `ticket` and the `path` from the CLI argument
With them parsed, we can call `blobs.download` with the information contained in the ticket and wait for the download to finish:

```rust
let path_buf = PathBuf::from_str(path)?;
let ticket = BlobTicket::from_str(ticket)?;
let filename: PathBuf = filename.parse()?;
let abs_path = std::path::absolute(filename)?;
let ticket: BlobTicket = ticket.parse()?;

println!("Starting download.");

blobs
blobs_client
.download(ticket.hash(), ticket.node_addr().clone())
.await?
.finish()
@@ -250,22 +256,29 @@ blobs
println!("Finished download.");
```

As a final step, we'll copy the file we just downloaded to the desired file path:
As a final step, we'll export the file we just downloaded (now in our blobs database) to the desired file path:

```rust
println!("Copying to destination.");

let mut file = tokio::fs::File::create(path_buf).await?;
let mut reader = blobs.read_at(ticket.hash(), 0, ReadAtLen::All).await?;
tokio::io::copy(&mut reader, &mut file).await?;
blobs_client
.export(
ticket.hash(),
abs_path,
ExportFormat::Blob,
ExportMode::Copy,
)
.await?
.finish()
.await?;

println!("Finished copying.");
```

<Note>
This first download the file completely into memory, then copy that memory into a file in two steps.
This first downloads the file completely into memory, then copies it from memory to file in a second step.

There's ways to make this work without having to store the whole file in memory, but that involves setting up `Blobs::persistent` instead of `Blobs::memory` and using `blobs.export` with `EntryMode::TryReference`.
There are ways to make this work without having to store the whole file in memory, but those involve setting up `Blobs::persistent` instead of `Blobs::memory` and using `blobs.export` with `ExportMode::TryReference`.
We'll leave these changes as an exercise to the reader 😉
</Note>
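
For the curious, here's a rough, untested sketch of that exercise; the exact builder and export calls are assumptions against iroh-blobs 0.32 and may need adjusting:

```rust
// Sketch only: a persistent store keeps blob data on disk, so exports
// can reference the stored file rather than copying it through memory.
let blobs = Blobs::persistent("./blobs-store").await?.build(&endpoint);
let blobs_client = blobs.client();

blobs_client
    .export(
        ticket.hash(),
        abs_path,
        ExportFormat::Blob,
        // Reference data in place where possible, instead of copying:
        ExportMode::TryReference,
    )
    .await?
    .finish()
    .await?;
```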

Expand All @@ -281,5 +294,3 @@ If you're hungry for more, check out
- [other examples](/docs/examples),
- other available [protocols](/proto) or
- a longer guide on [how to write your own protocol](/docs/protocols/writing).

If Rust is not actually your jam, make sure to check out the [language bindings](/docs/sdks)!
8 changes: 4 additions & 4 deletions src/app/docs/tour/1-endpoints/page.mdx
@@ -1,6 +1,6 @@
import { PageLink } from '@/components/PageNavigation';

# 2. Endpoints
# 1. Endpoints

The journey of a connection starts with an endpoint. An endpoint is one of the two termination points for a connection. It’s one of the tin cans that the wire is connected to. Because this is peer-2-peer, endpoints both _initiate_ *and* _accept_ connections. The endpoint handles both.
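
As a minimal sketch of that duality (assumptions: iroh 0.32, `anyhow` and `tokio` as in the rest of this tour), one endpoint can sit waiting for incoming connections while it remains free to `connect` outwards:

```rust
use iroh::Endpoint;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // One endpoint, two roles: it can dial out with `connect`
    // and take incoming connections with `accept`.
    let endpoint = Endpoint::builder().bind().await?;

    // Accept side: wait for one incoming connection attempt.
    if let Some(incoming) = endpoint.accept().await {
        let _connection = incoming.await?;
        println!("accepted a connection");
    }
    Ok(())
}
```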

@@ -9,10 +9,10 @@ Let's first add iroh to our project. From the project root run `cargo add iroh`
```bash
$ cargo add iroh
Updating crates.io index
Adding iroh v0.31.0 to dependencies
Adding iroh v0.32.1 to dependencies
Features:
+ discovery-pkarr-dht
+ metrics
- discovery-pkarr-dht
- discovery-local-network
- examples
- test-utils
@@ -31,7 +31,7 @@ In the end your `Cargo.toml` file's `[dependencies]` section should look somethi
```toml
[dependencies]
anyhow = "1.0.95"
iroh = "0.31.0"
iroh = "0.32.1"
rand = "0.8.5"
tokio = "1.43.0"
```
13 changes: 4 additions & 9 deletions src/app/docs/tour/3-discovery/page.mdx
@@ -1,6 +1,6 @@
import { PageLink } from '@/components/PageNavigation';

# Discovery
# 3. Discovery

Discovery is the glue that connects a [Node Identifier](/docs/concepts/endpoint#node-identifiers) to something we can dial. There are a few different types of discovery services, but for all of them you put a `NodeID` in, and get back either the home relay of that node, or IP addresses to dial.
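
In code, that glue is invisible at the call site. A hedged sketch (the ALPN string here is made up for illustration) of dialing with nothing but a `NodeId`:

```rust
// With a discovery service configured, a bare NodeId is enough:
// the endpoint resolves it to a home relay and/or direct addresses.
let connection = endpoint.connect(node_id, b"example/alpn/0").await?;
```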

@@ -41,27 +41,22 @@ This will change our `Cargo.toml` file's `[dependencies]` section to look like thi
```toml
[dependencies]
anyhow = "1.0.95"
iroh = { version = "0.31.0", features = ["discovery-local-network"] }
iroh = { version = "0.32.1", features = ["discovery-local-network"] }
rand = "0.8.5"
tokio = "1.43.0"
```

And with that we can set up local discovery. It does add some complexity to the endpoint setup:
And with that we can set up local discovery:

```rust
use iroh::discovery::local_swarm_discovery::LocalSwarmDiscovery;
use iroh::{Endpoint, RelayMode, SecretKey};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
let key = SecretKey::generate(rand::rngs::OsRng);
let id = key.public();

let builder = Endpoint::builder()
.secret_key(key)
.relay_mode(RelayMode::Default)
.discovery_n0()
.discovery(Box::new(LocalSwarmDiscovery::new(id)?));
.discovery_local_network();

let endpoint = builder.bind().await?;
println!("node id: {:?}", endpoint.node_id());