diff --git a/src/app/docs/concepts/discovery/page.mdx b/src/app/docs/concepts/discovery/page.mdx
index dbea0640..9a2c2800 100644
--- a/src/app/docs/concepts/discovery/page.mdx
+++ b/src/app/docs/concepts/discovery/page.mdx
@@ -16,7 +16,7 @@ There are four different implementations of the discovery service in iroh, all o
 | --- | --- | --- |
 | 1 | [DNS](#dns-discovery) | uses a custom Domain Name System server |
 | 2 | [Local](#local-discovery) | uses an mDNS-like system to find nodes on the local network |
-| 3 | [Pkarr](#pkarr-discovery) | uses Pkarr Servers over HTTP |
+| 3 | [Pkarr](#pkarr-discovery) | uses Pkarr Servers over HTTP |
 | 4 | [DHT](#dht-discovery) | uses the BitTorrent Mainline DHT |

 By Default, iroh uses the DNS discovery system to resolve NodeIds to addresses. And can be configured to use any of the other discovery systems.
@@ -35,26 +35,18 @@ Local Discovery is _not_ enabled by default, and must be enabled by the user. Yo

 ```toml
 [dependencies]
-iroh = { version = "0.1", features = ["discovery-local-network"] }
+# Make sure to use the most recent version here instead of nn. (at the time of writing: 0.32)
+iroh = { version = "0.nn", features = ["discovery-local-network"] }
 ```

 Then configure your endpoint to use local discovery concurrently with DNS discovery:

 ```rust
-use iroh::{
-    discovery::{dns::DnsDiscovery, LocalSwarmDiscovery, pkarr::PkarrPublisher, ConcurrentDiscovery},
-    Endpoint, SecretKey,
-};
-
-let secret_key = SecretKey::generate(rand::rngs::OsRng);
-let discovery = ConcurrentDiscovery::from_services(vec![
-    Box::new(DnsDiscovery::n0_dns()),
-    Box::new(LocalSwarmDiscovery::new(secret_key.public())?),
-]);
+use iroh::Endpoint;

 let ep = Endpoint::builder()
-    .secret_key(secret_key)
-    .discovery(Box::new(discovery))
+    .discovery_n0()
+    .discovery_local_network()
     .bind()
     .await?;
 ```
@@ -69,27 +61,18 @@ DHT Discovery is _not_ enabled by default, and must be enabled by the user. You'

 ```toml
 [dependencies]
-# Make sure to use the most recent version here instead of nn.
+# Make sure to use the most recent version here instead of nn. (at the time of writing: 0.32)
 iroh = { version = "0.nn", features = ["discovery-pkarr-dht"] }
 ```

 Then configure your endpoint to use DHT discovery concurrently with DNS discovery:

 ```rust
-use iroh::{
-    discovery::{dns::DnsDiscovery, pkarr::dht::DhtDiscovery, ConcurrentDiscovery},
-    Endpoint, SecretKey,
-};
-
-let secret_key = SecretKey::generate(rand::rngs::OsRng);
-let discovery = ConcurrentDiscovery::from_services(vec![
-    Box::new(DnsDiscovery::n0_dns()),
-    Box::new(DhtDiscvoery::new(secret_key.public())?),
-]);
+use iroh::Endpoint;

 let ep = Endpoint::builder()
-    .secret_key(secret_key)
-    .discovery(Box::new(discovery))
+    .discovery_n0()
+    .discovery_dht()
     .bind()
     .await?;
 ```
diff --git a/src/app/docs/concepts/router/page.mdx b/src/app/docs/concepts/router/page.mdx
index fc528799..77606f58 100644
--- a/src/app/docs/concepts/router/page.mdx
+++ b/src/app/docs/concepts/router/page.mdx
@@ -21,14 +21,12 @@ async fn main() -> Result<()> {
     let endpoint = Endpoint::builder().discovery_n0().bind().await?;

     // configure the blobs protocol to run in-memory
-    let lp = LocalPool::default();
-    let blobs = Blobs::memory()
-        .build(lp.handle(), &endpoint);
+    let blobs = Blobs::memory().build(&endpoint);

     // Build our router and add the blobs protocol,
     // identified by its ALPN. Spawn the router to start listening.
     let router = Router::builder(endpoint)
-        .accept(iroh_blobs::ALPN, blobs.clone())
+        .accept(iroh_blobs::ALPN, blobs)
         .spawn()
         .await?;

@@ -39,6 +37,11 @@ async fn main() -> Result<()> {
     // Wait for exit
     tokio::signal::ctrl_c().await?;

+    // Gracefully close the endpoint & protocols.
+    // This makes sure that remote nodes are notified about possibly still open connections
+    // and any data is written to disk fully (or any other shutdown procedure for protocols).
+    router.shutdown().await?;
+
     Ok(())
 }
 ```
diff --git a/src/app/docs/concepts/tickets/page.mdx b/src/app/docs/concepts/tickets/page.mdx
index 02329fd2..74f4cee2 100644
--- a/src/app/docs/concepts/tickets/page.mdx
+++ b/src/app/docs/concepts/tickets/page.mdx
@@ -40,16 +40,6 @@ It's worth point out this setup is considerably better than full peer-2-peer sys

 When you create a document ticket, you're creating a secret that allows someone to read or write to a document. This means that you should be careful about sharing document tickets with people you don't trust. What's more, someone who has a document ticket can use it to create new tickets for the same document. This means that if you share a document ticket with someone, they can use it to create new tickets for the same document, and share those tickets with others.

-## Creating Tickets
-
-| Type | Command |
-| --- | --- |
-| `node` | [`dumbpipe listen`](https://dumbpipe.dev) |
-| `blob` | `iroh blob share` |
-| `doc` | [`iroh doc share`](/docs/api/doc-share) |
-
-by default, tickets only include the nodeID If you still want to add relay and direct addresses to the ticket, you can pass `--addr-options RelayAndAddresses` to the ticket generation commands.
-
 ## Tickets in Apps

 Using tickets in your app comes down to what you're trying to accomplish. For short-lived sessions where both devices are online at the same time, tickets are an incredibly powerful way to bootstrap connections, and require no additinonal servers for coordination.
diff --git a/src/app/docs/quickstart/page.mdx b/src/app/docs/quickstart/page.mdx
index 4b552596..ff746254 100644
--- a/src/app/docs/quickstart/page.mdx
+++ b/src/app/docs/quickstart/page.mdx
@@ -75,9 +75,9 @@ In that case, you'll need to make sure you're not only dialing by `NodeId`, but

-## Using an existing protocol: [iroh-blobs](/proto/iroh-blobs)
+## Using an existing protocol: iroh-blobs

-Instead of writing our own protocol from scratch, let's use iroh-blobs, which already does what we want:
+Instead of writing our own protocol from scratch, let's use [iroh-blobs](/proto/iroh-blobs), which already does what we want:
 It loads files from your file system and provides a protocol for seekable, resumable downloads of these files.

 ```rust
@@ -88,8 +88,7 @@ async fn main() -> anyhow::Result<()> {
     let endpoint = Endpoint::builder().discovery_n0().bind().await?;

     // We initialize the Blobs protocol in-memory
-    let local_pool = LocalPool::default();
-    let blobs = Blobs::memory().build(&local_pool, &endpoint);
+    let blobs = Blobs::memory().build(&endpoint);

     // ...

@@ -118,8 +117,7 @@ async fn main() -> anyhow::Result<()> {
     let endpoint = Endpoint::builder().discovery_n0().bind().await?;

     // We initialize the Blobs protocol in-memory
-    let local_pool = LocalPool::default();
-    let blobs = Blobs::memory().build(&local_pool, &endpoint);
+    let blobs = Blobs::memory().build(&endpoint);

     // Now we build a router that accepts blobs connections & routes them
     // to the blobs protocol.
@@ -133,19 +131,18 @@ async fn main() -> anyhow::Result<()> {

     // Gracefully shut down the router
     println!("Shutting down.");
     router.shutdown().await?;
-    local_pool.shutdown().await;

     Ok(())
 }
 ```

-I've also taken the liberty to make sure that we're gracefully shutting down the `Router` and all its protocols with it, as well as the `LocalPool` that the iroh-blobs library needs to operate.
+I've also taken the liberty to make sure that we're gracefully shutting down the `Router` and all its protocols with it; in this case, that's only iroh-blobs.

 ## Doing something

 So far, this code works, but doesn't actually do anything besides spinning up a node and immediately shutting down.
-Even if we put in a `tokio::time::timeout` or `tokio::signal::ctrl_c().await` in there, it *would* actually respond to network requests for the blobs protocol, but even that is practically useless as we've stored no blobs to respond with.
+If we put in a `tokio::time::timeout` or `tokio::signal::ctrl_c().await` in there, it *would* actually respond to network requests for the blobs protocol, but those responses are practically useless as we've stored no blobs to respond with.

 Here's our plan for turning this into a CLI that actually does what we set out to build:
 1. We'll grab a [`Blobs::client`](https://docs.rs/iroh-blobs/latest/iroh_blobs/net_protocol/struct.Blobs.html#method.client) to interact with the iroh-blobs node we're running locally.
@@ -162,18 +159,23 @@ Here's our plan for turning this into a CLI that actually does what we set out t
 Phew okay! Here's how we'll grab an iroh-blobs client and look at the CLI arguments:

 ```rust
-let blobs = blobs.client();
+// We use a blobs client to interact with the blobs protocol we're running locally:
+let blobs_client = blobs.client();

-let args = std::env::args().collect::<Vec<_>>();
-match &args.iter().map(String::as_str).collect::<Vec<_>>()[..] {
-    [_cmd, "send", path] => {
+// Grab all passed-in arguments; the first one is the binary itself, so we skip it.
+let args: Vec<String> = std::env::args().skip(1).collect();
+// Convert to &str, so we can pattern-match easily:
+let arg_refs: Vec<&str> = args.iter().map(String::as_str).collect();
+
+match arg_refs.as_slice() {
+    ["send", filename] => {
         todo!();
     }
-    [_cmd, "receive", ticket, path] => {
+    ["receive", ticket, filename] => {
         todo!();
     }
     _ => {
-        println!("Couldn't parse command line arguments.");
+        println!("Couldn't parse command line arguments: {args:?}");
         println!("Usage:");
         println!("    # to send:");
         println!("    cargo run --example transfer -- send [FILE]");
@@ -189,23 +191,26 @@ Now all we need to do is fill in the `todo!()`s one-by-one:

 ### Getting ready to send

-If we want to make a file available over the network with iroh-blobs, we first need to index this file.
+If we want to make a file available over the network with iroh-blobs, we first need to hash this file.

 What does this step do?
-It hashes the file using BLAKE3 and stores a so-called "outboard" for that file.
-This outboard file contains information about hashes for parts of this file.
+It hashes the file using [BLAKE3](https://en.wikipedia.org/wiki/BLAKE_(hash_function)) and stores a so-called ["outboard"](https://github.com/oconnor663/bao?tab=readme-ov-file#outboard-mode) for that file.
+This outboard file contains information about hashes of parts of this file.
 All of this enables some extra features with iroh-blobs like automatically verifying the integrity of the file *during* streaming, verified range downloads and download resumption.

 ```rust
-let abs_path = PathBuf::from_str(path)?.canonicalize()?;
+let filename: PathBuf = filename.parse()?;
+let abs_path = std::path::absolute(&filename)?;

-println!("Indexing file.");
+println!("Hashing file.");

-let blob = blobs
-    .add_from_path(abs_path, true, SetTagOption::Auto, WrapOption::NoWrap)
+// keep the file in place and link it, instead of copying it into the in-memory blobs database
+let in_place = true;
+let blob = blobs_client
+    .add_from_path(abs_path, in_place, SetTagOption::Auto, WrapOption::NoWrap)
     .await?
     .finish()
     .await?;
@@ -221,7 +226,7 @@ This ticket contains the `NodeId` of our `Endpoint` as well as the file's BLAKE3
 let node_id = router.endpoint().node_id();
 let ticket = BlobTicket::new(node_id.into(), blob.hash, blob.format)?;

-println!("File analyzed. Fetch this file by running:");
+println!("File hashed. Fetch this file by running:");
 println!("cargo run --example transfer -- receive {ticket} {path}");

 tokio::signal::ctrl_c().await?;
@@ -236,12 +241,13 @@ On the connection side, we got the `ticket` and the `path` from the CLI argument
 With them parsed, we can call `blobs.download` with the information contained in the ticket and wait for the download to finish:

 ```rust
-let path_buf = PathBuf::from_str(path)?;
-let ticket = BlobTicket::from_str(ticket)?;
+let filename: PathBuf = filename.parse()?;
+let abs_path = std::path::absolute(filename)?;
+let ticket: BlobTicket = ticket.parse()?;

 println!("Starting download.");

-blobs
+blobs_client
     .download(ticket.hash(), ticket.node_addr().clone())
     .await?
     .finish()
@@ -250,22 +256,29 @@ blobs
 println!("Finished download.");
 ```

-As a final step, we'll copy the file we just downloaded to the desired file path:
+As a final step, we'll export the file we just downloaded into our blobs database out to the desired file path:

 ```rust
 println!("Copying to destination.");

-let mut file = tokio::fs::File::create(path_buf).await?;
-let mut reader = blobs.read_at(ticket.hash(), 0, ReadAtLen::All).await?;
-tokio::io::copy(&mut reader, &mut file).await?;
+blobs_client
+    .export(
+        ticket.hash(),
+        abs_path,
+        ExportFormat::Blob,
+        ExportMode::Copy,
+    )
+    .await?
+    .finish()
+    .await?;

 println!("Finished copying.");
 ```

-This first download the file completely into memory, then copy that memory into a file in two steps.
+This first downloads the file completely into memory, then copies it from memory to file in a second step.

-There's ways to make this work without having to store the whole file in memory, but that involves setting up `Blobs::persistent` instead of `Blobs::memory` and using `blobs.export` with `EntryMode::TryReference`.
+There are ways to make this work without having to store the whole file in memory, but those involve setting up `Blobs::persistent` instead of `Blobs::memory` and using `blobs.export` with `ExportMode::TryReference`.

 We'll leave these changes as an exercise to the reader 😉

@@ -281,5 +294,3 @@ If you're hungry for more, check out
 - [other examples](/docs/examples),
 - other available [protocols](/proto) or
 - a longer guide on [how to write your own protocol](/docs/protocols/writing).
-
-If rust is not actually your jam, make sure to check out the [language bindings](/docs/sdks)!
diff --git a/src/app/docs/tour/1-endpoints/page.mdx b/src/app/docs/tour/1-endpoints/page.mdx
index 82a609c1..d0714524 100644
--- a/src/app/docs/tour/1-endpoints/page.mdx
+++ b/src/app/docs/tour/1-endpoints/page.mdx
@@ -1,6 +1,6 @@
 import { PageLink } from '@/components/PageNavigation';

-# 2. Endpoints
+# 1. Endpoints

 The journey of a connection starts with an endpoint. An endpoint is one of the two termination points for a connection. It’s one of the tin cans that the wire is connected to. Because this is peer-2-peer, endpoints both _initiate_ *and* _accept_ connections. The endpoint handles both.

@@ -9,10 +9,10 @@ Let's first add iroh to our project. From the project root run `cargo add iroh`
 ```bash
 $ cargo add iroh
     Updating crates.io index
-      Adding iroh v0.31.0 to dependencies
+      Adding iroh v0.32.1 to dependencies
              Features:
-             + discovery-pkarr-dht
              + metrics
+             - discovery-pkarr-dht
              - discovery-local-network
              - examples
              - test-utils
@@ -31,7 +31,7 @@ In the end your `Cargo.toml` file's `[dependencies]` section should look somethi
 ```toml
 [dependencies]
 anyhow = "1.0.95"
-iroh = "0.31.0"
+iroh = "0.32.1"
 rand = "0.8.5"
 tokio = "1.43.0"
 ```
diff --git a/src/app/docs/tour/3-discovery/page.mdx b/src/app/docs/tour/3-discovery/page.mdx
index dcca6912..48d1edd9 100644
--- a/src/app/docs/tour/3-discovery/page.mdx
+++ b/src/app/docs/tour/3-discovery/page.mdx
@@ -1,6 +1,6 @@
 import { PageLink } from '@/components/PageNavigation';

-# Discovery
+# 3. Discovery

 Discovery is the glue that connects a [Node Identifier](/docs/concepts/endpoint#node-identifiers) to something we can dial. There are a few different types of discovery services, but for all of them you put a `NodeID` in, and get back either the home relay of that node, or IP addresses to dial.

@@ -41,27 +41,22 @@ This will change our `Cargo.toml` file `[dependencies]` section to look like thi
 ```toml
 [dependencies]
 anyhow = "1.0.95"
-iroh = { version = "0.31.0", features = ["discovery-local-network"] }
+iroh = { version = "0.32.1", features = ["discovery-local-network"] }
 rand = "0.8.5"
 tokio = "1.43.0"
 ```

-And with that we can set up local discovery. It does add some complexity to the endpoint setup:
+And with that we can set up local discovery:

 ```rust
-use iroh::discovery::local_swarm_discovery::LocalSwarmDiscovery;
 use iroh::{Endpoint, RelayMode, SecretKey};

 #[tokio::main]
 async fn main() -> anyhow::Result<()> {
-    let key = SecretKey::generate(rand::rngs::OsRng);
-    let id = key.public();
-
     let builder = Endpoint::builder()
-        .secret_key(key)
         .relay_mode(RelayMode::Default)
         .discovery_n0()
-        .discovery(Box::new(LocalSwarmDiscovery::new(id)?));
+        .discovery_local_network();

     let endpoint = builder.bind().await?;

     println!("node id: {:?}", endpoint.node_id());
diff --git a/src/app/docs/tour/4-protocols/page.mdx b/src/app/docs/tour/4-protocols/page.mdx
index f5224bb9..d49beeac 100644
--- a/src/app/docs/tour/4-protocols/page.mdx
+++ b/src/app/docs/tour/4-protocols/page.mdx
@@ -1,6 +1,6 @@
 import { PageLink } from '@/components/PageNavigation';

-# Protocols
+# 4. Protocols

 At this point, we’re connected, yay! Now we just have to… do something… with that connection. That’s where protocols come in.

@@ -11,21 +11,20 @@ Coming from the world of HTTP client/server models, protocols are kinda like req
 Protocols are an ever-growing topic, but to give you a basic let's add the [blobs](/proto/iroh-blobs) protocol.
 First we need to add it to our depdendencies:

 ```
-cargo add iroh-blobs
+cargo add iroh-blobs --features=rpc
 ```

 then adjust our code:

 ```rust
 use iroh::{protocol::Router, Endpoint};
-use iroh_blobs::{net_protocol::Blobs, util::local_pool::LocalPool};
+use iroh_blobs::net_protocol::Blobs;

 #[tokio::main]
 async fn main() -> anyhow::Result<()> {
     let endpoint = Endpoint::builder().discovery_n0().bind().await?;

-    let local_pool = LocalPool::default();
-    let blobs = Blobs::memory().build(local_pool.handle(), &endpoint);
+    let blobs = Blobs::memory().build(&endpoint);

     // build the router
     let router = Router::builder(endpoint)
@@ -34,8 +33,7 @@ async fn main() -> anyhow::Result<()> {
         .await?;

     router.shutdown().await?;
-    drop(local_pool);
-    drop(tags_client);
+
     Ok(())
 }
 ```
diff --git a/src/app/docs/tour/5-routers/page.mdx b/src/app/docs/tour/5-routers/page.mdx
index d22ddd92..2e33edcc 100644
--- a/src/app/docs/tour/5-routers/page.mdx
+++ b/src/app/docs/tour/5-routers/page.mdx
@@ -1,6 +1,6 @@
 import { PageLink } from '@/components/PageNavigation';

-# Routers
+# 5. Routers

 Most apps will use more than one protocol. A router let’s you stack protocols on top of iroh's peer-to-peer connections. Routers handle the *accept* side of an iroh endpoint, but the connection initiation side is still handled by the protocol instance itself.

@@ -14,15 +14,14 @@ Then we can setup gossip & add it to our router:

 ```rust
 use iroh::{protocol::Router, Endpoint};
-use iroh_blobs::{net_protocol::Blobs, util::local_pool::LocalPool};
+use iroh_blobs::net_protocol::Blobs;
 use iroh_gossip::{net::Gossip, ALPN};

 #[tokio::main]
 async fn main() -> anyhow::Result<()> {
     let endpoint = Endpoint::builder().discovery_n0().bind().await?;

-    let local_pool = LocalPool::default();
-    let blobs = Blobs::memory().build(local_pool.handle(), &endpoint);
+    let blobs = Blobs::memory().build(&endpoint);

     let gossip = Gossip::builder().spawn(endpoint.clone()).await?;

@@ -34,7 +33,6 @@ async fn main() -> anyhow::Result<()> {
         .await?;

     router.shutdown().await?;
-    drop(local_pool);
     Ok(())
 }
 ```
diff --git a/src/app/docs/tour/6-conclusion/page.mdx b/src/app/docs/tour/6-conclusion/page.mdx
index 17e603a8..04a123a9 100644
--- a/src/app/docs/tour/6-conclusion/page.mdx
+++ b/src/app/docs/tour/6-conclusion/page.mdx
@@ -1,6 +1,6 @@
 import { PageLink } from '@/components/PageNavigation';

-# Things iroh doesn’t do out of the box
+# 6. Things iroh doesn’t do out of the box

 Before we go, let’s talk through a little of what iroh *doesn’t* cover:

diff --git a/src/app/docs/tour/page.mdx b/src/app/docs/tour/page.mdx
index 03fa195a..146d7425 100644
--- a/src/app/docs/tour/page.mdx
+++ b/src/app/docs/tour/page.mdx
@@ -38,8 +38,12 @@ This paragraph touches on five key points worth understanding in iroh:
 We'll touch on each of these on the tour, and by the end you should have a good understanding of how they all fit together.

 The code we'll be writing here will build & execute, but it won't _do_ much. We'll link to examples and other resources along the way so you can explore further.
-
-
-
-
-
+
diff --git a/src/components/GithubStars.jsx b/src/components/GithubStars.jsx
index 0dd0aeec..331aef49 100644
--- a/src/components/GithubStars.jsx
+++ b/src/components/GithubStars.jsx
@@ -6,7 +6,7 @@ export default function GithubStars(props) {
   return (
-        3.6k
+        4.0k
   )
 }