
Passive Request

Stephen Oliver edited this page Mar 30, 2017 · 3 revisions

Proposed feature of Freenet 0.8: a kind of persistent request at the node level. When you make a request with the persistent request flag enabled and the request fails, the request persists on the chain of nodes where the data was searched for; if the data is later inserted or found, it is returned along that path to the requestor.

Other variants of the proposal are detailed on this page too.

Passive request

There are two main reasons to have persistent requests, both relate to Freenet apps which would otherwise have to poll a series of keys:

  • To reduce polling latency.
  • To reduce load caused by polling.

Passive requests are also related to Publish-subscribe streams. A pub-sub stream can be created easily enough from a set of passive requests for a numbered series of keys (very similar to USKs, but much lower latency). Both passive requests and pub/sub streams must support an unsubscribe mechanism, and must cease to exist when there are no subscribers!

Reducing polling latency

Passive requests would obviously reduce polling latency, because an answer can arrive the moment the data is inserted or found.

If the objective of passive requests is solely to reduce polling latency, we can greatly simplify the implementation: see simple expendable passive requests. The cost is having to occasionally send out poll requests to avoid stale passive requests caused by nodes disconnecting, the optimal route changing, and so on.

Reducing polling load

If passive requests are persistent, that is to say if they automatically reroute themselves when the network changes, so the client does not need to keep sending them, then we can go considerably further: The client only needs to send a poll request once, and it will be automatically maintained by the network.

But even better, we can avoid many of those requests in the first place. If we get a request for a key, and we already have a subscription for that key with the node we would have routed it to, and we know that the request would not have any HTL left, then we can simply fail the request immediately. See Per-node failure tables for a related question, which this would supersede.
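As a sketch of this shortcut (all names here are illustrative assumptions; Freenet's actual failure-table code differs), a node could check whether the peer it would route to already holds a subscription for the key, and whether the request has run out of HTL:

```python
# Illustrative sketch, not Freenet code: fail a request immediately if
# the peer we would route to is already subscribed for the key on our
# behalf and the request could not travel any further anyway.

class Peer:
    def __init__(self, name, location):
        self.name = name
        self.location = location  # position in the (simplified) keyspace

def route_candidate(key_location, peers):
    # Simplified greedy routing: the peer closest to the key wins.
    return min(peers, key=lambda p: abs(p.location - key_location))

def handle_request(key, key_location, htl, peers, subscriptions):
    """subscriptions maps key -> peer we already hold a subscription with."""
    nxt = route_candidate(key_location, peers)
    if subscriptions.get(key) is nxt and htl <= 1:
        # The routed-to node is already watching this key for us, and the
        # request has no HTL left to discover anything new: terminate it.
        return ("fail", nxt.name)
    return ("forward", nxt.name)
```

The point of the sketch is the early-exit branch: a request that would only retrace an existing subscription with no remaining HTL does no useful work.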

Unfortunately, implementing a truly persistent passive request mechanism is likely to be challenging: see true persistent passive requests.


It might be safer, locally speaking, to send a bunch of passive requests once (or infrequently) than to constantly poll for unpopular data. However, it might also make life easier for a powerful traffic analyst: he might be able to trace the entire flow from the insert, through all the subscribers, back home. Then again, if he can do this, he can probably trace requestors anyway. Plainly we are in need of some countermeasures!


It is likely that sometimes nodes will subscribe to a whole bunch of keys at once, e.g. the missing blocks in a splitfile which the client has requested out of band be reinserted. We need to return the data in such a way that transfers don't time out or cause problems for load management. I am reluctant to make the transfer optional, for the same reason that you can't probe for a key without requesting it... but there needs to be some way to schedule the transfers, or something.

See tree passive requests for more details.

Inverse passive request

A hypothetical, and possibly insecure, mechanism for improved persistence of data. It is similar to passive requests, but upside-down: instead of remembering who wants the data, we'd remember who has it. A node whose clients want to upload semi-permanent data would do an insert with a special flag. If the flag is set, the nodes would remember who sent the data: when the data is purged from the store by the LRU, the memory of who sent it remains. If the node later gets a request for the data, it can retrieve it along the old path. The insert is critical here: there must be a bandwidth cost, and proof that the data existed in the first place, otherwise this sort of mechanism is far too easy to overflow.

Obviously this sort of data would have to persist (and therefore be cancellable), and it represents a security risk for the inserter: there is a persistent trail leading back to him, if his adversary compromises each node on the chain and retrieves the intact data. There may be other security issues (e.g. timing attacks to identify the originator given IPRs on many nodes); this has not been studied in detail.

There are similar rerouting issues with IPRs as with passive requests. For the latter, ultra-lightweight passive requests avoid the issue by being deliberately unreliable and requiring the owner to periodically re-request (this has the additional benefit of avoiding any load management issues). The difference is that for IPRs to be useful, they have to persist over a timescale longer than the expiration of the data from the datastore; but that may still be a useful approach. Otherwise we need to deal with these problems directly through cancellation, explicit rerouting, and rationing, as with passive requests.

The advantage is that while even IPRs do not provide permanent content storage (because the queue can overflow, and because of renewal issues), they should provide a very high level of confidence that data is available. And they save space. For example, in a backup/snapshot system, the data on the filesystem can be inserted via IPRs and pulled from the filesystem when it is requested (assuming that the daemon tracks all data via inotify and sends a cancellation whenever a change is made). Thus, while we cannot provide secure backup for personal files, any file which is still on a computer backed up by the snapshot system should be retrievable.

There have been discussions (or at least messages) on this on the mailing list, search the archives.

Ultra-lightweight passive request

We record, for a limited period (around an hour), which nodes we have routed a specific key to, and which have requested it. When the key is found, we offer the key to all of those nodes. If a node receives an offer for a key it has previously fetched, it will remember it for a limited period, and requests will try the offers first. If we still want the key for local requests, or if it's been requested by other nodes, we will queue a request to fetch the key. So the end result is that the key should propagate very quickly once it is found, across the graph of nodes that have requested the key recently. However, this is not a true passive request, because it is unreliable: We do not tell downstream nodes when a connection is lost. So it is still necessary to fetch the key periodically.
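The bookkeeping described above can be sketched as a small table (names, field layout, and the one-hour figure as a constant are illustrative; this is not Freenet's actual data structure):

```python
import time

# Hypothetical ULPR table sketch: for each key we remember, for about an
# hour, which peers asked us for it and which peers we routed it to, so
# that a later find can be offered to all of them.

ULPR_LIFETIME = 3600  # seconds; "a limited period (around an hour)"

class UlprEntry:
    def __init__(self):
        self.requesters = set()  # peers that requested the key from us
        self.routed_to = set()   # peers we routed the request to
        self.expires = time.time() + ULPR_LIFETIME

class UlprTable:
    def __init__(self):
        self.entries = {}

    def record(self, key, requester, routed_to):
        e = self.entries.setdefault(key, UlprEntry())
        e.requesters.add(requester)
        e.routed_to.update(routed_to)

    def on_found(self, key, now=None):
        """Peers we should offer the key to. The offer is best-effort:
        an expired or missing entry just means nobody gets an offer."""
        e = self.entries.pop(key, None)
        if e is None or (now or time.time()) > e.expires:
            return set()
        return e.requesters | e.routed_to
```

Because entries simply expire rather than being actively maintained, this matches the text's point that ULPRs are unreliable and requesters must still re-fetch periodically.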

Since the key will be propagated quickly if it is found, we can limit the request sent for any given key:

Locally, if a key is queued to be fetched, with MaxRetries = -1 (i.e. keep fetching it forever until it is found), we will only send 3 requests for the key every half hour, regardless of its priority and regardless of how many individual requests or clients want that key.

At the node level, if a key is requested which we have recently requested from more than N nodes, we will send a RecentlyFailed message to terminate the request. This is similar to a DataNotFound message, but it has a timeout, which can be anything up to 30 minutes, after which the requester should try again. These timeouts are taken into account when deciding whether we have requested the key recently, as are the HTL of the request, whether the nodes have been restarted recently, and whether there is a node closer to the key than the nodes we have tried already. This uses the Per-node failure table. Currently N=3, but it is likely to be increased shortly, as we have a problem with bogus RecentlyFailed messages.
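A minimal sketch of this decision follows. N and the 30-minute cap come from the text; the table layout and function name are assumptions, and the real decision also weighs the request's HTL, node restarts, and whether an untried node is closer to the key:

```python
import time

N = 3                    # distinct nodes recently tried (from the text)
MAX_TIMEOUT = 30 * 60    # maximum RecentlyFailed timeout, in seconds

def should_recently_fail(key, failure_table, now=None):
    """failure_table maps key -> list of (peer, failed_at) attempts.
    Returns ("RecentlyFailed", timeout_seconds) or None."""
    now = time.time() if now is None else now
    recent = [(p, t) for p, t in failure_table.get(key, [])
              if now - t < MAX_TIMEOUT]
    if len({p for p, _ in recent}) >= N:
        # Ask the requester to retry once the newest failure has aged out.
        timeout = max(MAX_TIMEOUT - (now - t) for _, t in recent)
        return ("RecentlyFailed", int(timeout))
    return None  # let the request proceed normally
```

The timeout returned shrinks as the recorded failures age, so a late requester waits less than an early one.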

ULPRs greatly reduce the cost of polling for keys in multi-user chat applications such as Frost, FMS, etc. Note that like per-node failure tables in general, ULPRs require keeping a little more information about recent requests than we would otherwise have kept. This may or may not be a security issue depending on your threat model, a similar issue to keeping detailed logs: somebody could hack the node and read the data. For most purposes this is a non-issue: Anyone who can remotely compromise lots of nodes and doesn't care about detection can do pretty much whatever they want, there are much easier attacks on Freenet.

Simple expendable passive request

A very simple means to implement passive requests. Does not provide true persistence.

When a request fails with the passive request flag enabled, we record a passive request entry:

  • the key
  • the node which sent the request to us
  • the nodes which we sent the request to
  • the time (the request will disappear after a certain period)

If we get the key later on, then we send it to all the nodes which have requested the key from us.

If a node disconnects:

  • If it had sent the request to us, we remove it from the set of nodes which have requested the key. If that set is then empty, we remove the passive request entry, because nobody is listening any more, and we tell each of the nodes we sent requests to for this key to unsubscribe us.
  • If we had sent a request to it, we remove it from the list of nodes we have sent requests for this key to.

A node can send us an unsubscribe message; if we get one, we remove that node from the list of nodes which sent the request to us, and if they are all gone, we remove the passive request entry, and tell all the nodes we have sent the request to that we are unsubscribing from them too.
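The entry and the disconnect/unsubscribe handling above can be sketched directly (all names are illustrative; the timestamp field stands in for the "disappears after a certain period" rule):

```python
import time

class PassiveRequestEntry:
    def __init__(self, key):
        self.key = key
        self.requesters = set()      # nodes that sent the request to us
        self.routed_to = set()       # nodes we sent the request to
        self.created = time.time()   # entry expires after some period

class PassiveRequestTable:
    def __init__(self):
        self.entries = {}

    def record(self, key, requester, routed_to):
        e = self.entries.setdefault(key, PassiveRequestEntry(key))
        e.requesters.add(requester)
        e.routed_to.update(routed_to)

    def on_unsubscribe(self, key, node):
        """Remove one subscriber. If the entry dies, return the upstream
        nodes we must in turn unsubscribe from; otherwise None."""
        e = self.entries.get(key)
        if e is None:
            return None
        e.requesters.discard(node)
        if not e.requesters:
            del self.entries[key]
            return e.routed_to
        return None

    def on_disconnect(self, node):
        """Drop a peer everywhere. Returns (key, upstream_nodes) pairs
        for entries that died so we can propagate the unsubscribes."""
        dead = []
        for key, e in list(self.entries.items()):
            e.requesters.discard(node)
            e.routed_to.discard(node)
            if not e.requesters:
                dead.append((key, e.routed_to))
                del self.entries[key]
        return dead
```

Note how the unsubscribe cascades: an entry with no remaining requesters returns the upstream nodes, and the caller is expected to send them unsubscribe messages in turn.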

Now, why does this not provide true persistence? We never re-route a request. If the locations of our adjacent nodes change so that the request should have been routed differently, we don't change it. If a new node connects which we should have routed the request to, we don't change it either; and if a node disconnects, we don't route to the next-best node. We rely entirely on new requests to establish the correct route. A truly persistent passive request would automatically reroute so that clients don't have to keep on polling. So this simplest possible passive requests implementation doesn't completely solve the "load caused by polling" problem; clients will have to send more requests from time to time. But it DOES reduce the polling latency, to the point that near-real-time applications (e.g. publish-subscribe streams used for, say, broadcast audio streaming) should become feasible.

True persistent passive request

A true passive requests implementation, which not only reduces polling latency, but also eliminates the need to send re-requests. Requests will have a passive-request flag, just like with simpler implementations.

For each key which we have passive requests for, we record:

  • The key.
  • The list of nodes subscribed to that key, which we will forward the data to if it passes through this node.
  • The nodes that we routed the request to last time, which we would route it to now if we got a normal request for it. This is essentially a persistent route.
  • Any "peer" nodes; nodes which already had the request when we routed to them.

Just as with the simpler version, if a subscriber node disconnects, we remove it and check whether any subscribers are left. If we get the data, we forward it to all of the above nodes. If a routed-to node disconnects, we re-route the request to the next-best node (according to routing) which isn't already in the route. The problem is that we need to do this without creating loops which persist forever even though there are no real subscribers: how do we do this? If the node locations change, or a new node is added, so that we need to re-route, again we do so.
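The "re-route to the next-best node not already in the route" step might look like this (peers as (name, location) pairs; the flat distance metric is a simplification of Freenet's circular keyspace, and loop avoidance here is only the local "not already in the route" check, not the full tree mechanism discussed later):

```python
# Illustrative sketch: when a routed-to peer is lost, extend the
# persistent route with the closest peer that isn't on it already.

def reroute(key_location, peers, current_route, lost_peer):
    route = [p for p in current_route if p != lost_peer]
    candidates = [p for p in peers if p not in route and p != lost_peer]
    if not candidates:
        # Nowhere left to route; keep what we have until topology changes.
        return route
    best = min(candidates, key=lambda p: abs(p[1] - key_location))
    return route + [best]
```

This local rule keeps the route fresh, but as the text notes, it cannot by itself prevent network-wide cycles.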

The objective is to keep an up-to-date route. Any nodes which would not be part of the request path for a regular request shouldn't be part of the (outgoing) persistent route. But we also have to have a persistent tree of requestors. What we cannot do is have one ID per requestor, with all of them persisting right up to the most specialized node, because that node might then have to record hundreds of thousands of IDs, and worse, each of them would have to route a request to it. (This is what happens now, actually, if a large number of nodes all make requests for a key that doesn't exist... that's half the justification for passive requests.)

There has been much said on the mailing list about this, which should be added here.

Tree passive request

We have to solve coalescing problems here; that is what makes it so complicated. Failure modes are what this is all about: normal operation is easy; what matters is what happens when you lose a connection, when you get a better route, and so on. The basic principle is that the passive request metastructure is a tree. Each node on the tree knows the path to the current root: not in terms of actual nodes, but in terms of UIDs which are generated at random and are different for each node in the passive request metastructure. Note that there may also be metastructure that isn't tree-form: we will happily accept offers of data through the other structures, if we need it, but it doesn't matter for routing purposes. And data, when found, will go up as well as down the tree, and sometimes perhaps sideways.

  • After = closer to the root (end of a request)
  • Before = closer to the branches (beginning of a request)

If we lose a node


The previous node, the one which loses the connection, is responsible for rerouting. It temporarily takes root: it notifies all downstream nodes that the upstream root is no longer accessible and that it is now the root. It then reroutes.

The main risk here is eating our tail. If we route to a node which is already subscribed, we will normally become subscribed to it and allow it to take root. A reroute request will be rejected if the node it comes from is known to be on our path-to-root i.e. if we have a circle. However, if the root hasn't been updated yet, it may be accepted. If we then get an update, we have to send a rejection. The node must be ready for a late (possibly indefinitely late) rejection, and reroute to a different node if it gets one.
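The path-to-root check can be sketched very simply (UID format and the function names are assumptions; the late-rejection handling described above would sit in the surrounding protocol logic):

```python
import secrets

# Sketch of the loop check: each node in the subscription tree is
# identified by a random UID, and every node knows its ancestors' UIDs.
# A reroute request arriving from one of our own ancestors would close
# a circle ("eat our tail") and must be rejected.

def new_uid():
    return secrets.token_hex(8)  # 64 random bits, hex-encoded

def accept_reroute(path_to_root_uids, requester_uid):
    # Reject if the requester is already on our path to the root.
    return requester_uid not in path_to_root_uids
```

As the text notes, this check can give a stale answer while root updates are in flight, so a node must also be prepared to receive, and act on, an arbitrarily late rejection.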


We send a LostDownstream message towards the old root. Each node on the path evaluates whether it still needs to be subscribed. If it doesn't, it unsubscribes from upstream; the root may end up abdicating, dissolving this end of the network. In that case, if rerouting reaches us, we will have to re-establish it. Therefore it may make sense to have a cooldown time to allow rerouting to reach the old root.

If we find a better route

If because of swapping our upstream root is no longer the ideal route, we can again take the root, tell upstream we are no longer subscribing to them (triggering a cooldown period after which it will dissolve if we haven't gone back to it), and reroute. The same logic for safe rerouting applies.

Note that due to the likely level of churn on the network, we may need some inertia here.

Bandwidth cost of rerouting

We don't necessarily have to keep all the UIDs; we might just generate a reroute ID for this specific rerouting. However, to prevent eating our tail, as far as I can see, we will need to relay some sort of small message to all our downstream nodes. It may be possible to do this efficiently at the cost of some latency, but it will be necessary, and therefore costly on a big subscription tree.

Maybe it is possible to reroute, and then send a message up the tree when we latch on to an existing structure? If it comes back to us, we move on. This might use less bandwidth (proportional to the path length times the number of nodes we accidentally latch onto), but it would probably be rather slow.


What if we end up routing to the ideal node, but it happens to be very slow, very small, poorly connected, or malicious? Well, the data may be found further down the tree. In that case, we propagate it across as much of the tree as we can reach (we should know if there are sideways links; these don't affect routing or rerouting, but can be useful data conduits, as with ULPRs). Also, routing should be biased so that lower-capacity nodes get requests for a proportionately smaller part of the keyspace: see Routing capacity bias.

Fundamentally, Freenet is not 100% reliable. If an attacker can monopolise a specific part of the keyspace, then he can censor keys in that area, and there isn't much we can do about this: rerouting only works well either for very popular keys, or for nodes trying to block keys that aren't exclusively their territory. Applications of passive requests will be subject to unreliability just like the rest of Freenet, and the standard solutions are available: FEC! Take the example of an audio stream. The stream is divided into frames. The top-level metadata for a frame consists of 4 SSK data blocks and 2 SSK check blocks. We put these back together to get 4KB of metadata, which gives us around 40 CHKs to fetch, so each frame is 40*32KB = 1.25MB: around 10 seconds for a 1 megabit video stream, or 100 seconds for a 100 kilobit audio stream. A listener would subscribe to all six blocks for a given frame. Since SSK blocks cannot be reinserted, once enough blocks have arrived to decode the frame, the listener would cancel his subscriptions for the remaining blocks.
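The frame arithmetic can be checked with a quick calculation (a worked example, not protocol code; it uses the 40 x 32KB figure and treats the stream bitrates as decimal megabits/kilobits):

```python
# Per-frame size and playback time at the two bitrates from the text.

CHK_SIZE_BYTES = 32 * 1024   # one CHK payload
CHKS_PER_FRAME = 40          # CHKs referenced by a frame's 4KB metadata

frame_bytes = CHKS_PER_FRAME * CHK_SIZE_BYTES   # 1,310,720 bytes
frame_megabits = frame_bytes * 8 / 1_000_000    # ~10.5 Mbit

seconds_at_1mbit = frame_megabits / 1.0    # ~10 s for a 1 Mbit video stream
seconds_at_100kbit = frame_megabits / 0.1  # ~105 s for a 100 kbit audio stream
```

So each frame covers roughly 10 seconds of 1-megabit video or a bit over 100 seconds of 100-kilobit audio, matching the figures above.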

Cost of storage

What is the ongoing cost of a subscription? Must we require a node to regularly renew its subscriptions, in order to prevent this being used as a denial-of-service attack? Clearly there is limited space for subscriptions, although it's not that limited: we can handle many thousands of subscriptions without too much difficulty. A client, or node, should be able to unsubscribe explicitly, and it unsubscribes automatically on shutting down. Apart from that, if a passive request falls off the end, we can tell its owner that it has been unsubscribed; it can then reroute, if it's important. This is essentially a load management question, so it requires a load management solution: maybe some form of token passing?

Fortunately this is a relatively easy problem. Each node knows what its capacity is for storing and rerouting passive requests. It can divide that maximum capacity up, according to:

  • Fairness. If each node wants the full amount, each node gets the same number. But if some nodes don't use their allocation, the surplus can be - temporarily! - given to other nodes.
  • Quid pro quo. Badly performing nodes may be penalized by reducing their quota.
  • Priority. Local subscriptions may trump remote ones (although this may be dangerous so may be optional), and the declared priority may matter.
  • Trust status. Darknet nodes may get more subscription capacity.
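The fairness rule in particular (equal shares, with unused allocation temporarily redistributed) can be sketched as follows; the function and its demand-driven round-robin are purely illustrative, and a real implementation would fold in the quid pro quo, priority, and trust factors above:

```python
# Sketch: divide a node's subscription capacity among peers by repeated
# equal sharing, handing surplus from under-demanding peers to others.

def divide_quota(capacity, demands):
    """demands: {node: subscriptions wanted}. Returns {node: granted}."""
    nodes = list(demands)
    grants = {n: 0 for n in nodes}
    remaining = capacity
    while remaining > 0:
        unsat = [n for n in nodes if grants[n] < demands[n]]
        if not unsat:
            break  # everyone satisfied; leftover capacity stays free
        share = max(1, remaining // len(unsat))
        progressed = False
        for n in unsat:
            take = min(share, demands[n] - grants[n], remaining)
            if take > 0:
                grants[n] += take
                remaining -= take
                progressed = True
            if remaining == 0:
                break
        if not progressed:
            break
    return grants
```

A peer that wants little gets exactly what it asks for, and the surplus flows (temporarily, on the next re-division) to the peers that want more.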

When the node needs to reduce a quota, it can tell the affected node explicitly, and kill the least important requests (according to the priority given) until it is down to the right level. A malicious node could clearly deprioritise other people's requests; but then, it could just as easily sabotage them by accepting them, not routing them, and never relaying any data. As explained above, Freenet will never be 100% reliable, and it won't function efficiently if a large proportion of the network is hostile cancer nodes; I don't think there's anything we can do about this.

Dynamic cost

Clearly there is also a dynamic cost to the node - the cost of rerouting etc. We can measure this and report it on a per request per hour basis. And we can use this information to control how many persistent requests we allow.
