Stephen Oliver edited this page Mar 30, 2017 · 2 revisions

General security status of Freenet. Other pages should be linked from here and the major security issues should be at least mentioned.

Copied from the old wiki.

The short version is that darknet is always safer than opennet, and if your friends can be trusted, it provides a reasonable degree of security.

Link level

From build 1066, we use JFKi, a Diffie-Hellman-based variant of the Just Fast Keying protocol, which supports pre-calculation of almost everything and is therefore highly resistant to denial-of-service attacks on both CPU and memory. This is partly thanks to a Summer of Code student. It is surrounded by an outer symmetric encryption layer, keyed from both nodes' identities, for invisibility. All Freenet 0.7 packets look like random data. Until late 2007, our DH code (in all versions of Freenet) was vulnerable to weak keys.

We have no fixed session bytes (unlike 0.5), even during connection setup, thanks to the outer encryption layer. However, our profile may be detectable from packet sizes and the fact that the connection is UDP, or from the fact that it isn't anything else. It may help later on to support variable data packet sizes within bulk transfers and block transfers. Traffic flow analysis will identify Freenet traffic if the attacker has the hardware.

Messages are padded (to obscure the types of messages) and sizes are randomised (to try to make trivial blocking a little more difficult). Messages are packed in such a way that if we have both data and messages to send on a connection, the higher-priority messages arrive first, but the packet will be full size and contain both. This limits the possibilities for tracing individual requests/transfers between nodes, especially when requests traverse busy nodes. "Bulk" requests (as opposed to "realtime" requests; there is a flag) tolerate significant latency, so we can have many of them running at once even on relatively slow nodes, which should help to ensure there is enough traffic. However, bursts are detectable by a powerful attacker who can both surveil data transfers between nodes and control some nodes close to the burst originator (but usually bursts are a good thing for other reasons, e.g. less time for a mobile attacker to approach the burst source).
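The packing scheme described above can be sketched as follows. This is an illustrative sketch only: the fixed packet size, the message structure, and the priority convention (lower number wins) are assumptions, not the node's actual wire format.

```python
import os

PACKET_SIZE = 1024  # hypothetical fixed on-the-wire packet size


def pack(messages, data_chunks):
    """Pack queued messages (higher priority, i.e. lower number, first)
    and then bulk data into one fixed-size packet, filling the remainder
    with random padding so every packet looks the same on the wire."""
    packet = bytearray()
    for msg in sorted(messages, key=lambda m: m["priority"]):
        if len(packet) + len(msg["payload"]) > PACKET_SIZE:
            break
        packet += msg["payload"]
    for chunk in data_chunks:
        if len(packet) + len(chunk) > PACKET_SIZE:
            break
        packet += chunk
    # Pad to full size; random bytes are indistinguishable from the
    # encrypted payload to an outside observer.
    packet += os.urandom(PACKET_SIZE - len(packet))
    return bytes(packet)
```

The point is that a single full-size packet carries both control messages and bulk data, so an observer cannot tell from packet sizes which transfers are in progress.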

The new packet format gives each message a unique IV, generated from its sequence number and from a key derived during connection setup, and generally improves things. However, it still uses PCFB (which looks increasingly doubtful, and which we should replace with CBC plus padding/ciphertext stealing), and it uses a relatively short hash length (10 bytes) for each packet.
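Per-message IV derivation of the kind described can be sketched like this. The choice of HMAC-SHA256 as the derivation function and the 16-byte IV length are assumptions for illustration; the node's actual KDF may differ.

```python
import hashlib
import hmac
import struct


def derive_iv(session_key: bytes, seq: int, iv_len: int = 16) -> bytes:
    """Derive a unique IV for message number `seq` by MACing the 8-byte
    big-endian sequence number under a key agreed at connection setup.
    Each sequence number yields a distinct, unpredictable IV."""
    mac = hmac.new(session_key, struct.pack(">Q", seq), hashlib.sha256)
    return mac.digest()[:iv_len]
```

Because the IV is a deterministic function of the sequence number, both ends compute it independently and it never needs to be sent on the wire.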

Finally, we use Rijndael with 256-bit keys and 256-bit blocks. AES, the form of Rijndael which was approved by FIPS and the NSA (including for top secret documents), uses a 128-bit block size, because most cryptanalysis involved in the standardisation process focused on that block length. So theoretically there may be attacks on Rijndael with 256-bit block size that don't work with 128-bit block size. If you hear of any, tell us!

In July 2009, Bruce Schneier discussed a related-key attack on AES. This attack is stronger against 256-bit key AES than 128-bit key AES. Its applicability to 256-bit block size AES is not discussed. In practice, a related-key attack against AES would be very hard to implement, provided that the key selection scheme and random number generator are secure. See also the discussion on the devl list. More recently, more weaknesses in AES key setup with 256-bit keys have been discovered; we may add more rounds to avoid this, as discussed in Schneier's blog.

It may make sense to switch to standard 128-bit-block AES, partly because it is better studied, but mainly because it would allow us to use hardware acceleration. Adding more rounds to AES to deal with the above issue might be helpful (but would prevent hardware acceleration); it is worth checking the current situation before taking such decisions, as it probably isn't necessary.

Request level

On Freenet 0.5, correlation attacks were possible. On Freenet 0.7, they are still possible, but they may be somewhat easier because routing works better on 0.7 (and because of how it works). Note that these attacks require the attacker to be close to the target already.

A correlation attack is essentially of the form "Peer A is doing a group of requests. I recognize that the splitfile he is fetching is X, and that he is fetching exactly 25% of it from me (and I know he has four peers). Therefore, it is probably a local request for that splitfile." This can also be helped by knowing the HTL, and another approach is to look at the closeness of the key to the specialization of the node the requests are being sent to. Essentially it is a statistical attack. The first form relies on the attacker knowing the splitfile; key closeness, on the other hand, may not have this requirement.
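The statistical reasoning can be sketched as a likelihood ratio. The hypotheses and the `p_forward` parameter (the fraction of a merely-forwarded splitfile an intermediate peer would be expected to route through the attacker) are illustrative assumptions, not a real attacker's model.

```python
from math import comb


def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of seeing k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)


def likelihood_ratio(k: int, n: int, d: int, p_forward: float = 0.05) -> float:
    """Compare two hypotheses about a peer from whom we saw k of a
    splitfile's n blocks: 'peer is the originator, spreading requests
    roughly uniformly over its d peers' (p = 1/d) versus 'peer is merely
    forwarding someone else's requests' (p = p_forward, assumed small).
    A ratio much greater than 1 suggests a local request."""
    return binom_pmf(k, n, 1 / d) / binom_pmf(k, n, p_forward)
```

For example, seeing 25 of 100 blocks from a peer known to have 4 peers strongly favours the "local request" hypothesis, whereas seeing only 5 of 100 does not.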

In any case the bottom line is if you fetch, or insert, a large splitfile, and your darknet peers (your supposedly trusted neighbours) do some statistical attacks, they can identify that the request is probably from you. This could be very bad!

Of course, you have to get a connection to the target in order to do this. On a small opennet (the current situation, sadly), this is easy: create an ubernode, make it pretend to be very many nodes, harvest the network, connect to every known node. On 0.7 darknet this is supposed to be very hard - especially if you don't know who the target is, which is the usual situation. Obviously stupid users will cause security problems by connecting to well known ubernodes which might be run by evildoers.

Many of the attacks described on the opennet attacks page may also work against darknets, albeit much more slowly. You can't harvest a darknet, but you can do an adaptive search the same as you would on opennet (the difference is each connection costs much more than it would have on opennet, as you have to persuade them to give you a connection, or compromise their computer, and it might take a long time).

Most request level attacks rely on being able to identify keys. This means either large predictable inserts (uploading big files), or many small predictable inserts over time (repeated posts on chat forums).

If files are inserted as SSKs rather than CHKs (inserting to SSK@ will generate a random SSK), and if they are never reinserted, the risk to the inserter is greatly reduced, although the risk to the requester is unchanged. Generally the inserter is the person in the more dangerous situation, although this is by no means always true. For chat forums, eventually we will have some form of tunnels, but in the meantime you should change your identity from time to time.

In the medium term we expect to implement some form of tunneling. This will probably be used only for predictable keys on most nodes. It will not secure opennet, nor the case where your darknet peers log all your requests, but for the typical use case which Freenet is targeting - that you are using darknet and the bad guys are a long way away to start with - it could deliver significantly better security with only a small performance hit.


Datastore

On Freenet 0.5, everything you requested or inserted had a good chance of being stored in your datastore, especially if it wasn't full. This means that if an attacker managed to seize your store, they could identify what you had been browsing, at least in terms of large splitfiles. On 0.7, as of build 1224, we don't store anything that was requested or inserted locally, or requested or inserted by nearby nodes: we only cache once HTL has dropped to 16 on requests, and 15 on inserts (it starts at 18, but has a 50% chance of staying at 18 on each hop). We do however cache local and nearby requests in our "client cache", which is used only by the client layer and not by the network, and which can be encrypted or automatically wiped on restart; we also have a short-term "slashdot cache".

It may be possible to trace an insert across the network by probing datastores, to the point of being a few hops away from the target and thus perhaps being able to guess it by other attacks. But this relies on data still being in caches (which have very limited lifetimes in most cases), and on timing or overload attacks (or costly bloom filter transfers) to determine whether something is in a given datastore. It is a very expensive attack; the adaptive search is much faster and more efficient, but requires intercepting the data while it is being inserted (or requested). Encrypted tunnels will hopefully solve both problems, but will be expensive. Bundle routing would solve this problem and partly help against adaptive search; we have some options.
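The HTL rules above can be sketched as a small simulation. The constants come from the text; the code is illustrative, not the node's actual implementation.

```python
import random

MAX_HTL = 18
CACHE_HTL_REQUEST = 16  # caching begins once HTL has dropped to 16 on requests
CACHE_HTL_INSERT = 15   # ... and to 15 on inserts


def next_htl(htl: int, rng=random) -> int:
    """Decrement HTL by one per hop, except that at the maximum it only
    decrements with probability 0.5, so nodes near the originator cannot
    tell from the HTL whether the request started locally."""
    if htl == MAX_HTL and rng.random() < 0.5:
        return htl
    return htl - 1


def hops_until_cached(is_insert: bool = False, rng=random) -> int:
    """Number of hops a request/insert travels before any node on the
    path will cache the data in its datastore."""
    threshold = CACHE_HTL_INSERT if is_insert else CACHE_HTL_REQUEST
    htl, hops = MAX_HTL, 0
    while htl > threshold:
        htl = next_htl(htl, rng)
        hops += 1
    return hops
```

A request therefore travels at least two hops (more if the coin flips keep HTL at 18) before anything is cached, which is why seizing a nearby datastore no longer directly reveals local activity.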

Location Swapping and network topology

The location swapping algorithm as currently implemented is not secure. An attacker can damage the network through bogus swap requests. In fact, this happens naturally through nodes joining and leaving the network. So we have made the node randomly reset its location every few days. According to our math guru this should protect the network against natural degeneration. It probably won't protect against a malicious attack, for information see the Pitch Black paper. This is a key requirement for darknet to work securely, and fortunately we have some ideas for how to fix it.
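For reference, the swap acceptance rule (from Sandberg's Metropolis-Hastings-style algorithm, on which Freenet's swapping is based) can be sketched as follows. This is an illustrative sketch, not the node's code; the attack described above works because a malicious node can simply lie about the locations it feeds into this decision.

```python
import random


def dist(a: float, b: float) -> float:
    """Circular distance between two locations on the [0, 1) keyspace."""
    d = abs(a - b)
    return min(d, 1 - d)


def should_swap(loc_a, peers_a, loc_b, peers_b, rng=random):
    """Two nodes consider exchanging locations: always swap if it shrinks
    the product of distances to their respective peers, otherwise swap
    with probability (product before) / (product after)."""
    before = after = 1.0
    for p in peers_a:
        before *= dist(loc_a, p)
        after *= dist(loc_b, p)
    for p in peers_b:
        before *= dist(loc_b, p)
        after *= dist(loc_a, p)
    if after <= before:
        return True
    return rng.random() < before / after
```

Since the decision depends entirely on peer locations reported by the two endpoints, bogus swap requests with fabricated locations can drag honest nodes' locations toward an attacker-chosen region, which is essentially the Pitch Black attack.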

At the moment a large part of the topology is exposed through swapping: Because swap requests are routed randomly for 6 hops, there is no way of encrypting the exchange. Therefore any intermediate node can see the two nodes' locations and their peers' locations. So an attacker can see a big chunk of the network topology, although the data is out of date, more so the further away from his location you go.

Topology debugging/diagnostic tools

Probe requests: We provide a mechanism to request the closest node location greater than a specified value. A probe request is routed normally and returns the number of hops it took, the closest location found that is greater than the target, and the closest location found overall. At the moment it also returns hop-by-hop information on the locations, peers, and unique identifiers (see below) of the nodes the request passed through.
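The "closest location greater than the target" semantics, with wrap-around on the circular keyspace, can be sketched as follows (an illustrative helper, not the node's routing code):

```python
def closest_greater(locations, target):
    """Return the closest known node location strictly greater than
    `target`, wrapping around past 1.0 to the smallest location if
    nothing lies above the target, as a probe request would."""
    above = [loc for loc in locations if loc > target]
    if above:
        return min(above)
    return min(locations)  # wrap around the [0, 1) circle
```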

We have also added a unique identifier to each location in the swap requests. This will enable us to reconstruct the long-range structure of the network by watching the swap requests which pass through our node. Obviously an attacker can do the same analysis. However for short-range topology, this can already be done reasonably accurately, and it's the short range that is really interesting to an attacker (we hope). This code is disabled at the moment.

The objective of both of these mechanisms is to investigate the topology of the network in order to improve performance, diagnose problems etc. They will be removed before 1.0, and hopefully sooner. However, much similar information can be obtained on darknet from swapping.

Load balancing

Our current load balancing system does not fully suppress flooding attacks (fairness between peers helps limit them, but an attacker could probably still cause widespread backoff and misrouting), and because it involves feedback going back to the request originator telling them to slow down, it may give away some information about the originator. New load management will solve both of these problems, when it is eventually deployed.

Cancer nodes

Cancer nodes could return DNF to every request. We have implemented a partial solution with FailureTables.


Opennet

Opennet deserves an Opennet attacks page all of its own.

Censorship by participating nodes

A party controlling many nodes, or a larger group of Freenet users who want to censor certain content or are required to by local law, may be able to censor publicly known content on Freenet. They can build a catalogue of keys linking to the content (or directly fingerprint the data) which they refuse to relay. Incoming requests can be compared against such a blacklist; if a match is found, the connection to the requesting node can be terminated. For thorough censorship of the whole Freenet datastore, such a database of "bad keys" could run to terabytes in size and require special DPI equipment on each node. Matching based on keys is probably less computationally expensive, but it is also less reliable, because multiple keys can refer to the same data.
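A key-based blacklist check of the kind described might look like the sketch below. The example key and the SHA-256 fingerprinting scheme are hypothetical; the point is only that key matching is a cheap set lookup, while the same data remains reachable under any key not in the set.

```python
import hashlib

# Hypothetical blacklist of key fingerprints a censoring node refuses to relay.
BLACKLIST = {hashlib.sha256(b"CHK@example-banned-key").hexdigest()}


def censored(request_key: bytes) -> bool:
    """Cheap per-request check: fingerprint the requested key and look it
    up in the blacklist. Evadable, since several keys (e.g. a reinsert
    under a different key) can point at the same underlying data."""
    return hashlib.sha256(request_key).hexdigest() in BLACKLIST
```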


Spam

The Frost message boards system is not part of Freenet proper, but some Freenet users still use it, and it is *seriously* vulnerable to spam, as has recently been demonstrated. At this point most boards are flooded with invisible spam, an effective DoS attack on any chat system based on one KSK queue per board. Another chat system, FMS, does not have this problem, but it is written in C++ and therefore cannot be bundled with the node or integrated into Frost. A third chat system, Freetalk, is under development and will be integrated into the node. Both FMS and Freetalk support "negative trust", meaning that once a newbie announces himself (via captchas), he can be silenced by a few people saying he is a spammer. This has provoked many flamewars, and alternatives have been discussed and may be implemented; it is a tradeoff between more spam (e.g. everyone sees newbies and makes up their own mind) and more "community censorship" (newbies are easily blocked because they might be spammers).

All in all, Freenet 0.7's security leaves much room for improvement, but in many ways it is probably an improvement on 0.5. On a true darknet, whole classes of attacks are far more difficult, and there is the possibility of hiding the network and running it in hostile environments where 0.5 could have been blocked very cheaply.

See the History of attacks page for information on actual attacks which we think have been carried out.
