
Google Summer of Code 2011



Web interface, packaging, general usability

Improve the web interface generally
There is much to do to make it more user-friendly. We will hopefully have a set of mock-up designs soon. Many small issues are linked from this bug report, but many others are not, so please have a look around and check the mailing list archives too. A recent, professional (though not very detailed) mockup focusing on the homepage is here: http://emu.freenetproject.org/freenet-ui-mockup-pupok-1.pdf. An older, more detailed suggestion might also serve as inspiration.
Get the web-pushing mode working well
A previous Summer of Code student built the web-pushing AJAX/Comet/GWT system, which shows progress on pages with many images (to speed things up when browser connections are limited), live-updates the downloads page, and so on. It is rather buggy, especially on slow browsers, and tends to lock up; fixing it, improving it, and then extending it to other areas (e.g. Freetalk) would be an interesting project.
Freemail web interface
Freemail mostly works and has had some good bug fixes recently, but it currently requires an IMAP/SMTP email client. It would be better to have a web interface, and better still to tie it into the WoT; Freemail might need to reuse some components of Freetalk. Freetalk itself could also probably use significant UI improvements.
Bandwidth
Different bandwidth limits at different times of the day/week would help many users. We also need support for monthly transfer limits (separate from the existing peak per-second limits) and autodetection of the connection's capacity, possibly with options for very latency-sensitive behaviour, e.g. for gamers (as some BitTorrent clients do). All of this must be really easy to use.
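As a rough illustration (not existing node code), a time-of-day schedule could be as simple as a list of rules mapping a day and hour range to a limit, with the node asking the schedule which limit applies right now. All names here are hypothetical:

```java
import java.time.DayOfWeek;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: pick the bandwidth limit that applies right now,
// given user-configured rules (day of week + hour range -> bytes/sec).
public class BandwidthSchedule {
    public static class Rule {
        final DayOfWeek day;      // day the rule applies to
        final int startHour;      // inclusive, 0-23
        final int endHour;        // exclusive, 1-24
        final int bytesPerSecond; // limit while the rule is active
        Rule(DayOfWeek day, int startHour, int endHour, int bytesPerSecond) {
            this.day = day;
            this.startHour = startHour;
            this.endHour = endHour;
            this.bytesPerSecond = bytesPerSecond;
        }
    }

    private final List<Rule> rules = new ArrayList<>();
    private final int defaultLimit;

    public BandwidthSchedule(int defaultLimit) { this.defaultLimit = defaultLimit; }
    public void addRule(Rule r) { rules.add(r); }

    /** Returns the limit to apply at the given time; the first matching rule wins. */
    public int limitAt(LocalDateTime now) {
        for (Rule r : rules) {
            if (r.day == now.getDayOfWeek()
                    && now.getHour() >= r.startHour
                    && now.getHour() < r.endHour) {
                return r.bytesPerSecond;
            }
        }
        return defaultLimit;
    }

    public static void main(String[] args) {
        BandwidthSchedule s = new BandwidthSchedule(32 * 1024); // 32 KiB/s default
        // Throttle hard during weekday evenings when the connection is shared.
        s.addRule(new Rule(DayOfWeek.MONDAY, 18, 23, 8 * 1024));
        System.out.println(s.limitAt(LocalDateTime.now()) + " bytes/sec");
    }
}
```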

Plugins and applications

Freemail
Freemail works but is buggy, although it has recently received some attention. Much of it is good and correct, however: the channel design prevents many information leaks, in theory it retransmits until it succeeds, and we have IMAP and SMTP code that mostly works. It should integrate with the Web of Trust, should use CAPTCHAs or similar for introduction (or simply rely on WoT), and above all needs a good, easy-to-use web interface. This might be an extension of Freetalk reusing some existing code from Freemail.
A good filesharing/file search system
This should tie in with the Web of Trust, allowing users to publish indexes and search those of their anonymous friends, rate others' indexes, merge them into their own, set up long-term file searches, preload indexes for faster searches, and so on (a small data-structure sketch follows this item). It might also integrate with Freetalk to help with discussions on labelling or rating content. The problems of spam and deliberately corrupted content are very similar on Freenet to those on traditional p2p, although the solutions may differ, especially as it isn't possible to trace spammers; trusted, community-maintained indexes have developed as a working solution to these problems in web-based filesharing. Note that we already have a scalable, forkable, on-Freenet btree search system to use as a backend, but it is not yet used for anything, and it is neither distributed nor WoT-compatible.
Another interesting area for filesharing is a distributed, WoT-based way to download data by conventional hashes rather than CHKs, which could tie in with other networks; this is also related to the weird stuff (backups) at the bottom.
Secure reinsert-on-demand filesharing would improve the volume of content that is available. This is a lot harder than it sounds, and in any case we need searching first. (Reinserts using the same keys are seriously insecure, although getting the downloaders to do random inserts may help significantly.)
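A minimal sketch of the kind of data structure a WoT-backed shared index might use; the class names, the identity-string keys and the -1/0/+1 rating scale are all assumptions for illustration, not existing code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a shared file index: entries are keyed by the file's CHK,
// carry a human-readable name, and accumulate ratings from WoT identities.
public class SharedIndex {
    public static class Entry {
        final String chk;                      // Freenet key of the file
        final String name;                     // label supplied by the publisher
        final Map<String, Integer> ratings = new HashMap<>(); // identity -> -1/0/+1
        Entry(String chk, String name) { this.chk = chk; this.name = name; }
        int score() { return ratings.values().stream().mapToInt(Integer::intValue).sum(); }
    }

    private final Map<String, Entry> entries = new HashMap<>();

    public void add(String chk, String name) {
        entries.putIfAbsent(chk, new Entry(chk, name));
    }

    public void rate(String chk, String identity, int rating) {
        Entry e = entries.get(chk);
        if (e != null) e.ratings.put(identity, rating);
    }

    /** Merge a friend's index into ours, keeping their ratings alongside our own. */
    public void merge(SharedIndex other) {
        for (Entry e : other.entries.values()) {
            add(e.chk, e.name);
            entries.get(e.chk).ratings.putAll(e.ratings);
        }
    }
}
```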
Data retention plugin
Some very small changes at the node layer will allow probing whether a key is available from a random node. Users could then maintain a list of content that they care about; the plugin would download it all as a binary blob, regularly probe for reachability, and reinsert it when needed (possibly only the individual random blocks that are no longer fetchable). For this to work really well we might need selective-reinsert support in the client layer, but that isn't necessary for a basic implementation. What is important is supporting both files and sites, and letting users publish their lists and subscribe to others'. This could even evolve into a full-blown distributed backup system, with configurable encryption for files and the ability to recognise which files are popular and avoid inserting them (which might require fixing GetCHKOnly).
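Very roughly, the plugin's main loop might look like the following; ProbeClient and BlobStore are hypothetical placeholders for whatever node-layer probe API and blob storage end up being used, not real Freenet APIs:

```java
import java.util.List;

// Hypothetical sketch of a data retention plugin's main loop.
// ProbeClient.isFetchable(key) would ask whether a key is reachable from a random node;
// BlobStore would hold the binary blob downloaded when the content was first added.
public class RetentionLoop {
    interface ProbeClient { boolean isFetchable(String key); }
    interface BlobStore {
        List<String> keysFor(String item);      // all block keys belonging to an item
        void reinsert(String item, String key); // reinsert one block from the stored blob
    }

    private final ProbeClient probe;
    private final BlobStore blobs;
    private final List<String> watchedItems;    // files/sites the user cares about

    public RetentionLoop(ProbeClient probe, BlobStore blobs, List<String> watchedItems) {
        this.probe = probe;
        this.blobs = blobs;
        this.watchedItems = watchedItems;
    }

    /** One pass: probe each item's keys and reinsert the ones that have dropped out. */
    public void runOnce() {
        for (String item : watchedItems) {
            for (String key : blobs.keysFor(item)) {
                if (!probe.isFetchable(key)) {
                    blobs.reinsert(item, key);
                }
            }
        }
    }
}
```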
A microblogging and/or real-time chat system
Both of these would be implemented in a fairly similar way. Evan has done a fair amount of work on how to efficiently implement microblogging over Freenet. Sone does something like this but is fairly greedy with network resources; FLIP provides IRC over Freenet.
FCP libraries
We need good FCP (Freenet Client Protocol) libraries in more languages.
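For reference, a bare-bones FCP client is tiny. This sketch performs the FCPv2 ClientHello handshake against a node assumed to be running locally with FCP on its default port 9481:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;

// Minimal FCPv2 handshake: connect to the node, send ClientHello, print the NodeHello reply.
public class FcpHello {
    public static void main(String[] args) throws Exception {
        try (Socket sock = new Socket("127.0.0.1", 9481);
             Writer out = new OutputStreamWriter(sock.getOutputStream(), "UTF-8");
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(sock.getInputStream(), "UTF-8"))) {
            // FCP messages are a name line, Field=Value lines, then EndMessage.
            out.write("ClientHello\n");
            out.write("Name=ExampleClient\n");
            out.write("ExpectedVersion=2.0\n");
            out.write("EndMessage\n");
            out.flush();

            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
                if (line.equals("EndMessage")) break; // end of the NodeHello reply
            }
        }
    }
}
```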
Easy-to-use tools
We need simple tools for inserting freesites (Freenet-hosted web sites) and files. We already have a blogging tool, but it needs more work, and tools that make it easy to insert existing content would also be useful. Such a tool should support uploading files of any size, should avoid re-uploading larger files on every update (but be configurable to do so on a schedule), should work from within the Freenet web interface as a plugin, and could support WebDAV uploads directly from authoring software. The ability to mirror content from the web would also be useful.
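Building on the handshake above, a small file can be inserted with a ClientPut message. This is only a sketch; the exact field set should be checked against the current FCP documentation before relying on it:

```java
import java.io.*;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Rough sketch of inserting a small file as a CHK over FCP, after the ClientHello handshake.
public class FcpPutFile {
    public static void main(String[] args) throws Exception {
        byte[] data = "Hello, Freenet!".getBytes(StandardCharsets.UTF_8);
        try (Socket sock = new Socket("127.0.0.1", 9481)) {
            OutputStream raw = sock.getOutputStream();
            Writer out = new OutputStreamWriter(raw, StandardCharsets.UTF_8);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(sock.getInputStream(), StandardCharsets.UTF_8));

            out.write("ClientHello\nName=ExamplePut\nExpectedVersion=2.0\nEndMessage\n");
            out.flush();
            waitFor(in, "NodeHello");

            out.write("ClientPut\n");
            out.write("URI=CHK@\n");                      // let the node compute the CHK
            out.write("Identifier=example-put-1\n");
            out.write("Metadata.ContentType=text/plain\n");
            out.write("UploadFrom=direct\n");
            out.write("DataLength=" + data.length + "\n");
            out.write("Data\n");                          // payload follows immediately
            out.flush();
            raw.write(data);
            raw.flush();

            waitFor(in, "PutSuccessful");                 // prints node messages until done
        }
    }

    private static void waitFor(BufferedReader in, String message) throws IOException {
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
            if (line.equals(message)) return;
        }
    }
}
```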
Scalable fork-and-merge distributed revision control over Freenet
This would integrate the new scalable on-Freenet b-trees from the new Library format by infinity0, in order to scale up to at least Wikipedia size (to implement a wiki over Freenet using a fork-and-merge model). It would tie in closely with the Web of Trust (the trust network backing Freetalk), integrating with its identities, announcing forks, and allowing users to easily see changes in other forks and integrate them. The most obvious use for this is a wiki over Freenet (note that, because of spam and denial-of-service attacks, instant anonymous editing of a wiki on Freenet is not possible). It might also be useful for distributed spidering of Freenet, for source code (e.g. if we want to deploy a new build only after a certain number of people we trust have signed it, and then build it from source), or for anything that needs a forkable database over Freenet. You might also need to optimise the btrees' data persistence, e.g. by including the top-level metadata for each chunk in the next layer up. Scalable fork-and-merge databases are closely related to this.
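The core merge operation such a system needs is an ordinary three-way merge over whatever the forkable unit is (e.g. wiki pages). A simplified sketch over a page-name to content map, with illustrative names only:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of a three-way merge over a page-name -> content map, the core operation
// a fork-and-merge wiki would need.
public class ThreeWayMerge {
    /** Merge two forks against their common ancestor; conflicting pages are returned separately. */
    public static Map<String, String> merge(Map<String, String> base,
                                            Map<String, String> ours,
                                            Map<String, String> theirs,
                                            Set<String> conflicts) {
        Map<String, String> result = new HashMap<>();
        Set<String> keys = new HashSet<>();
        keys.addAll(base.keySet());
        keys.addAll(ours.keySet());
        keys.addAll(theirs.keySet());
        for (String key : keys) {
            String b = base.get(key), o = ours.get(key), t = theirs.get(key);
            if (equal(o, t)) {                       // both sides agree
                if (o != null) result.put(key, o);
            } else if (equal(b, o)) {                // only they changed it
                if (t != null) result.put(key, t);
            } else if (equal(b, t)) {                // only we changed it
                if (o != null) result.put(key, o);
            } else {
                conflicts.add(key);                  // both changed it differently
            }
        }
        return result;
    }

    private static boolean equal(String a, String b) {
        return a == null ? b == null : a.equals(b);
    }
}
```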
Better freesite searching
Lots of work has been done on this, but more could be done: using the new Library format, rewriting the indexes on the fly after gathering a few hours' data rather than writing them from the database over a week, support for long-term searches, Web of Trust integration, better support for stop-words (perhaps aggregating them with common preceding/following words), tokenisation for tricky languages (Chinese, Japanese), distributing spidering across multiple users (as scaling is becoming a serious problem now), etc.
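One way to handle stop-words without losing phrase searchability is to index them as pairs with the following word rather than dropping them. A toy sketch of that idea; this is not part of the existing Library code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Emit common words as word-pair terms instead of dropping them,
// so phrases containing them remain searchable.
public class StopWordTokenizer {
    private static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("the", "a", "of", "to", "and", "in"));

    public static List<String> terms(String text) {
        String[] words = text.toLowerCase().split("\\W+");
        List<String> terms = new ArrayList<>();
        for (int i = 0; i < words.length; i++) {
            if (words[i].isEmpty()) continue;
            if (STOP_WORDS.contains(words[i])) {
                // Pair the stop-word with the following word instead of indexing it alone.
                if (i + 1 < words.length && !words[i + 1].isEmpty()) {
                    terms.add(words[i] + " " + words[i + 1]);
                }
            } else {
                terms.add(words[i]);
            }
        }
        return terms;
    }

    public static void main(String[] args) {
        System.out.println(terms("The free network project"));
        // -> [the free, free, network, project]
    }
}
```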
Wiki over Freenet
A wiki over Freenet would be really awesome; in fact, it could be a killer app. But it is not easy to implement, as there are several challenges (you can learn more there). There have been many attempts: some are hard to use and based on DSCMs, others are easier to use but not scalable.

Client layer

More content filters
We have to "filter" HTML, images, etc. to ensure that they are safe for the web browser and won't give away the user's IP address via inline images, scripting and so on. Finishing the SVG filter written for 2009, implementing support for SVG embedded in XHTML embedded in ATOM (we have an ATOM filter but it is not integrated yet), and maybe an RSS filter, would all be useful. More audio and video formats would be very helpful (particularly WebM and H.264), and together with HTML5-based video playback support this could make embedded video almost viable. Last year's SoC included the beginnings of a JavaScript player, but it is far from what it could be: making it really viable would require deeper changes related to fetching data in order, access to partially downloaded content, and possibly an applet to show which parts have been downloaded and maybe to display the formats that we support (likely Ogg) in browsers that don't support them. See here for more on embedded video playback: https://bugs.freenetproject.org/view.php?id=4038. PDF would be very valuable, but the spec is huge; however, minimal sufficient functionality is believed to be not *so* huge. ODF is similarly a possibility but again is gigantic. JavaScript is an option for JS geniuses (create a safe API and then force the JS sent to the browser to only use that API; please talk to us in detail, as there are many side-issues with any sort of "safe scripting", though we have some ideas about safe APIs, either based on only being able to access keys related to specific, manually approved users, or on fixing the fetch times).
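To illustrate the whitelist principle the filters follow (everything not known to be safe is removed), here is a deliberately tiny toy filter. The node's real ContentFilter is far more thorough; this is not its API:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy whitelist filter: unknown tags are removed and all attributes are stripped,
// so the page cannot load external resources or run scripts that could leak the user's IP.
public class ToyHtmlFilter {
    private static final Set<String> SAFE_TAGS =
            new HashSet<>(Arrays.asList("p", "b", "i", "h1", "h2", "ul", "li", "br"));
    private static final Pattern TAG = Pattern.compile("</?([a-zA-Z0-9]+)[^>]*>");

    public static String filter(String html) {
        Matcher m = TAG.matcher(html);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String tag = m.group(1).toLowerCase();
            // Keep only whitelisted tags, rewritten without any attributes.
            String replacement = SAFE_TAGS.contains(tag)
                    ? (m.group().startsWith("</") ? "</" + tag + ">" : "<" + tag + ">")
                    : "";
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(filter("<p onclick=\"evil()\">Hi <img src=\"http://x/y.png\">there</p>"));
        // -> <p>Hi there</p>
    }
}
```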

Low to mid level stuff

Much more transport layer stuff
The current transport layer has been improved significantly but leaves much room for improvement. Ideally we'd like to detect available bandwidth automatically. We have nothing remotely like Path MTU discovery; we should automatically adapt to find the right packet size for the connection, both finding what will work at all and what gives the best bandwidth/loss trade-off. We tend to get transfer failures on slow connections (not a low bandwidth limit, but low bandwidth actually available on that specific connection). We should probably use something like cumulative acks: currently each packet is acked exactly once, whereas it should be possible to ack the same packet twice at zero bandwidth cost in many cases by using ranges (see the sketch below). We may want to divide up blocks differently depending on how fast the connection is. We may want to make trade-offs between fairness to all peers (the current policy) and allowing individual peers more bandwidth for a short period (e.g. because they have requested a bunch of fproxy pages), or have "idle priority" traffic which is only sent when *no* peer wants to send anything (e.g. bloom filter sharing), which may also affect packet size. And so on. Generally, the transport layer needs to be more robust, especially on slow connections, and it needs to feed information into the load management layer more quickly, so that we only accept requests we can complete in a reasonable time given the current state of the connection. There are various bugs about this on the bug tracker. Running as well as possible on fairly slow connections is particularly useful in some of the places where Freenet may be needed most.
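As a sketch of the range-based ack idea (illustrative only, not the node's packet format), acknowledged sequence numbers can be kept as merged ranges, so re-acking an already covered packet costs nothing extra on the wire:

```java
import java.util.Map;
import java.util.TreeMap;

// Acked packet numbers stored as non-overlapping [start, end] ranges keyed by start.
public class AckRanges {
    private final TreeMap<Integer, Integer> ranges = new TreeMap<>();

    public void ack(int seq) {
        Map.Entry<Integer, Integer> floor = ranges.floorEntry(seq);
        if (floor != null && floor.getValue() >= seq) return;  // already covered
        int start = seq, end = seq;
        if (floor != null && floor.getValue() == seq - 1) {    // extend the previous range
            start = floor.getKey();
        }
        Map.Entry<Integer, Integer> next = ranges.ceilingEntry(seq + 1);
        if (next != null && next.getKey() == seq + 1) {        // merge with the following range
            end = next.getValue();
            ranges.remove(next.getKey());
        }
        ranges.put(start, end);
    }

    /** A handful of (start, end) pairs can cover thousands of acked packets. */
    @Override
    public String toString() { return ranges.toString(); }

    public static void main(String[] args) {
        AckRanges a = new AckRanges();
        for (int seq : new int[]{1, 2, 3, 5, 4, 3}) a.ack(seq);
        System.out.println(a); // {1=5}
    }
}
```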
Transport plugins
Currently Freenet only supports UDP. Make it able to use TCP, HTTP, and various steganographic transports (e.g. hiding in VoIP). Freenet should provide all the heavy lifting (crypto etc.); it should be *EASY* to write a transport plugin: just register it with the appropriate type, give the block size and so on, and Freenet will do the rest. Last year's work on the new packet format should help a lot, although some transports (those with really small packets, e.g. pretending to be Skype) will still need their own splitting/reassembly (this should probably happen within the node too, although it should be possible to turn it off).
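To make the "it should be easy" point concrete, a transport plugin API might be as small as the following; this interface is purely hypothetical and does not correspond to any existing Freenet API:

```java
// Hypothetical minimal transport plugin interface. The plugin only moves opaque
// byte blocks; the node keeps doing the crypto, retransmission and splitting/reassembly.
public interface TransportPlugin {
    /** A short identifier such as "udp", "tcp", "http" or "voip-stego". */
    String transportName();

    /** Largest block this transport can carry in one unit (may be tiny for steganographic transports). */
    int maxBlockSize();

    /** Send one opaque, already-encrypted block to the given peer address. */
    void sendBlock(String peerAddress, byte[] block) throws java.io.IOException;

    /** Called by the plugin when a block arrives; the node handles everything above this. */
    interface Listener { void onBlockReceived(String peerAddress, byte[] block); }
    void setListener(Listener listener);
}
```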
Simulators
Simulating different load management mechanisms would be particularly useful. Simulations of various attacks are also a very important area.
It might also be worth supporting networking protocols with huge delays, e.g. for packet radio (and of course Freenet nodes on Mars ;-)).

Friend to friend stuff

More F2F functionality
Hopefully, by the time of the SoC we will have much easier darknet peer adding, invites, etc., but we need more functionality: various forms of easy-to-use chat, possibly realtime, allowing conversations across nodes, both within the web interface (using JavaScript/AJAX) and via external clients such as Jabber/XMPP, possibly with voice support; easy-to-use, reliable file transfers; labelling downloaded files and bookmarks so that they are visible to friends of a particular trust level; searching these file lists and transferring the files; possibly automatically fetching files both from friends and from Freenet; virtual LAN (Hamachi-style) functionality; and social-networking-style functionality with very careful privacy protections. After all, the friend-to-friend darknet is literally a social network; we should make it as useful as possible without jeopardising privacy.