I only came across the project today, so please bear with me :)
I'm interested in the definition of the HTTP interface, and would appreciate any relevant pointers (I've found the Concepts and JSON pages on the Wiki so far).
I'm interested not least because I've got a project under construction that I'd like to set up as an SFW peer. By that I mean that although the front-end and UI may well be very different from the SFW system's, at minimum I want it to support the HTTP/JSON aspects, so that from the viewpoint of another SFW it looks like a regular peer and supports the read/write operations.
A slightly tangential observation: the JSON messages look very similar to RSS - inevitable, I suppose, in that they're both bare-bones versions of typical Web pages. But it means that, looked at from the right (rather twisted) angle, many existing sites already fulfill part of the SFW contract: shove the RSS through XSLT (or whatever) to get the JSON. Kind of.
Anyhow, the way I fancy approaching this is by mapping between the SFW messages and SPARQL 1.1 queries, which support reading and writing to an RDF store over HTTP. I think it will be pretty straightforward to set up a "headless" peer using the same approach I've used in , essentially a bit of HTTP middleware to template input (query) and output. I've not had a close look at how SFW works, but I get the feeling that given the server-side components it might be straightforward to overlay the browser-side components, hence making a semi-port, if you see what I mean...
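To make the mapping idea concrete, here's a rough sketch of what the "headless peer" middleware might do on the write path: take a page in the wiki's JSON form and template it into a SPARQL 1.1 INSERT DATA update. The `http://example.org/vocab#` predicates are invented for illustration - a real mapping would pick or mint a proper RDF vocabulary - and the page fields assumed here (`title`, `story`, per-item `id` and `text`) are just the obvious ones, not a spec.

```python
import json

def page_to_sparql(page_uri, page):
    """Sketch: render a wiki page (as a dict parsed from its JSON)
    into a SPARQL 1.1 INSERT DATA update string.

    The ex: vocabulary below is made up for illustration only.
    json.dumps is used to produce safely escaped string literals.
    """
    triples = ['<%s> <http://example.org/vocab#title> %s .'
               % (page_uri, json.dumps(page["title"]))]
    for item in page.get("story", []):
        # Each story item keeps its own id, so it gets its own URI.
        item_uri = "%s#%s" % (page_uri, item["id"])
        triples.append('<%s> <http://example.org/vocab#partOf> <%s> .'
                       % (item_uri, page_uri))
        triples.append('<%s> <http://example.org/vocab#text> %s .'
                       % (item_uri, json.dumps(item.get("text", ""))))
    return "INSERT DATA {\n  %s\n}" % "\n  ".join(triples)
```

The inverse direction (read) would be a CONSTRUCT or SELECT query templated the same way, with the results re-serialized as the page JSON.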
One point that isn't clear to me, with a Linked Data hat on - how are the IDs determined? Or rather, given a raw ID and the URL of an arbitrary SFW peer (and the rest of the WWW :) how does one get to the Story?
Right now the client expects remote servers to accept CORS requests for JSON files organized as Federated Wiki pages.
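For anyone building a peer against that expectation, the rough shape of such a page as JSON looks something like the following. This is an illustrative sketch from a typical page, not a schema: the `title`/`story` fields (and a `journal` of edit history) are the convention, but the exact item types and fields vary.

```python
# Illustrative shape of a Federated Wiki page served as JSON.
# A page has a title, a "story" (the list of content items, each
# carrying its own id), and typically a "journal" of past edits
# (structure omitted here rather than guessed at).
page = {
    "title": "Welcome Visitors",
    "story": [
        {"type": "paragraph",
         "id": "7b56f22a4b9ee974",
         "text": "Welcome to this Federated Wiki site."}
    ]
}
```

A CORS-friendly peer just needs to serve documents of this shape with the appropriate `Access-Control-Allow-Origin` header so the client-side JavaScript can fetch them cross-origin.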
There is currently some further interpretation of the client-to-origin-server URI in order to distinguish drag-and-drop of Federated Wiki pages from one server to another. I'm not happy with this second requirement because it seems to have one server inappropriately depending on the particulars of a remote server's conversation with its client and on the browser's page-history mechanism.
This issue suggests ways that federated wiki could interact with other services. All good ideas I'm sure.
One unanswered question was how stories are identified at the server level. The answer is that they have a sanitized key, constructed from the page title, that retrieves the whole page in JSON. The items in a page have ids that are used for tracking moves. They are not guaranteed to remain on a given page or even on a given server. It might be possible to conduct a search over some neighborhood for pages that contain paragraphs of a given id. This has not been demonstrated.
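The title-to-key sanitization can be sketched roughly like this - a guess at the behavior (spaces to dashes, other punctuation dropped, lowercased), not the actual server code:

```python
import re

def as_slug(title):
    # Sketch of the sanitized-key construction: whitespace becomes
    # dashes, anything outside [A-Za-z0-9-] is dropped, and the
    # result is lowercased. The real implementation may differ.
    slug = re.sub(r'\s', '-', title)
    slug = re.sub(r'[^A-Za-z0-9-]', '', slug)
    return slug.lower()
```

So a page titled "Welcome Visitors" would be fetched as something like `/welcome-visitors.json`, under these assumptions.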
Without further discussion I will consider this issue closed.