When we follow a link to a non-existent page (a red link, in Wikipedia terminology), the Sinatra version of the server makes a new page on the spot. This should really be handled in the client code in response to a 404. This has several advantages:
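A minimal sketch of that client-side behavior, assuming an injected transport and a simple page shape (the function names, the `.json` URL form, and the blank-page structure are illustrative, not the actual client code):

```javascript
// Sketch: when a red link 404s, build a blank page on the client
// instead of having the server create one. The page shape and the
// injected `get` transport are illustrative assumptions.
function pageOrBlank(slug, status, page) {
  if (status === 404) {
    // Red link: create the page locally; it only reaches storage
    // when the user first edits it.
    return { title: slug.replace(/-/g, ' '), story: [] };
  }
  return page;
}

function openPage(slug, get, render) {
  get('/' + slug + '.json', function (status, page) {
    render(slug, pageOrBlank(slug, status, page));
  });
}
```

One advantage falls out immediately: the server never has to mutate anything just because someone followed a link.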
This is an excellent idea. In addition, I wonder if this is part of a larger move to give the client more responsibility.
Sven was (rightfully) complaining about having to run the same templates on multiple different versions of the server. I wonder if we could differentiate between two types of servers:
In this way, you might have a public web server that you maintain for others to access. You have an embedded server that just pushes json data. You log in to the web server and use its client, linking to a remote page on the embedded server, and then pulling that data into the version on your web server.
This greatly simplifies the task of building servers that can do interesting things with data, without having to duplicate any code necessary to support the web client.
If this is the direction we want to go in, I might suggest stepping back and reorganizing the client code. My recent commits have added features without substantially restructuring things, and it is getting a bit hard to follow. My goal would be to separate the overall miller column UI, the rest API, and the plugins so they know less about each other. It would also be nice to improve localStorage support so that you could bookmark a wiki as an "app" on a mobile device, and save all the pages locally so you can read it offline, then push updates when you come back online.
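The offline idea might sketch out like this, assuming the standard Web Storage API: keep a local copy of every page you touch, remember which ones changed while offline, and push them when connectivity returns. The key names and the `pushPage` hook are invented for illustration.

```javascript
// In-memory fallback so the sketch also runs outside a browser.
var store = (typeof localStorage !== 'undefined') ? localStorage : (function () {
  var data = {};
  return {
    getItem: function (k) { return data.hasOwnProperty(k) ? data[k] : null; },
    setItem: function (k, v) { data[k] = String(v); }
  };
})();

var offline = {
  // Save a local copy of a page under a namespaced key.
  save: function (slug, page) {
    store.setItem('page:' + slug, JSON.stringify(page));
  },
  load: function (slug) {
    var json = store.getItem('page:' + slug);
    return json ? JSON.parse(json) : null;
  },
  // Record that a page changed while offline.
  markDirty: function (slug) {
    var dirty = JSON.parse(store.getItem('dirty') || '[]');
    if (dirty.indexOf(slug) < 0) dirty.push(slug);
    store.setItem('dirty', JSON.stringify(dirty));
  },
  // Push every dirty page through the caller-supplied hook, then reset.
  sync: function (pushPage) {
    var dirty = JSON.parse(store.getItem('dirty') || '[]');
    dirty.forEach(function (slug) { pushPage(slug, offline.load(slug)); });
    store.setItem('dirty', '[]');
  }
};
```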
Yes, client is overdue for some refactoring. My hope has been to get useful federation functionality working before worrying too much about filling out the client. Abstracting the localStorage is part of the federation story. I've added some BDD style use-cases to the Federation Details. I don't say localStorage in them but I was thinking that. Local storage is a "place you own". But if you're logged into a federated wiki server then that is a second place you own.
Ok, well let me get my server up and running and I'll start actually using it to organize some of my research. Then when we have federation working and I have a large enough set of experience using the wiki to do real work, we'll know a bit more about what it wants to be.
Is this issue something that should get resolved before then? It should be quite easy.
How about this: if it slips in nicely, then go for it. If it requires a lot of re-thinking, let's hold off because we don't want this to interfere with getting federation right. But then, if the current scheme gets in the way of federation, then re-thinking will be motivated by federation thinking. That way we're opportunistic but still paying attention to the roadmap.
It might be worth discussing what the general organization of the client side might look like eventually. I had this conversation with Stephen Judkins over the weekend. We came up with a four-layer organization based on philosophical arguments. We suggested a "model" layer with two screen facing interface layers above it and one server facing interface layer below it. This is what it looks like:
code for interacting with paragraphs and other story items through plugins and various jquery-like interactions such as sortable and textEdit.
code for building and interacting with dynamic wiki pages and other browser interactions such as history. This layer might sequence callbacks while loading rendering libraries like d3 or mathjax.
code that models the known state of continuously changing pages under the influence of layers above and below. This layer isolates upward facing layers from the downward facing layer. Decision making relating to ambiguous situations and late or erroneous results resides here. This could include a scheduler with a work queue.
code for managing interaction with page storage in local memory and on servers. Transient errors may or may not be hidden from other layers.
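The four layers above might sketch out as module boundaries something like this (every name here is invented for illustration; today's client keeps everything in one file):

```javascript
// Sketch of the four-layer split as module boundaries. All names and
// page shapes are invented for illustration.

// Bottom layer: page storage, local and remote. The stand-in here
// just fabricates a page synchronously.
var storage = {
  get: function (slug, done) { done(null, { title: slug, story: [] }); },
  put: function (slug, page, done) { done(null); }
};

// Model layer: the known state of continuously changing pages. It
// isolates the UI layers from storage and is where decisions about
// late or erroneous results would live.
var model = {
  pages: {},
  fetch: function (slug, done) {
    var cached = this.pages[slug];
    if (cached) return done(null, cached);
    var self = this;
    storage.get(slug, function (err, page) {
      if (!err) self.pages[slug] = page;
      done(err, page);
    });
  }
};

// Page layer: dynamic wiki pages and browser interactions such as
// history; would also sequence loading of rendering libraries.
var pageUI = {
  open: function (slug) {
    model.fetch(slug, function (err, page) { /* build the page DOM */ });
  }
};

// Top layer: story items and plugins (sortable, textEdit, ...).
var itemUI = {
  emit: function (item, div) { /* hand the item to its plugin */ }
};
```

The payoff of the middle layer shows up in the cache: the page and item layers never notice whether a fetch was answered from memory, localStorage, or a remote server.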
How this code might be distributed within one or more files was not discussed. Nor were the services that might be provided by custom-built model objects or standard libraries such as backbone.js. My preference would be to organize the structure of a single client.js along the above lines as a starting point.
I'm about to move new page creation to the client code. I'm forging ahead on this because, when following internal links, I need to know which of several servers has the page.
I now GET from each possible server in preferred order and take the first one that doesn't 404. This appears to work great, but it deserves lots of testing.
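The cascade can be sketched as a try-next-server loop (the function names, the injected `get` transport, and the callback shapes are illustrative, not the actual client code):

```javascript
// Sketch: GET the page from each candidate server in preferred order
// and resolve with the first response that isn't a 404. The injected
// `get` transport is an illustrative assumption.
function resolvePage(slug, servers, get, done) {
  if (servers.length === 0) return done('not found', null, null);
  var origin = servers[0];
  get(origin + '/' + slug + '.json', function (status, page) {
    if (status === 200) return done(null, origin, page);
    // 404 (or any other failure): fall through to the next server.
    resolvePage(slug, servers.slice(1), get, done);
  });
}
```

Because the requests go out one at a time, a page on a preferred server always wins over the same slug elsewhere; issuing them in parallel would be faster but would need explicit tie-breaking.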
It's going to be a big commit. I've done it without major code reorganization. It has been a real mind-bender because the order in which information is learned doesn't align nicely with the current control flow.
(I'm following the implementation strategy enumerated in this comment. The last bullet is almost done.)
Very nice. I actually started on this last night but my solution was going nowhere so I decided to sleep on it... I guess that solved it :)
Well, I've got this working but now 10 tests fail. Darn.
It looks the same on the screen.
One small difference is that the old server would create an empty page even if you didn't have write permission. With client-side new page creation that page gets created in local storage. Sweet.
I'll keep working on it.
Now have this working through a series of commits ending at 9024dc1.