Commit

oops
chrisdew committed Apr 5, 2012
1 parent 9a9b046 commit 0606aa9
Showing 2 changed files with 31 additions and 9 deletions.
28 changes: 22 additions & 6 deletions README.md
@@ -3,6 +3,8 @@ Notice

This project is under heavy development and does not yet do anything other than pass some tests, and make the DeltaQL Bootstrap project work.

I'm currently working on the DeltaQL site web application, which will showcase many different queries across a half-dozen pages.


DeltaQL
-------
@@ -14,8 +16,8 @@ The results of a query, displayed in a web page. They change when (and only when)
Welcome to DeltaQL - no more F5, no more polling the DB.


Using Delta QL
--------------
Using DeltaQL
-------------

DeltaQL systems begin with Silos. These are simply unordered sets of rows, similar to a database.

@@ -47,13 +49,15 @@ A TCP link is as simple as:
// silo process
var silo = new Silo();
var users = silo.filter(function(row) { return row.table === 'users'; });
silo.listen(1234, '0.0.0.0');
users.listen(1235, '0.0.0.0');
silo.listen(1234, '127.0.0.1');
users.listen(1235, '127.0.0.1');

// web server process
var loggedIn = remoteRSet(1235, '127.0.0.1').filter(function(row) { return row.loggedIn; });
var numLoggedIn = loggedIn.count();

As demonstrated above, frequently used filters can be placed in the silo process to help limit unnecessary traffic over the wire.

All updates are done to Silos, never to intermediate results; otherwise, observing components further 'up' the tree (more accurately, a DAG) would not see the change.


@@ -80,7 +84,7 @@ DeltaQL operates in memory. It has hooks to add persistence to MySQL, Postgres

The two hooks are a 'save state' (i.e. on controlled shutdown) and a 'delta log' to recover data after a power outage.

The only built-in Persistor is the simple, but inefficient, JsonFilePersistor. Any others will need to be written (for MySQL, Postgres, etc.) but it's not hard.
Currently the only Persistor is the simple, but inefficient, JsonFilePersistor.
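
Below is a minimal sketch of wiring a persistor to a silo, based on the JsonPersistor(parent, filename) constructor in lib/jsonpersistor.js; it assumes the JsonFilePersistor mentioned above is the exported JsonPersistor, and the require paths and log location are guesses, not documented API.

// sketch only - require paths and filenames are assumptions
var Silo = require('./lib/silo').Silo;
var JsonPersistor = require('./lib/jsonpersistor').JsonPersistor;

var silo = new Silo();
// the persistor takes its parent RSet and a delta-log filename
var persistor = new JsonPersistor(silo, '/tmp/deltaql-deltas.log');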


Origins
@@ -108,7 +112,7 @@ This may be useful if you start diving into the code.
* LivePage - any web page containing one or more ResultLists.
* Filter - a transform from a ResultSet to a possibly smaller ResultSet
* Sort - a transform from a ResultSet to a ResultList
* Head - a transform from a ResultList to a possibly smaller ResiltList
* Head - a transform from a ResultList to a possibly smaller ResultList
* Tail - a transform from a ResultList to a possibly smaller ResultList


@@ -122,7 +126,19 @@ Some of the aspects of the project which could do with major improvement include
* Testing - there is a test suite, but coverage is very far from 100%
* Examples - it would be good to have a whole directory of examples, rather than just a couple of example projects.
* IE - I'm not putting any effort into making anything work on IE, yet. I expect to support IE8+ later.
* Persistors - Add code to save state in MySQL, Postgres.
* Protocol adapters, e.g.
* Adapt an in-process SMTP server to make email available as a Silo.
* How about IRC channels as Silos?
* Perhaps a DNS server, serving data from a Silo, for locating RemoteRSet's ports and addresses via srv records?
* Twitter/RSS feeds as Silos?
* Failover remote Silos - at some point there will need to be master/slave replication for when a remote Silo's hardware breaks. Could instead go with a more fine-grained, event-oriented multi-master system.
* Webserver Affinity - at the moment the system relies on clients connecting back to the same server when using SocketIO. For now I have disabled all awkward transports, such as websockets, as they break this design when used with a load balancer. This needs to be solved, either by better session affinity or by storing dqlSessions in a remote Silo (rather than keeping them in-process).
* API documentation - that would be nice.
* Use https://github.com/chrisdew/multicast-eventemitter for RemoteRSets after the initial state has been gained via TCP - this would be hugely more efficient on local clusters. (We could add a sequence number to the multicast packets, and trigger a full reload (after a random interval, to stop thundering herds) via TCP if a sequence number was missed.)
* Add some geospatial filters, e.g. isWithinPolygon() or isWithinXMetres().
* Add some chronologically aware filters, e.g. arrivedWithinX() and rowFieldNotOlderThanX() - these require setTimeouts to be set to remove rows X after they arrive (or a similar condition), rather than polling (a sketch follows this list). I've done this before, so I know it's practical. I once (due to a bug) had a result set which introduced a 5 hour delay in sending data to the browser, due to a MySQL backlog. NodeJS is very resilient with regard to callbacks and timeouts.
* Time should be milliseconds since epoch UTC, everywhere - let your browser-side code deal with its display (in localtime).
* Safety - at the moment most updates are done through simple event emitters. These provide no feedback on (for example) failure to write to disk.
* Efficiency - e.g.
* Pushing Filters Upstream - if a filter function captures no variables, and it has a remote parent, then it can be pushed across a network connection to reduce the number of ops sent over the wire (also sketched after this list).
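
Below is a minimal sketch of the setTimeout-based row expiry mentioned in the chronological-filters item above; the removeRow callback is a hypothetical stand-in, not a DeltaQL API.

// sketch only: expire a row X ms after it arrives, instead of polling
// 'removeRow' is a hypothetical callback, not part of DeltaQL
function expireAfter(row, ms, removeRow) {
  setTimeout(function() { removeRow(row); }, ms);
}

// usage: drop rows from an arrivedWithinX()-style result 60s after arrival
expireAfter({ id: 42, arrived: Date.now() }, 60000, function(row) {
  console.log('expiring row', row.id);
});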
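
And a sketch of pushing a closure-free filter upstream; serialising the function source is one plausible mechanism, an assumption on my part rather than the project's planned wire format.

// sketch only: send a filter that captures no variables across the wire
function serialiseFilter(fn) {
  return fn.toString();   // only safe when fn has no free variables
}

function deserialiseFilter(src) {
  return new Function('return (' + src + ');')();   // rebuild on the silo side
}

var src = serialiseFilter(function(row) { return row.loggedIn; });
var pred = deserialiseFilter(src);
console.log(pred({ loggedIn: true }));   // true
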
12 changes: 9 additions & 3 deletions lib/jsonpersistor.js
@@ -4,7 +4,7 @@ var util = require('util')

exports.JsonPersistor = JsonPersistor;

function JsonPersistor(parent, path) {
function JsonPersistor(parent, filename) {
this.parent = parent;
this.filename = filename;
this.rows = {};
@@ -15,12 +15,18 @@ function JsonPersistor(parent, path) {
this.processSop({sop:'state',rows:parent.getRows()});
}

this.writeStream = fs.WriteStream(this.path + "/" + this.FILENAME,
this.writeStream = fs.WriteStream(this.filename,
{ flags: 'w', encoding: 'utf8', mode: 0666});
}

util.inherits(JsonPersistor, rs.RSet);

JsonPersistor.prototype.processSop = function(sop) {
// append each sop to the delta log as a line of JSON
this.writeStream.write(JSON.stringify(sop) + '\n');
}

JsonPersistor.getSilo = function(filepath) {
// replay the logged sops into a fresh Silo (assumes Silo, like RSet, has processSop)
var silo = new Silo();
var data = fs.readFileSync(filepath, 'utf8');
var lines = data.split('\n');
lines.forEach(function(line) {
if (line) silo.processSop(JSON.parse(line));
});
return silo;
}
