I used those tools to develop this app (see it live!). You can point this app at any dataset, and it will semi-automatically provide categorical data pivoting and filtering. (You will need to modify the metadata object in js-react/metadata.js, but that is straightforward.)
To play with this app, first install the Babel transpiler, add the es2015 and react presets and the transform-object-rest-spread plugin (look here for more information), and then run these commands to generate the targeted js files:
$ cd ...place-where-app-lives
$ babel js-react --watch --out-dir js
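For Babel to pick up those presets and the plugin, you will need a config file. The repo's actual config is not shown here, but a minimal `.babelrc` for the components named above (Babel 6 era, matching the es2015 preset) would look like:

```json
{
  "presets": ["es2015", "react"],
  "plugins": ["transform-object-rest-spread"]
}
```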
Next, make sure that MongoDB is installed on your server and has an instance running at localhost:27017. You can then load up your mongodb database (from the CSV files in this repo) like this:
$ cd ...place-where-app-lives/js
$ node loader

which should copy all of the CSV files from ./data into the mongodb instance.
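The loader's essential job is turning CSV rows into documents that the mongodb driver can insert. The real loader script is not reproduced here; this is a hedged sketch of just the parsing step, assuming simple unquoted CSV:

```javascript
// Minimal sketch of the transform a CSV loader performs: turn raw CSV text
// into plain objects suitable for collection.insertMany(). The real loader
// in ./js also handles file discovery and the mongodb connection; this
// sketch covers only parsing, and assumes simple, unquoted CSV.
function csvToDocs(csvText) {
  const [headerLine, ...rows] = csvText.trim().split('\n');
  const headers = headerLine.split(',').map(h => h.trim());
  return rows.map(row => {
    const values = row.split(',').map(v => v.trim());
    // Zip each header with its value to build one document per row.
    return headers.reduce((doc, h, i) => ({ ...doc, [h]: values[i] }), {});
  });
}

// Example (hypothetical columns):
const docs = csvToDocs('name,status\nalpha,ok\nbeta,warning');
// docs[0] → { name: 'alpha', status: 'ok' }
```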
The app has the following interesting properties:
React/Redux are used to manage user events and generate DOM changes. State updates flow through pure functions (Redux reducers), which keeps them predictable and easy to test.
A metadata layer ensures that data is decoupled from input controls. Adding a metadata table and joining it to the actual input is a clean method of describing whether particular data can be added to tooltips, used for pivots, or used for aggregation. This allows a single point of control for any dataset.
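The real shape of the metadata lives in js-react/metadata.js and may differ, but a hedged sketch of the idea — one table entry per field, declaring how that field may be used — might look like:

```javascript
// Hypothetical metadata table: each entry declares how a field may be used,
// so input controls can be generated from the table rather than hard-coded.
// Field names and flags here are illustrative, not the app's exact schema.
const metadata = {
  status:  { label: 'Status',  tooltip: true,  pivot: true,  aggregate: false },
  region:  { label: 'Region',  tooltip: true,  pivot: true,  aggregate: false },
  revenue: { label: 'Revenue', tooltip: false, pivot: false, aggregate: true  },
};

// "Join" the metadata to the data by field name: a control asks which
// fields support a capability instead of knowing about fields directly.
function fieldsFor(capability) {
  return Object.keys(metadata).filter(field => metadata[field][capability]);
}
```

Swapping in a new dataset then means editing this one table, not every control.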
Label placement on a crowded graph is a hard problem. It is solved here with a force graph that pushes overlapping label names apart until they no longer collide (as in the screenshot directly to the right).
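The app uses a full force simulation; as a toy illustration of the core step, here is a 1-D version in which overlapping labels repel each other symmetrically until a minimum gap opens up:

```javascript
// Toy 1-D sketch of force-directed label placement: labels closer together
// than minGap push each other apart, half the overlap each way, repeated
// until the layout relaxes. (The real app uses a 2-D force graph.)
function spreadLabels(positions, minGap, iterations = 50) {
  const ys = positions.slice().sort((a, b) => a - b);
  for (let iter = 0; iter < iterations; iter++) {
    for (let i = 1; i < ys.length; i++) {
      const overlap = minGap - (ys[i] - ys[i - 1]);
      if (overlap > 0) {
        // Push the colliding pair apart symmetrically.
        ys[i - 1] -= overlap / 2;
        ys[i] += overlap / 2;
      }
    }
  }
  return ys;
}
```

The symmetric push keeps the cluster roughly centered on its original location, which is why the labels stay near the data points they annotate.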
A force graph is also used to model a simple network. In the app, the network is a simple hierarchy built from inherent parent-child relationships within the dataset.
The app supports auto-rollup of status within the network visualization. Statuses are plotted as red (bad), yellow (warning), green (ok), and gray (unknown). A parent node can have its own innate status, but it also displays (on its ringed outer border) the worst status of any of its descendants.
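The rollup itself is a small recursive fold over the hierarchy. A sketch, assuming a `{ status, children }` node shape and a severity ordering that are illustrative rather than the app's exact code:

```javascript
// Sketch of the status roll-up: a parent's outer ring shows the worst
// status found anywhere in its subtree. Severity order and field names
// here are assumptions, not the app's exact implementation.
const SEVERITY = { red: 3, yellow: 2, green: 1, gray: 0 };

function worst(a, b) {
  return SEVERITY[a] >= SEVERITY[b] ? a : b;
}

// Returns the worst status in the subtree rooted at `node`
// ({ status, children }) — the value the parent's ring would display.
function rollup(node) {
  return (node.children || []).reduce(
    (acc, child) => worst(acc, rollup(child)),
    node.status
  );
}
```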
Data transforms use the mongodb aggregation pipeline, a highly efficient method for retrieving large datasets on the server. The aggregation pipeline handles filtering, pivoting, and aggregation. The same work could have been done in the browser, but as the data scales, pushing it to the server performs far better.
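A representative pipeline for this pattern is shown below, with an in-memory illustration of what the two stages mean. Field names are invented for the example, and in the app the database would execute the pipeline server-side via `collection.aggregate(pipeline)` — the little interpreter here exists only to show the semantics:

```javascript
// Representative pipeline: filter ($match), then pivot and aggregate
// ($group). Field names are illustrative.
const pipeline = [
  { $match: { region: 'west' } },               // filtering
  { $group: { _id: '$status',                   // pivoting on status
              total: { $sum: '$revenue' } } },  // aggregation
];

// Tiny in-memory illustration of the $match + $group semantics.
// (In the app, mongodb runs this on the server — not the browser.)
function runPipeline(docs, [match, group]) {
  const filtered = docs.filter(d =>
    Object.entries(match.$match).every(([k, v]) => d[k] === v));
  const key = group.$group._id.slice(1);            // '$status' → 'status'
  const sumField = group.$group.total.$sum.slice(1); // '$revenue' → 'revenue'
  const totals = {};
  for (const d of filtered) {
    totals[d[key]] = (totals[d[key]] || 0) + d[sumField];
  }
  return Object.entries(totals).map(([_id, total]) => ({ _id, total }));
}
```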