I'd like to write a MongoDB cache backend for gzippo, so I needed to make the cache storage modular.
I don't want to force a particular API on you, nor do I want to make the MongoDB backend and then need to rewrite large portions if the API is changed dramatically.
So, here's a first stab at a modular cache store, which I think was on your todo list anyway. All of the tests still pass, and it's working on the dev version of our (yet-to-be-launched) site.
The MongoDB cache backend will probably be in a repo named: gzippo-cache-mongodb.
I can also see a file-based caching backend, which could be part of mainline, and is also on your todo list, afaict.
Converted the gzippoCache object into a replaceable object with a simple get/set API for future extension (such as a MongoDB driver)
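The idea sketched below is a hypothetical illustration of that replaceable store, not gzippo's published API (`MemoryStore` and the key/value shapes are assumed names): any object exposing the same get/set pair could be swapped in as a backend.

```javascript
// Illustrative in-memory store with the minimal get/set contract.
function MemoryStore() {
  this.cache = {};
}

// Look up a previously cached entry by key.
MemoryStore.prototype.get = function (key) {
  return this.cache[key];
};

// Store (or overwrite) an entry under the given key.
MemoryStore.prototype.set = function (key, value) {
  this.cache[key] = value;
};
```

A MongoDB-backed store would implement the same two methods against the database instead of a plain in-memory object.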
Accidentally created a global variable. Fixing.
Whoops. I just noticed a mistake in the code. Closing and will re-open with the correction.
I didn't realize GitHub uses HEAD for pull requests. Re-opening.
This looks good; however, I was thinking of moving the memory store into a new file in lib named memoryStore.js, which would allow us to have a fileStore.js soon.
I really like the idea of a gzippo-cache-mongodb module. I was wondering: would you be using GridFS in Mongo at all?
I don't see any other way to do the MongoDB data storage, though perhaps there should be a parameter to disable it if all files are known to be <4MB for better performance?
I was planning on requiring just the MongoDB native Node.js driver and using its GridFS API. The actual database it writes to would be determined by parameters passed in on creation of the cache module (either a database object, or the fields required to construct one, so the caller wouldn't need a var mongodb = require('mongodb') declaration).
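A hedged sketch of what such a constructor might look like, shown with the native driver's GridFSBucket API (a later addition to the driver; the original discussion predates it). `MongoStore`, its option names, and the method bodies are assumptions, not a published API:

```javascript
// Hypothetical constructor: accepts either an already-constructed db
// object, or enough fields to build a client, so callers don't have to
// require('mongodb') themselves.
function MongoStore(options) {
  this.db = options.db || null;
  this.url = options.url;       // e.g. a mongodb:// connection string
  this.dbName = options.dbName; // used when no db object was passed
}

// Lazily build the GridFS bucket; chunkSizeBytes is the configurable
// chunk size discussed in the thread (the current driver defaults to
// 255 KB chunks).
MongoStore.prototype._bucket = function () {
  var mongodb = require('mongodb'); // required only by the backend itself
  return new mongodb.GridFSBucket(this.db, { chunkSizeBytes: 255 * 1024 });
};

// Read a cached entry back out of GridFS by name.
MongoStore.prototype.get = function (key, callback) {
  var chunks = [];
  this._bucket().openDownloadStreamByName(key)
    .on('data', function (c) { chunks.push(c); })
    .on('error', callback)
    .on('end', function () { callback(null, Buffer.concat(chunks)); });
};

// Stream a gzipped buffer into GridFS under the given name.
MongoStore.prototype.set = function (key, buffer, callback) {
  var upload = this._bucket().openUploadStream(key);
  upload.on('error', callback);
  upload.on('finish', function () { callback(null); });
  upload.end(buffer);
};
```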
And I didn't split it off because I don't want to refactor your code too much.
I will possibly do the refactor later tonight. I think if you are going to be using GridFS, there will be no point disabling it for files less than 4MB, as the vast majority of gzipped assets will likely be smaller than that. Also, if I am not mistaken, GridFS splits the binary into 256k chunks.
I can split it off if you'd prefer, then.
Also, my idea wasn't to toggle GridFS on or off based on file size, but to offer it as a setup option on DB creation. If they have a large number of tiny files, the performance increase may be welcome (if they know they'd never need to split a file).
And from my reading, the chunk size is a configurable option for GridFS.
Split off the in-memory store to a separate source file.
Split the memory store off. I'm thinking it would be a good idea to have a separate test file for the store, so other store implementations can validate that they're compatible.
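A shared compatibility check along those lines might look like the following sketch (the function name and exact assertions are assumptions; at this point in the thread the store API is still synchronous):

```javascript
// Hypothetical round-trip check any store implementation could run
// against the synchronous get/set contract.
function assertStoreCompatible(store) {
  store.set('key', 'value');
  if (store.get('key') !== 'value') {
    throw new Error('store failed the get/set round-trip');
  }
  if (store.get('missing') !== undefined) {
    throw new Error('store returned a value for an unknown key');
  }
}
```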
Modified the API to take callbacks so non-blocking IO will work and the API is more Node-like.
I realized just now that while this API works for the in-memory store, it wouldn't work for MongoDB or any other non-blocking IO library in Node.js, so I adjusted it to use callbacks.
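The adjusted contract might look like this sketch (illustrative, not the exact committed code): both methods take a Node-style callback, so an async backend like MongoDB can satisfy the same interface as the in-memory store.

```javascript
function MemoryStore() {
  this.cache = {};
}

// get(key, callback) — callback receives (err, value).
MemoryStore.prototype.get = function (key, callback) {
  var value = this.cache[key];
  // process.nextTick keeps the callback asynchronous even though the
  // in-memory lookup itself is synchronous
  process.nextTick(function () { callback(null, value); });
};

// set(key, value, callback) — callback receives (err).
MemoryStore.prototype.set = function (key, value, callback) {
  this.cache[key] = value;
  process.nextTick(function () { callback(null); });
};
```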
Let's be honest: there's no need to return the old version when it's overwritten.
I have merged your changes into my develop branch, as I am going to keep master stable for the moment. I have slightly adjusted the logic and code layout in commit 06036ea, and I have also removed references to undefined for some slight speed improvements in b403aad.
Seems reasonable. I hadn't seen an update to the develop branch for a while according to your Network graph, so I thought you were phasing it out.
Also, on that last commit, you need to change line 22 in memoryStore.js from return callback; to return callback();
Right now the set callback is not doing anything, but this code doesn't execute it (and instead returns a reference to the callback).
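A minimal illustration of that fix (the surrounding body is a sketch, not the repo's exact code):

```javascript
function MemoryStore() { this.cache = {}; }

MemoryStore.prototype.set = function (key, value, callback) {
  this.cache[key] = value;
  if (!callback) return;
  // was: return callback;  — that returns the function reference and the
  // caller's callback never runs; invoking it is the fix:
  return callback();
};
```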
I think we can close this pull req.
Awesome. Yeah, I just moved the old develop branch into the branch streamingGzip to bring develop back in line.