flumelog-offset

A flumelog where the offset into the file is the key. Each value is appended to the log with double-ended framing, and the "sequence" is the position in the physical file where the value starts, which means you can do a read in O(1) time!

Also, this is built on top of aligned-block-file so that caching works very well.
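
For example, the log can also be used on its own, without flumedb. The following is a minimal sketch assuming the standard flumelog append/get API (the file path is just an example):

var OffsetLog = require('flumelog-offset')
var codec = require('flumecodec')

// the "sequence" returned by append is the byte offset where the record starts
var log = OffsetLog('/tmp/example.log', {codec: codec.json})

log.append({hello: 'world'}, function (err, offset) {
  // reading by offset is a single read at a known position, hence O(1)
  log.get(offset, function (err, value) {
    console.log(offset, value)
  })
})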

Usage

Initialize with a file and a codec, and wrap with flumedb.

var OffsetLog = require('flumelog-offset')
var codec = require('flumecodec')
var Flume = require('flumedb')

var db = Flume(OffsetLog(filename, {codec: codec.json}))
  .use(...) //also add some flumeviews

db.append({greets: 'hello!'}, function (err, offset) {
  // offset is the byte position in the file where the value was written
})
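
The offset passed to the callback can then be used to read the value back. A minimal sketch, assuming flumedb's get:

db.append({greets: 'hello!'}, function (err, offset) {
  if (err) throw err
  db.get(offset, function (err, value) {
    console.log(value) // => {greets: 'hello!'}
  })
})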

Options

var OffsetLog = require('flumelog-offset')
var log = OffsetLog('/data/log', {
  blockSize: 1024,         // default is 1024*16
  codec: {encode, decode}, // defaults to a json codec
  flags: 'r',              // default is 'a+'
  cache: {set, get},       // default is require('hashlru')(1024)
  offsetCodec: {           // default is require('./frame/offset-codecs')[32]
    byteWidth,             // with the default offset codec, the file can have
    encode,                // a size of 4GB max.
    decodeAsync
  }
})
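
For example, a custom codec only needs encode and decode functions that convert values to and from Buffers. A minimal sketch (the string codec and file path below are made up for illustration):

var OffsetLog = require('flumelog-offset')

// hypothetical codec that stores plain strings instead of JSON
var stringCodec = {
  encode: function (value) { return Buffer.from(String(value)) },
  decode: function (buf) { return buf.toString('utf8') }
}

var log = OffsetLog('/tmp/strings.log', {codec: stringCodec})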

legacy

If you used flumelog-offset before version 3 and want to read your old data, use require('flumelog-offset/legacy').
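
For example:

// reads logs written with the pre-3.x on-disk format
var OffsetLog = require('flumelog-offset/legacy')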

recovery

If your system crashes while an append is in progress, it's unlikely but possible to end up with a partially written state. flumelog-offset will rewind to the last good state on the next startup.

After running this for several months (in my personal secure-scuttlebutt instance) I eventually got an error, which led to the changes in this version.

format

Data is stored in an append-only log, where the byte index of the start of a record is the primary key (offset).

offset-><data.length (UInt32BE)>
        <data ...>
        <data.length (UInt32BE)>
        <file_length (UInt32BE or UInt48BE or UInt53BE)>

By writing the length of the data both before and after each record, it becomes possible to scan forward and backward (like a doubly linked list).
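
For illustration, a minimal sketch of how a single record could be framed with the default 32-bit offset codec (this is not the actual implementation, just the layout above expressed in code):

// data: the already-encoded value as a Buffer
// offset: the byte position in the file where this record starts
function frameRecord (data, offset) {
  var buf = Buffer.alloc(data.length + 12)
  buf.writeUInt32BE(data.length, 0)                       // length before the data
  data.copy(buf, 4)                                       // the value itself
  buf.writeUInt32BE(data.length, 4 + data.length)         // length after the data
  buf.writeUInt32BE(offset + buf.length, 8 + data.length) // file length after this record
  return buf
}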

It's very handy to be able to scan backwards, as you often want to see the last N items, and this way you don't need an index for that.
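
For example, reading the most recent records is just a reverse scan from the end of the file. A sketch, assuming the log's pull-stream interface supports reverse and limit:

var pull = require('pull-stream')

pull(
  log.stream({reverse: true, limit: 5}),
  pull.collect(function (err, lastFive) {
    // lastFive holds the five most recently appended records
    console.log(lastFive)
  })
)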

future ideas

  • secured file (hashes etc)
  • encrypted file
  • make the end of the record be the primary key. This might make other code nicer...

License

MIT