Problems with this proposal

  • The locking in this proposal is a problem: it's mandatory, and locks don't span multiple reads & writes. Locking should be optional and independent. Whether it should be opt-in or opt-out is another issue. #8.
  • A lower-level alternative may be sync access in a worker, similar to mmap. #4.

Byte storage

The aim is to provide a low-level disk-backed storage system.

A measure of success would be being able to use this API to create a custom disk-backed data store, such as SQLite.

API

self.byteStorage:ByteStorage;

The entry point for the API.

Reading from a byte store

const readable:ReadableStream = await byteStorage.read(name:String, {
  start:Number = 0,
  end:Number
});
  • name - the identifier of the store. This can be any string. / has no special meaning.
  • start - start point within the store in bytes. If negative, treated as an offset from the end of the store.
  • end - end point within the store in bytes. If negative, treated as an offset from the end of the store. If not provided, treated as the end of the store.

Resolves once:

  • All write locks within intersecting byte ranges are released.
  • A read lock is granted for the start-end.

Rejects if:

  • name is not an existing byte store.
  • The computed start point is less than 0.
  • The computed start point is greater than the length of the byte store.
  • The computed end point is less than 0.
  • The computed end point is greater than the length of the byte store.
  • The computed end point is less than the start point.
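
The start/end rules above can be sketched as a small helper. resolveRange is a name invented here for illustration, not part of the proposal:

```javascript
// Resolve the start/end options against a store of the given length,
// applying the negative-offset and rejection rules listed above.
function resolveRange(storeLength, { start = 0, end = storeLength } = {}) {
  const s = start < 0 ? storeLength + start : start;
  const e = end < 0 ? storeLength + end : end;
  if (s < 0 || s > storeLength) throw new RangeError('start out of range');
  if (e < 0 || e > storeLength) throw new RangeError('end out of range');
  if (e < s) throw new RangeError('end before start');
  return { start: s, end: e };
}

// resolveRange(100, { start: -10 }) → { start: 90, end: 100 }
```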

readable is a ReadableStream with an underlying byte source. Non-BYOB reads produce Uint8Arrays.

The read lock is released once the readable is closed.
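
A minimal usage sketch, assuming the proposed API; the store name 'some-data' is illustrative:

```javascript
const readable = await byteStorage.read('some-data', { start: 0, end: 16 });
const reader = readable.getReader();

let result;
while (!(result = await reader.read()).done) {
  console.log(result.value); // Uint8Array chunks
}
// The stream is now closed, so the read lock is released.
```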

Writing to a byte store

const writable:WritableStream = await byteStorage.write(name:String, {
  start:Number = 0,
  end:Number
});
  • name - the identifier of the store.
  • start - start point within the store in bytes. If negative, treated as an offset from the end of the store.
  • end - end point within the store in bytes. If negative, treated as an offset from the end of the store. If not provided, the store may continue to write beyond its current length, increasing its size.

Resolves once:

  • The byte store entry is created, if it does not already exist.
  • The space is allocated (zeroed), if end is provided and its computed value is greater than the length of the current store.
  • The space is allocated (zeroed), if the computed start is greater than the length of the current store.
  • All read and write locks for intersecting byte ranges are released.
  • A write lock for the start-end (or end of the store) is granted.

Rejects if:

  • The computed start point is less than 0.
  • The computed end point is less than 0.
  • The space cannot be allocated.

writable accepts chunks of ArrayBuffer or ArrayBufferView. Writing will error if additional allocation fails (this can only happen if end was not provided).

If end is provided, the writable will close once end - start bytes have been queued.

If more than end - start bytes are queued, the writable errors.

The write lock is released once the writable is closed.
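
A usage sketch, again assuming the proposed API; the store name and bytes written are illustrative:

```javascript
// Overwrite bytes 0–3 of the store. Because end is provided, the writable
// closes automatically once end - start bytes have been queued.
const writable = await byteStorage.write('some-data', { start: 0, end: 4 });
const writer = writable.getWriter();
await writer.write(new Uint8Array([0xca, 0xfe, 0xba, 0xbe]));
await writer.closed; // resolves once the writable closes and the write lock is released
```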

Transforming a byte store

byteStorage.transform(name:String, {
  readable:ReadableStream,
  writable:WritableStream
}, {
  start:Number = 0,
  end:Number
});

This functions the same as .write except:

  • Writes to the writable are buffered if they're beyond the current read point (unless it's the end of the store). This means if you read 1 byte, and write 2, the next read in the transform will not include your written byte.
  • Rejects if store name doesn't exist.
  • If the readable closes before end - start bytes are queued, the writable closes. TODO: should we leave untouched bytes alone in this case?
  • If the readable errors, any already-written bytes are retained; this is not a transactional system, so it won't undo the changes made so far.
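
A sketch of a transform that XORs every byte in place, assuming the proposed API and that .transform returns a promise (the store name is illustrative):

```javascript
// A TransformStream is a { readable, writable } pair, so it can be passed
// directly as the second argument.
const xorAll = new TransformStream({
  transform(chunk, controller) {
    // chunk is a Uint8Array read from the store
    controller.enqueue(chunk.map(byte => byte ^ 0xff));
  }
});

await byteStorage.transform('some-data', xorAll);
```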

Retrieving metadata on a byte store

const data = await byteStorage.status(name:String);

const {
  size:Number,
  created:Date,
  modified:Date
} = data;
  • name - the identifier of the store.

Resolves once:

  • All write locks for the store are released. TODO: or shall we just return the information we have, which may include half-written data?

data is null if the store does not exist.

Resizing a byte store

await byteStorage.resize(name:String, end:Number);
  • name - the identifier of the store.
  • end - end point within the store in bytes. If negative, treated as an offset from the end of the store.

Resolves once:

  • The space is allocated (zeroed), if the computed end is greater than the length of the current store.
  • All read and write locks for end until the end of the resource are released.
  • A write lock for end until the end of the resource is granted.
  • The space is allocated/deallocated.

Rejects if:

  • name is not an existing byte store.
  • The computed end point is less than 0.
  • The space cannot be allocated.

Deleting a byte store

const existed:Boolean = await byteStorage.delete(name:String);

Resolves once:

  • All read and write locks for intersecting byte ranges are released.
  • A write lock for the whole store is granted.
  • The store is unlinked.

TODO: or should we be more aggressive here, and error current reads & writes?

Getting all store names

We could add a .keys() method, or just use async iterators.

Helpers for simple reads and writes?

const data:UInt8Array = await byteStorage.readAll(name:String, {
  start:Number = 0,
  end:Number
});

await byteStorage.writeAll(name:String, data, {
  start:Number = 0
});

TODO: Do we need methods like above for making simple reads & writes?

Issues

Permalock

await byteStorage.write('foo');

The above locks the whole of "foo" until the client closes. We could work around this by:

  • Adding timeouts.
  • Providing a method to discard existing locks (erroring the related open streams).

Multi-action locking

Do we need an API to create locks independent of particular actions? E.g.:

  • I have a 500 byte store containing PNG data. I want to lock the whole store while I compress the data, which includes reading, writing, and hopefully truncating.
  • I am transforming some data, but I'm also buffering what I read. If my write errors, I want to write back what I originally read within the same lock, effectively undoing the partial transform.
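
One purely hypothetical shape such an API could take, using the PNG-compression scenario above. The lock, readAll, writeAll, resize and release names below are invented for illustration, and compress is assumed to be user-provided:

```javascript
const lock = await byteStorage.lock('png-data'); // whole-store lock
try {
  const original = await lock.readAll();
  const compressed = compress(original);
  await lock.writeAll(compressed, { start: 0 });
  await lock.resize(compressed.byteLength); // truncate under the same lock
} finally {
  lock.release();
}
```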

Examples

Writing a fetch response into byte storage

(async function() {
  const response = await fetch(url);
  const opts = {};
  
  const contentLength = response.headers.get('Content-Length');
  if (contentLength) {
    opts.end = Number(contentLength);
  }
  
  await response.body.pipeTo(
    await byteStorage.write('some-data', opts)
  );
})();

Reading number of data chunks in a custom structure

Imagine a data structure that was an unsigned long, and then a set of data of length specified by that long (in bytes). The sequence ends with an unsigned long equal to zero.

async function itemsInStructure() {
  let start = 0;
  let num = 0;
  
  while (true) {
    const data = await byteStorage.readAll('data-structure', {start, end: start + 4});
    // DataView avoids the alignment and offset pitfalls of wrapping data.buffer
    // in a Uint32Array; little-endian length prefixes are assumed.
    const nextChunkLen = new DataView(data.buffer, data.byteOffset, data.byteLength).getUint32(0, true);
    if (nextChunkLen === 0) return num;
    num++;
    start += 4 + nextChunkLen;
  }
}
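
The example above can be exercised today against a small in-memory stand-in for readAll. The Map-backed byteStorage below is a test double invented here, not the proposed implementation, and little-endian length prefixes are assumed:

```javascript
// In-memory stand-in for the proposed byteStorage.readAll.
const stores = new Map();

const byteStorage = {
  async readAll(name, { start = 0, end } = {}) {
    const buf = stores.get(name);
    if (!buf) throw new Error(`No such store: ${name}`);
    return new Uint8Array(buf.slice(start, end));
  },
};

async function itemsInStructure(name) {
  let start = 0;
  let num = 0;

  while (true) {
    const data = await byteStorage.readAll(name, { start, end: start + 4 });
    // Read the length prefix without assuming the view starts at buffer offset 0.
    const nextChunkLen = new DataView(data.buffer, data.byteOffset, 4).getUint32(0, true);
    if (nextChunkLen === 0) return num;
    num++;
    start += 4 + nextChunkLen;
  }
}

// Build a store with two chunks (3 bytes and 5 bytes) plus the zero terminator.
const bytes = new Uint8Array(4 + 3 + 4 + 5 + 4);
const view = new DataView(bytes.buffer);
view.setUint32(0, 3, true);
view.setUint32(4 + 3, 5, true);
// The terminator bytes are already zero.
stores.set('data-structure', bytes.buffer);

// await itemsInStructure('data-structure') → 2
```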

This example is one that would benefit from a single lock across multiple reads.