
Releases: arindas/laminarmq

0.0.5

24 May 15:55
1e816fc

What's Changed

  • Provide necessary abstractions for the REST API server by @arindas in #7
  • Return spawned task from HyperExecutor::serve() to better handle task cleanup. by @arindas in #8
  • Adds storage::impls::glommio::{dma, buffered} by @arindas in #16
  • doc: updates README to reflect new design changes by @arindas in #17
  • feat: lowers read times, binds to 0.0.0.0, introduces support for non-blocking connection throttling by @arindas in #18
  • feat: adds tokio impl for storage apis by @arindas in #19
  • feat: adds benchmarks, performance improvements for segmented log by @arindas in #21
  • style: improve readability by @arindas in #23
  • chore: merge develop to update main by @arindas in #24
  • doc: adds documentation to prepare for 0.0.5 release by @arindas in #25
  • chore: merge development updates by @arindas in #26

Full Changelog: 0.0.4...0.0.5

0.0.5-rc2

04 Oct 17:35
cdfbb2a
Pre-release

What's Changed

  • Adds storage::impls::glommio::{dma, buffered} by @arindas in #16
  • doc: updates README to reflect new design changes by @arindas in #17
  • feat: lowers read times, binds to 0.0.0.0, introduces support for non-blocking connection throttling by @arindas in #18
  • feat: adds tokio impl for storage apis by @arindas in #19
  • feat: adds benchmarks, performance improvements for segmented log by @arindas in #21

Full Changelog: 0.0.5-rc1...0.0.5-rc2

0.0.5-rc1

07 May 08:02
ed4beea
Pre-release

Release notes

The primary feature made available with this release is the "indexed segmented log".

laminarmq-specific enhancements to the segmented_log data structure

While the conventional segmented_log data structure is quite performant for a commit_log implementation, it requires the following properties to hold for any record being appended:

  • We have the entire record in memory
  • We know the record bytes' length and checksum before the record is appended

This information is not available when the record bytes are read from an asynchronous stream of bytes. Without the enhancements, we would have to concatenate intermediate byte buffers into a single vector before appending to the segment. This would incur extra allocations and slow down our system.

Hence, to accommodate this use case, we introduced an intermediate indexing layer to our design.

Fig: Data organisation for persisting the segmented_log data structure on a *nix file system.

In the new design, instead of referring to records with a raw offset, we refer to them with indices. The index in each segment translates the record indices to the raw file position in the segment store file.
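As a rough sketch of the idea (names like `IndexRecord`, `SegmentIndex`, and `position_of` are illustrative assumptions, not the crate's actual API), a per-segment index can be modeled as an append-only array of entries that map a logical record index to a byte position in the segment store file:

```rust
/// Illustrative index entry; field names are assumptions, not laminarmq's API.
#[derive(Debug, Clone, Copy)]
struct IndexRecord {
    checksum: u64, // checksum of the record bytes in the store
    length: u64,   // number of record bytes
    position: u64, // byte offset of the record in the segment store file
}

/// A segment index: entry `i` describes the record with
/// logical index `base_index + i`.
struct SegmentIndex {
    base_index: u64,
    records: Vec<IndexRecord>,
}

impl SegmentIndex {
    /// Translate a logical record index to its store-file position.
    fn position_of(&self, index: u64) -> Option<u64> {
        index
            .checked_sub(self.base_index)
            .and_then(|i| self.records.get(i as usize))
            .map(|record| record.position)
    }
}

fn main() {
    let index = SegmentIndex {
        base_index: 100,
        records: vec![
            IndexRecord { checksum: 0xAB, length: 16, position: 0 },
            IndexRecord { checksum: 0xCD, length: 24, position: 16 },
        ],
    };

    assert_eq!(index.position_of(101), Some(16));
    assert_eq!(index.position_of(99), None); // below this segment's base index
}
```

Because each entry carries the record's position and length, a read by index needs no scan over the store file.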

The store append operation now accepts an asynchronous stream of bytes instead of a contiguously laid-out slice. As the record bytes are written, we compute their length and checksum. Once the write completes, we persist the corresponding record_header (containing the checksum and length), position and index as an index_record in the segment index.
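The write path described above can be sketched as follows. This is a minimal synchronous stand-in: an iterator of chunks plays the role of the async byte stream, an in-memory `Vec<u8>` plays the role of the store file, and `DefaultHasher` is a placeholder for whatever checksum the crate actually uses; all names here are assumptions.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

/// Stand-in for the segment store file.
struct Store {
    bytes: Vec<u8>,
}

/// Length and checksum, computed while writing (assumed shape).
struct RecordHeader {
    checksum: u64,
    length: u64,
}

impl Store {
    /// Append record bytes arriving in chunks, computing length and
    /// checksum on the fly; returns the record's position and header.
    fn append<'a, I: IntoIterator<Item = &'a [u8]>>(&mut self, chunks: I) -> (u64, RecordHeader) {
        let position = self.bytes.len() as u64;
        let mut hasher = DefaultHasher::new(); // placeholder checksum
        let mut length = 0u64;

        for chunk in chunks {
            hasher.write(chunk);
            length += chunk.len() as u64;
            self.bytes.extend_from_slice(chunk);
        }

        (position, RecordHeader { checksum: hasher.finish(), length })
    }
}

fn main() {
    let mut store = Store { bytes: Vec::new() };
    let (position, header) = store.append([b"hello ".as_slice(), b"world".as_slice()]);

    assert_eq!(position, 0);
    assert_eq!(header.length, 11);
    // At this point the (checksum, length, position, index) tuple would be
    // written to the segment index as an index_record.
}
```

Note that no chunk is ever copied into an intermediate concatenation buffer; each chunk is hashed, counted, and written exactly once.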

This provides two quality-of-life enhancements:

  • Asynchronous streaming writes, without concatenating intermediate byte buffers
  • Easier record access through simple indices instead of raw offsets

To prevent a malicious client from exhausting our storage capacity and memory with a crafted request that streams data indefinitely, all append operations accept an optional append_threshold parameter. When provided, it caps a streaming append at append_threshold bytes.

At the segment level, this requires us to keep a segment overflow capacity. All segment append operations now use segment_capacity - segment.size + segment_overflow_capacity as the append_threshold value. A good segment_overflow_capacity value could be segment_capacity / 2.
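The threshold arithmetic is simple enough to spell out (function and parameter names here are illustrative, not the crate's API):

```rust
/// Effective append_threshold for a segment append, as described above:
/// the remaining nominal capacity plus the overflow slack.
fn append_threshold(segment_capacity: u64, segment_size: u64, overflow_capacity: u64) -> u64 {
    segment_capacity - segment_size + overflow_capacity
}

fn main() {
    let segment_capacity = 1024;
    let segment_size = 768; // bytes already written to this segment
    let overflow_capacity = segment_capacity / 2; // the suggested value

    // 1024 - 768 + 512 = 768 bytes may still be streamed in.
    assert_eq!(
        append_threshold(segment_capacity, segment_size, overflow_capacity),
        768
    );
}
```

The overflow slack exists because a streaming append may legitimately carry a segment past its nominal capacity; the segment is then rotated, so the overshoot stays bounded by segment_overflow_capacity.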

The new indexed segmented log implementation is present under the storage::commit_log::segmented_log module.

Why is this a release candidate?

We intend to move from the old segmented log implementation to the new indexed version going forward. However, the new implementation is not API-compatible with the old one, so other modules that depend on the segmented log, such as the server::* modules, might need to be rewritten. I am making this release candidate available before that drastic change as a safety measure.

The next release will focus on integration with the server modules, impls for different async runtimes and better documentation coverage.

What's Changed

  • Provide necessary abstractions for the REST API server by @arindas in #7
  • Return spawned task from HyperExecutor::serve() to better handle task cleanup. by @arindas in #8
  • Adds streaming append capability. by @arindas in #11
  • Add development updates from #11 by @arindas in #12
  • doc: fix doc badge urls, updates README by @arindas in #13
  • Adds doc updates from #13 by @arindas in #14

Full Changelog: 0.0.4...0.0.5-rc1

0.0.4

28 Dec 17:46
a778f81
Pre-release

What's Changed

  • Improve code coverage and remove unused modules by @arindas in #6

Full Changelog: 0.0.3...0.0.4

0.0.3

23 Dec 18:53
dfc2cfa
Pre-release

What's Changed

  • Refactor/record api by @arindas in #1
  • Merge development updates for refactors. by @arindas in #2
  • Overhaul documentation in different submodules by @arindas in #3
  • Added necessary pieces of RPC Server implementation by @arindas in #4
  • Stabilize RPC server components. by @arindas in #5

Full Changelog: 0.0.2...0.0.3

0.0.2

23 Sep 14:18
7347765
Pre-release

Release notes

  • Update {store, segment, log}::read() API to return the offset or position (where applicable) for the next record.
    This makes client code more ergonomic as they always know which offset to read from if they know the first offset.
  • Removed leaky abstractions
    • Removed header_padded_record_length() from commit_log::store::common
    • Removed advance_to_offset() and offset_of_record_after() from
      commit_log::CommitLog. They were leaking implementation details.

0.0.1

21 Sep 18:48
aefb907
Pre-release

Release notes

  • Cross-platform segmented log implementation.
  • glommio-based segmented log store implementation.