Releases: luposlip/nd-db
v0.9.0-beta12
Changes:
- ns nd-db.compress has been removed; its functionality has been extracted into the new library com.luposlip/clarch
- new fn or-q takes a seq of dbs and queries them in order until a value is returned (see the sketch below)
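As an illustration of how or-q might be used (the namespace and argument order here are assumptions, not confirmed by these notes; the filenames are placeholders):

```clojure
(require '[nd-db.core :as nddb])

;; Two databases holding the same kind of documents, e.g. current data and an archive
(def db-current (nddb/db :filename "current.ndjson" :id-path :id))
(def db-archive (nddb/db :filename "archive.ndjson" :id-path :id))

;; Assumed call shape: query db-current first, then db-archive,
;; returning the first value found for the given id
(nddb/or-q [db-current db-archive] 123)
```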
Full Changelog: v0.9.0-beta10...v0.9.0-beta12
v0.9.0-beta11
Wraps the gzip output stream in a tar output stream for the new fn tar-gz-outputstream.
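The underlying idea, sketched with plain Java interop and Apache Commons Compress; this is not the actual nd-db.compress implementation, just an illustration of the stream layering:

```clojure
;; Assumes org.apache.commons/commons-compress is on the classpath.
(import '[java.io BufferedOutputStream FileOutputStream]
        '[java.util.zip GZIPOutputStream]
        '[org.apache.commons.compress.archivers.tar TarArchiveOutputStream])

;; A tar stream layered on top of a gzip stream: entries written to the
;; outermost stream end up in a .tar.gz archive on disk.
(defn example-tar-gz-output-stream [^String path]
  (-> (FileOutputStream. path)
      (BufferedOutputStream.)
      (GZIPOutputStream.)
      (TarArchiveOutputStream.)))
```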
Full Changelog: v0.9.0-beta10...v0.9.0-beta11
v0.9.0-beta10
- Add helper fns in the nd-db.compress namespace for saving (not yet reading) .tar.gz archives
- Update dependencies
Full Changelog: v0.9.0-beta9...v0.9.0-beta10
v0.9.0-beta9
Updated dependencies, and throws a sane error when nippy versions are incompatible.
Full Changelog: v0.9.0-beta8...v0.9.0-beta9
v0.9.0-beta8 - bugfix release
Fixes a bug with the index returned from appending docs.
v0.9.0-beta7 - append docs
This version makes it possible to add documents to an existing nd-db database, efficiently appending to the database file and the index (nddbmeta) file:
- Append documents to existing nd-db files (previously v1.0.0)
- Optional end-pointer parameter for versioning
  - no parameter:
    - use everything in the file, including new doc versions
    - added docs will update the index (thus preventing getting a new db value until done)
    - the index will contain only the newest version of each document
    - a future version of nd-db might contain historical versions
  - parameter:
    - look for nddbmeta using the same line (name of index reflecting lines)
    - if nddbmeta doesn't exist, stop indexing after the passed line number
    - this will create a new .nddbmeta file with a hash and metadata reflecting the passed line number
Multiple documents are automatically written to the db (and index) in batches of 128.
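These notes don't show the appending API itself, so the sketch below is purely illustrative: the append-docs! name and its signature are hypothetical, and only the described batching/indexing behaviour is taken from the notes above:

```clojure
(require '[nd-db.core :as nddb])

;; A db value over an existing database file
(def db (nddb/db :filename "existing-data.ndjson" :id-path :id))

(comment
  ;; append-docs! is a hypothetical name - the real fn and signature may differ.
  ;; Documents are written to the db file (and index) in batches of 128,
  ;; and the index keeps only the newest version of each document.
  (append-docs! db [{:id 1001 :name "new doc"}
                    {:id 1002 :name "another new doc"}]))
```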
v0.9.0-beta2 - compression
Added the nd-db.compress helper namespace for convenience.
v0.9.0-beta1 - True laziness & CSV support
This version adds true laziness (#14) and CSV/TSV file database support (#11).
This is the first beta version of the new v0.9.0, which has a lot of refactoring behind the scenes but keeps the same public API.
True laziness
To use the truly lazy lazy-ids and lazy-docs, you'll have to either delete your pre-v0.9.0 .nddbmeta files or upgrade them.
You may prefer upgrading if the databases they represent are really big, since re-indexing them from scratch might take a while.
To upgrade the .nddbmeta files, simply call nd-db.convert/upgrade-nddbmeta! from a REPL. The function takes a db value parameter and takes care of the rest.
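For example (the filename and :id-path below are placeholders; the upgrade call itself is as described above):

```clojure
(require '[nd-db.core :as nddb]
         '[nd-db.convert :as convert])

;; Build a db value for the database whose .nddbmeta file should be upgraded
(def db (nddb/db :filename "my-data.ndjson" :id-path :id))

;; Upgrades the pre-v0.9.0 .nddbmeta file, after which lazy-ids/lazy-docs work
(convert/upgrade-nddbmeta! db)
```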
CSV Support
CSV databases are just as simple to use, and take up less space than the other data formats, because they don't replicate the keys for every document.
You need an additional parameter, :col-separator, to create a database value based on CSV (or TSV):
(nd-db.core/db :filename "some-data.csv" :col-separator ";" :id-path :id)
Parsing defaults to a parser that simply parses column data as numbers or strings - nothing else. But you can pass your own column parser like this:
(nd-db.core/db :filename "some-data.csv" :col-separator ";" :id-path :id :col-parser my-col-parser-fn)
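The exact contract of the column parser isn't spelled out here, so the sketch below assumes it receives a single column's raw string and returns the parsed value:

```clojure
;; Assumed contract: raw column string in, parsed value out.
(defn my-col-parser-fn [^String raw]
  (cond
    (re-matches #"-?\d+" raw)      (Long/parseLong raw)
    (re-matches #"-?\d+\.\d+" raw) (Double/parseDouble raw)
    :else                          raw))
```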
Refer to unit tests for more info.
To see what's needed before the final release of v0.9.0, check out this pull request: #15
v0.9.0-alpha3
The new .nddbmeta format for version v0.9.0 and forward will not only make a lazy seq of documents available; now even the index can be lazily read.
Also, the indexes are now generated in parallel, meaning 2/3 faster than before (on a MacBook Pro M1 Pro).
Did lots of refactoring so far - more is needed before the final release.
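A small usage sketch, assuming lazy-ids and lazy-docs live in nd-db.core and take a db value (the filename is a placeholder):

```clojure
(require '[nd-db.core :as nddb])

(def db (nddb/db :filename "my-data.ndjson" :id-path :id))

;; With the new .nddbmeta format both of these are truly lazy:
(take 10 (nddb/lazy-ids db))  ;; lazily read ids from the index
(take 10 (nddb/lazy-docs db)) ;; lazily read documents from the database file
```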