A bitemporal ledger database with MVCC, full SQL, real transactions, PITR, vector search, and sub-microsecond reads.
TensorDB is an embedded database that treats every write as an immutable fact. It separates system time (when data was recorded) from business-valid time (when data was true), giving you built-in time travel, auditability, and point-in-time recovery with zero application-level bookkeeping. Written in Rust, it ships as a library for Rust, Python, and Node.js — no server process required.
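The system-time/valid-time split is the core idea. A minimal stdlib-Rust sketch of that data model (illustrative only, not TensorDB's actual types or API): every fact carries both timestamps, and a read filters on both.

```rust
// Illustrative bitemporal model: each fact records system time (when the
// ledger learned it) and valid time (when it was true in the business domain).
#[derive(Clone, Debug, PartialEq)]
struct Fact {
    key: &'static str,
    value: i64,
    system_ts: u64, // when the ledger recorded the fact
    valid_ts: u64,  // when the fact became true in the real world
}

/// Return the value for `key` as seen at `system_ts` / `valid_ts`:
/// the newest fact recorded on or before both timestamps.
fn as_of(ledger: &[Fact], key: &str, system_ts: u64, valid_ts: u64) -> Option<i64> {
    ledger
        .iter()
        .filter(|f| f.key == key && f.system_ts <= system_ts && f.valid_ts <= valid_ts)
        .max_by_key(|f| (f.valid_ts, f.system_ts))
        .map(|f| f.value)
}

fn main() {
    // Append-only: corrections are new facts, never overwrites.
    let ledger = vec![
        Fact { key: "balance", value: 1000, system_ts: 1, valid_ts: 10 },
        // Recorded later (system_ts = 3) but valid earlier (valid_ts = 5):
        // a backdated correction.
        Fact { key: "balance", value: 900, system_ts: 3, valid_ts: 5 },
    ];
    // What did we believe at system time 2? The correction wasn't recorded yet.
    assert_eq!(as_of(&ledger, "balance", 2, 20), Some(1000));
    // At valid time 5, only the backdated correction applies.
    assert_eq!(as_of(&ledger, "balance", 3, 5), Some(900));
    println!("bitemporal as-of queries ok");
}
```

The point of the two axes: "what did we know then?" (system time) and "what was true then?" (valid time) are independent questions, and the ledger answers both without application-level bookkeeping.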
Full 4-way Criterion benchmark (optimized release build):
| Operation | TensorDB | SQLite | sled | redb |
|---|---|---|---|---|
| Point Read | 273 ns | 1,145 ns | 278 ns | 570 ns |
| Point Write | 3.6 µs | 41.9 µs | 4.3 µs | 1,392 µs |
| Batch Write (100) | 1,063 µs | 344 µs | 588 µs | 4,904 µs |
| Prefix Scan (1k) | 249 µs | 137 µs | 165 µs | 63 µs |
| Mixed 80r/20w | 34 µs | 9.5 µs | 1.1 µs | 277 µs |
| SQL SELECT (100 rows) | 56 µs | — | — | — |
| Throughput (reads/sec) | 1k keys | 10k keys | 50k keys |
|---|---|---|---|
| TensorDB | 3.8M | 3.5M | 1.9M |
| sled | 4.6M | 3.5M | 2.8M |
- 4.2x faster reads than SQLite, on par with sled
- 11.6x faster writes than SQLite via the lock-free fast write path
- 273 ns point reads via direct shard bypass (no channel round-trip)
- 3.6 µs point writes via `FastWritePath` with group-commit WAL
- 3.8M reads/sec sustained throughput at the 1k-key dataset size
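The headline multiples above come straight from the benchmark table; a quick arithmetic check:

```rust
// Speedup = baseline latency / candidate latency (same units on both sides).
fn speedup(baseline: f64, candidate: f64) -> f64 {
    baseline / candidate
}

fn main() {
    // Point reads: SQLite 1,145 ns vs TensorDB 273 ns.
    let reads = speedup(1145.0, 273.0);
    assert!((reads - 4.2).abs() < 0.05); // ≈ 4.2x
    // Point writes: SQLite 41.9 µs vs TensorDB 3.6 µs.
    let writes = speedup(41.9, 3.6);
    assert!((writes - 11.6).abs() < 0.1); // ≈ 11.6x
    println!("reads {:.1}x, writes {:.1}x", reads, writes);
}
```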
Benchmarks use Criterion 0.5. Run them yourself:

```bash
cargo bench --bench comparative   # TensorDB vs SQLite
cargo bench --bench multi_engine  # TensorDB vs SQLite vs sled vs redb
cargo bench --bench basic         # Microbenchmarks
```

```bash
# Python
pip install tensordb

# Node.js
npm install tensordb

# Rust
cargo add tensordb

# Interactive CLI
cargo install tensordb-cli
cargo run -p tensordb-cli -- --path ./mydb

# PostgreSQL wire protocol server
cargo run -p tensordb-server -- --data-dir ./mydb --port 5433

# Docker
docker compose up -d
psql -h localhost -p 5433
```

```sql
-- Create a typed table
CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT NOT NULL, balance REAL);
INSERT INTO accounts (id, name, balance) VALUES (1, 'alice', 1000.0), (2, 'bob', 500.0);

-- Standard SQL: joins, window functions, CTEs
SELECT a.name, e.doc FROM accounts a
JOIN events e ON a.name = e.doc->>'user';
SELECT name, balance, ROW_NUMBER() OVER (ORDER BY balance DESC) AS rank FROM accounts;
WITH high_value AS (SELECT * FROM accounts WHERE balance > 500)
SELECT * FROM high_value;

-- Time travel: read state as of a specific commit or epoch
SELECT * FROM accounts AS OF 1;
SELECT * FROM accounts AS OF EPOCH 5;

-- Bitemporal: SQL:2011 temporal queries
SELECT * FROM accounts FOR SYSTEM_TIME AS OF 1;
SELECT * FROM accounts FOR APPLICATION_TIME AS OF 1000;

-- Transactions with savepoints
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
SAVEPOINT sp1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
ROLLBACK TO sp1;
COMMIT;

-- Full-text search
CREATE FULLTEXT INDEX idx_docs ON events (doc);
SELECT pk, HIGHLIGHT(doc, 'signup') FROM events WHERE MATCH(doc, 'signup');

-- Vector search: VECTOR(n) columns, HNSW/IVF-PQ indexes, k-NN via <-> operator
CREATE TABLE docs (id INTEGER PRIMARY KEY, title TEXT, embedding VECTOR(384));
CREATE VECTOR INDEX idx ON docs (embedding) USING HNSW WITH (m = 32, ef_construction = 200, metric = 'cosine');
INSERT INTO docs (id, title, embedding) VALUES (1, 'intro', '[0.1, 0.2, 0.3, ...]');
SELECT id, title, embedding <-> '[0.1, 0.2, ...]' AS distance FROM docs ORDER BY distance LIMIT 10;

-- Hybrid search: combine vector similarity with BM25 text relevance
SELECT id, HYBRID_SCORE(embedding <-> '[0.1, ...]', MATCH(body, 'quantum'), 0.7, 0.3) AS score
FROM docs WHERE MATCH(body, 'quantum') ORDER BY score DESC LIMIT 10;

-- Vector search table function
SELECT * FROM vector_search('docs', 'embedding', '[0.1, 0.2, ...]', 10);

-- Time-series
CREATE TIMESERIES TABLE metrics (ts TIMESTAMP, value REAL) WITH (bucket_size = '1h');
SELECT TIME_BUCKET('1h', ts), AVG(value) FROM metrics GROUP BY 1;

-- Data interchange
COPY accounts TO '/tmp/accounts.csv' FORMAT CSV;
SELECT * FROM read_parquet('data.parquet');

-- Incremental backup
BACKUP TO '/tmp/backup.json' SINCE EPOCH 3;
```

```bash
# Run built-in examples
cargo run --example quickstart   # Core features: SQL, time-travel, prepared statements
cargo run --example bitemporal   # Bitemporal ledger: AS OF + VALID AT queries
```

- Immutable Fact Ledger — Append-only WAL with CRC-framed records. Data is never overwritten.
- EOAC Transactions — Epoch-Ordered Append-Only Concurrency with a global epoch counter, `BEGIN`/`COMMIT`/`ROLLBACK`/`SAVEPOINT`.
- MVCC Snapshot Reads — Query any past state with `AS OF <commit_ts>` or `AS OF EPOCH <n>`.
- Point-in-Time Recovery — `SELECT ... AS OF EPOCH <n>` for cross-shard consistent snapshots.
- Incremental Backup — `BACKUP TO '<path>' SINCE EPOCH <n>` for delta exports.
- Bitemporal Filtering — SQL:2011 `SYSTEM_TIME` and `APPLICATION_TIME` temporal clauses.
- LSM Storage Engine — Memtable → SSTable (L0–L6) with bloom filters, prefix compression, mmap reads, LZ4 block compression.
- Block & Index Caching — LRU caches with configurable memory budgets, hit/miss tracking via `SHOW STATS`.
- Write Batch API — Atomic multi-key writes with a single WAL frame.
- Encryption at Rest — AES-256-GCM block-level encryption (`--features encryption`).
- Full SQL — DDL, DML, SELECT, JOINs (inner/left/right/cross), GROUP BY, HAVING, CTEs, subqueries, UNION/INTERSECT/EXCEPT, window functions, CASE, CAST, LIKE/ILIKE, transactions.
- 60+ built-in functions — String, numeric, date/time, aggregate, window, conditional, type conversion, vector search.
- Cost-based query planner — `PlanNode` tree with cost estimation, `EXPLAIN` and `EXPLAIN ANALYZE`.
- Prepared statements — Parse once, execute many with `$1, $2, ...` parameter binding.
- Temporal SQL — 7 SQL:2011 temporal clause variants for both system time and application time.
- Vectorized execution — Columnar `RecordBatch` engine with vectorized filter, project, aggregate, join, and sort.
- Full-Text Search — `CREATE FULLTEXT INDEX`, `MATCH()`, `HIGHLIGHT()`, BM25 ranking, multi-column with per-column boosting.
- Time-Series — `CREATE TIMESERIES TABLE`, `TIME_BUCKET()`, gap filling (`LOCF`, `INTERPOLATE`), `DELTA()`, `RATE()`.
- Vector Search — `VECTOR(n)` column type, HNSW and IVF-PQ indexes, `<->` distance operator, `vector_search()` table function, hybrid search (vector + BM25 via `HYBRID_SCORE`), temporal vector queries, cosine/Euclidean/dot-product distance, FP16/INT8 quantization.
- Event Sourcing — Aggregate projections, snapshot support, idempotency keys, cross-aggregate event queries.
- Schema Evolution — Migration manager with versioned SQL migrations, schema diff, rollback support.
- Change Data Capture — Prefix-filtered subscriptions, durable cursors, consumer groups with rebalancing.
- Data Interchange — `COPY TO/FROM` CSV, JSON, Parquet. Table functions: `read_csv()`, `read_json()`, `read_parquet()`.
- PostgreSQL Wire Protocol — `tensordb-server` crate accepts Postgres client connections via pgwire, with a `/health` HTTP endpoint on port+1.
- Authentication & RBAC — User management, role-based access control, table-level permissions, session management.
- Audit Log — Append-only audit trail for all DDL changes, auth events, policy changes, and GDPR erasures. Queryable via `SHOW AUDIT LOG` or `SELECT * FROM __audit_log`.
- Row-Level Security — `CREATE POLICY` for per-row access control based on session user, role, or arbitrary SQL predicates.
- GDPR Erasure — `FORGET KEY '<key>' FROM <table>` to cryptographically erase all temporal versions while preserving ledger structure.
- Connection Pooling — Configurable pool with warmup, idle eviction, and RAII connection guards.
- Structured Error Codes — Stable numeric codes (`T1001`–`T6002`) with categories (syntax, schema, constraint, execution, auth), suggestions, and position tracking.
- "Did You Mean?" Suggestions — Levenshtein-based fuzzy matching for misspelled table and column names.
- Per-Query Resource Limits — `SET QUERY_TIMEOUT` and `SET QUERY_MAX_MEMORY` to prevent runaway queries.
- Online DDL — `ALTER TABLE DROP COLUMN` and `RENAME COLUMN` without table locks or data rewriting.
- Plan Stability — `CREATE PLAN GUIDE` to pin query plans and prevent regressions.
- `SUGGEST INDEX` — Analyze a query and recommend optimal indexes based on WHERE/JOIN/ORDER BY columns.
- `VERIFY BACKUP` — Validate backup integrity without restoring.
- `VACUUM` — Tombstone cleanup with compaction scheduling via `SET COMPACTION_WINDOW`.
- Rust — Native embedded library (`tensordb-core`).
- Python — PyO3 bindings (`tensordb-python`) — `open()`, `put()`, `get()`, `sql()`.
- Node.js — napi-rs bindings (`tensordb-node`) — `open()`, `put()`, `get()`, `sql()`.
- Interactive CLI — TAB completion, persistent history, table/line/JSON output modes.
- Optional C++ Acceleration — `--features native` via `cxx` for Hasher, Compressor, BloomProbe.
- Optional io_uring — `--features io-uring` for Linux async I/O.
- Optional SIMD — `--features simd` for hardware-accelerated bloom probes and checksums (AVX2/NEON).
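Several of the features above are standard techniques under the hood. For instance, the "Did You Mean?" suggestions are described as Levenshtein-based; a generic sketch of that technique (not TensorDB's code):

```rust
// Classic single-row dynamic-programming edit distance.
fn levenshtein(a: &str, b: &str) -> usize {
    let (a, b): (Vec<char>, Vec<char>) = (a.chars().collect(), b.chars().collect());
    // dp[j] = edit distance between a[..i] and b[..j], rolled row by row.
    let mut dp: Vec<usize> = (0..=b.len()).collect();
    for i in 1..=a.len() {
        let mut prev = dp[0]; // dp[i-1][j-1]
        dp[0] = i;
        for j in 1..=b.len() {
            let cost = if a[i - 1] == b[j - 1] { 0 } else { 1 };
            let next = (dp[j] + 1).min(dp[j - 1] + 1).min(prev + cost);
            prev = dp[j];
            dp[j] = next;
        }
    }
    dp[b.len()]
}

/// Suggest the known identifier closest to `input`, if it is close enough.
/// The distance threshold of 2 is an illustrative choice, not TensorDB's.
fn did_you_mean<'a>(input: &str, known: &[&'a str]) -> Option<&'a str> {
    known
        .iter()
        .map(|k| (levenshtein(input, k), *k))
        .filter(|(d, _)| *d <= 2)
        .min_by_key(|(d, _)| *d)
        .map(|(_, k)| k)
}

fn main() {
    let tables = ["accounts", "events", "docs"];
    assert_eq!(did_you_mean("acounts", &tables), Some("accounts"));
    assert_eq!(did_you_mean("zzz", &tables), None);
    println!("suggestion ok");
}
```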
- Drop-in embedded database for any app that needs real SQL — no server process, no Docker, no network. Use it from Rust, Python, or Node.js. Like SQLite, but with 4x faster reads and built-in version history.
- Every write is preserved. Roll back to any previous state with a single query. Build version history, audit trails, or time-travel debugging into your app without extra bookkeeping.
- Store vectors alongside your regular data. Run semantic search, full-text search, and SQL queries in one database. No need to sync between a vector store, a search engine, and a relational DB.
- Microsecond-scale writes handle sensors, logs, metrics, and event streams at scale. The time-series engine adds bucketed aggregation, gap filling, and rate calculations out of the box.
- Ship a full-featured database as a library — no infrastructure to manage. Works on desktops, IoT gateways, edge nodes, and anywhere you need data processing without a network round-trip.
- Immutable append-only storage with bitemporal queries satisfies audit and compliance requirements. Reconstruct the exact state of any record at any point in time — system time and business time tracked separately.
TensorDB is organized around four core principles: immutable truth (the append-only ledger), epoch ordering (global epoch counter unifying transactions, MVCC, and recovery), temporal indexing (bitemporal metadata on every fact), and faceted queries (pluggable query planes over the same data).
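The epoch-ordering principle reduces to a single atomic counter that every commit, snapshot, and backup references. A hedged sketch (the `advance_epoch()` name comes from the design notes below, but this code is illustrative, not TensorDB's implementation):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// One global counter stamps every commit, so transactions, MVCC snapshots,
// PITR, and incremental backup all share the same ordering.
struct EpochClock {
    epoch: AtomicU64,
}

impl EpochClock {
    fn new() -> Self {
        EpochClock { epoch: AtomicU64::new(0) }
    }
    /// Bump the global epoch and return the new value (cf. `advance_epoch()`).
    fn advance(&self) -> u64 {
        self.epoch.fetch_add(1, Ordering::SeqCst) + 1
    }
    /// The epoch a new snapshot or backup would be taken at.
    fn current(&self) -> u64 {
        self.epoch.load(Ordering::SeqCst)
    }
}

fn main() {
    let clock = EpochClock::new();
    let commit_a = clock.advance(); // first commit lands in epoch 1
    let commit_b = clock.advance(); // second commit lands in epoch 2
    assert_eq!((commit_a, commit_b), (1, 2));
    // A reader pinned at epoch 1 sees commit_a but not commit_b.
    assert!(commit_a <= 1 && commit_b > 1);
    assert_eq!(clock.current(), 2);
    println!("epochs ok");
}
```

Because the same counter feeds `AS OF EPOCH`, `BACKUP ... SINCE EPOCH`, and recovery, one mechanism answers "what happened before epoch n?" everywhere.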
```mermaid
graph TB
    subgraph Client Layer
        CLI[Interactive Shell]
        API[Rust API<br/><code>db.sql(...)</code>]
        PY[Python Bindings<br/>PyO3]
        NODE[Node.js Bindings<br/>napi-rs]
        PG[pgwire Server<br/>PostgreSQL Protocol]
    end
    subgraph Query Engine
        Parser[SQL Parser<br/>60+ functions · CTEs · windows]
        Planner[Cost-Based Planner<br/>PlanNode tree · EXPLAIN ANALYZE]
        Executor[Query Executor<br/>scans · joins · aggregates · windows]
        VecEngine[Vectorized Engine<br/>columnar batches · RecordBatch]
    end
    subgraph Facet Layer
        RF[Relational Facet<br/>typed tables · views · indexes]
        FTS[Full-Text Search<br/>inverted index · BM25 · stemmer]
        TS[Time-Series<br/>time_bucket · gap fill · rate]
        VS[Vector Search<br/>HNSW · IVF-PQ · hybrid · temporal]
        ES[Event Sourcing<br/>aggregates · snapshots]
    end
    subgraph Shard Engine
        direction LR
        FW[Fast Write Path<br/>lock-free · 1.9µs]
        S0[Shard 0]
        S1[Shard 1]
        SN[Shard N]
    end
    CF[Change Feeds<br/>durable cursors · consumer groups]
    subgraph Storage Engine
        WAL[Write-Ahead Log<br/>CRC-framed · group commit · fdatasync]
        MT[Memtable<br/>sorted in-memory map]
        BC[Block & Index Cache<br/>LRU · configurable budgets]
        SST[SSTables<br/>bloom · LZ4 · mmap · zone maps]
        C[Multi-Level Compaction<br/>L0 → L1 → ... → L6]
    end
    MF[Manifest<br/>per-level file tracking]
    CLI --> Parser
    API --> Parser
    PY --> API
    NODE --> API
    PG --> Parser
    Parser --> Planner
    Planner --> Executor
    Executor --> VecEngine
    Executor --> RF
    RF --> FW
    FTS --> S0
    TS --> S0
    VS --> S0
    ES --> S0
    FW --> S0
    FW --> S1
    FW --> SN
    S0 --> CF
    S1 --> CF
    SN --> CF
    S0 --> WAL
    S1 --> WAL
    SN --> WAL
    WAL --> MT
    MT -->|flush| SST
    SST --> BC
    SST --> C
    S0 --> MF
    S1 --> MF
    SN --> MF
```
- Route — The key is hashed to a shard (`hash(key) % shard_count`).
- Fast Path — If `fast_write_enabled`, the lock-free `FastWritePath` writes directly to the shard's memtable via atomic operations (~1.9 µs). It falls back to the channel path when the memtable is full or subscribers are active.
- WAL — The group-commit `DurabilityThread` batches WAL records across shards, one `fdatasync` per flush cycle.
- Notify — Matching change feed subscribers receive the event (when active).
- Buffer — The entry is inserted into the in-memory memtable.
- Flush — When the memtable exceeds `memtable_max_bytes`, it is frozen and written as an LZ4-compressed SSTable.
- Compact — Multi-level compaction promotes SSTables through L0 → L1 → ... → L6 with size-budgeted thresholds. All temporal versions are preserved.
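The routing step can be sketched in a few lines. The hash function TensorDB actually uses is not specified in this document; `DefaultHasher` here is a stand-in:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Route: a key is mapped to a shard by hashing modulo the shard count.
fn route(key: &[u8], shard_count: usize) -> usize {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    (h.finish() as usize) % shard_count
}

fn main() {
    let shard_count = 4; // the default `shard_count`
    let shard = route(b"accounts/1", shard_count);
    assert!(shard < shard_count);
    // Routing is deterministic: the same key always lands on the same shard,
    // which is what makes the single-writer-per-shard design safe.
    assert_eq!(shard, route(b"accounts/1", shard_count));
    println!("key routed to shard {shard}");
}
```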
- Direct Bypass — `ShardReadHandle` reads directly from shared state — no channel round-trip (273 ns).
- Cache Check — LRU block and index caches serve hot data without disk I/O.
- Bloom Check — If the bloom filter says the key is absent, skip the SSTable.
- Memtable Scan — Check the active and immutable memtables for the latest version.
- Level Lookup — L0: search all files newest-first. L1+: binary search for the single overlapping file per level.
- Temporal Filter — Apply `AS OF` (system time) and `VALID AT` (business time) predicates.
- Merge — Return the most recent version satisfying all filters.
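The tail of the read path, temporal filtering followed by a merge, amounts to picking the newest version visible to the snapshot. An illustrative sketch, not TensorDB's storage format:

```rust
// Among all stored versions of a key (from the memtable and several SSTable
// levels), return the newest one whose commit timestamp is visible.
#[derive(Debug, PartialEq)]
struct Version {
    commit_ts: u64,
    value: &'static str,
}

fn read_as_of(versions: &[Version], as_of: u64) -> Option<&str> {
    versions
        .iter()
        .filter(|v| v.commit_ts <= as_of) // temporal filter (AS OF)
        .max_by_key(|v| v.commit_ts)      // merge: newest visible version wins
        .map(|v| v.value)
}

fn main() {
    let history = vec![
        Version { commit_ts: 1, value: "alice" },
        Version { commit_ts: 5, value: "alice-renamed" },
    ];
    assert_eq!(read_as_of(&history, 1), Some("alice"));
    assert_eq!(read_as_of(&history, 9), Some("alice-renamed"));
    assert_eq!(read_as_of(&history, 0), None); // before the first write
    println!("snapshot reads ok");
}
```

Because old versions are never deleted, every historical `as_of` keeps answering the same way forever, which is what makes MVCC snapshot reads and PITR the same operation.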
| Decision | Rationale |
|---|---|
| Append-only writes | Immutability simplifies recovery, enables time travel, eliminates in-place update corruption |
| Lock-free fast write path | Bypasses crossbeam channel for 20x improvement over channel-based writes |
| Single writer per shard | Avoids fine-grained locking while allowing parallel writes across shards |
| Group-commit WAL | One fdatasync per batch interval across all shards reduces I/O overhead |
| Bitemporal timestamps | Separates "when recorded" from "when true" — required for audit and compliance |
| Multi-level compaction | Size-budgeted leveling reduces read amplification while preserving all temporal versions |
| Direct shard reads | ShardReadHandle bypasses the actor channel entirely for sub-microsecond reads |
| Dual schema modes | JSON documents for flexibility; typed columns for structure and performance |
| Epoch-ordered concurrency | Global epoch counter unifies transactions, PITR, and incremental backup under one mechanism |
| Cross-shard epoch sync | advance_epoch() bumps all shard commit counters for consistent cross-shard point-in-time snapshots |
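The group-commit rationale in the table above is easiest to see with a toy model that counts syncs instead of calling `fdatasync` (a sketch only; real durability code writes to disk):

```rust
// Writes from all shards are buffered and flushed with a single sync per
// batch, amortizing the most expensive I/O operation across many records.
struct GroupCommitWal {
    buffer: Vec<Vec<u8>>,
    batch_size: usize,
    syncs: usize, // stand-in for fdatasync calls
}

impl GroupCommitWal {
    fn new(batch_size: usize) -> Self {
        GroupCommitWal { buffer: Vec::new(), batch_size, syncs: 0 }
    }
    fn append(&mut self, record: Vec<u8>) {
        self.buffer.push(record);
        if self.buffer.len() >= self.batch_size {
            self.flush();
        }
    }
    fn flush(&mut self) {
        if !self.buffer.is_empty() {
            self.syncs += 1; // one sync covers the whole batch
            self.buffer.clear();
        }
    }
}

fn main() {
    let mut wal = GroupCommitWal::new(128); // cf. wal_fsync_every_n_records = 128
    for i in 0u32..256 {
        wal.append(i.to_le_bytes().to_vec());
    }
    // 256 records, but only 2 syncs: the I/O cost is amortized 128x.
    assert_eq!(wal.syncs, 2);
    println!("256 appends, {} syncs", wal.syncs);
}
```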
String Functions (19)
UPPER, LOWER, LENGTH, SUBSTR/SUBSTRING, TRIM, LTRIM, RTRIM, REPLACE, CONCAT, CONCAT_WS, LEFT, RIGHT, LPAD, RPAD, REVERSE, SPLIT_PART, REPEAT, POSITION/STRPOS, INITCAP
Numeric Functions (13)
ABS, ROUND, CEIL/CEILING, FLOOR, MOD, POWER/POW, SQRT, LOG/LOG10, LN, EXP, SIGN, RANDOM, PI
Date/Time Functions (5)
NOW/CURRENT_TIMESTAMP, EPOCH, EXTRACT/DATE_PART, DATE_TRUNC, TO_CHAR
Aggregate Functions (10)
COUNT(*)/COUNT(col)/COUNT(DISTINCT col), SUM, AVG, MIN, MAX, STRING_AGG/GROUP_CONCAT, STDDEV_POP, STDDEV_SAMP, VAR_POP, VAR_SAMP
Window Functions (5)
ROW_NUMBER(), RANK(), DENSE_RANK(), LEAD(), LAG()
Time-Series Functions (6)
TIME_BUCKET, TIME_BUCKET_GAPFILL, LOCF, INTERPOLATE, DELTA, RATE
Full-Text Search Functions (2)
MATCH(column, query), HIGHLIGHT(column, query)
Vector Search Functions (5)
VECTOR_DISTANCE(v1, v2, metric), COSINE_SIMILARITY(v1, v2), VECTOR_NORM(v), VECTOR_DIMS(v), HYBRID_SCORE(vector_dist, bm25_score, vector_weight, text_weight)
Conditional & Utility (7)
COALESCE, NULLIF, GREATEST, LEAST, IF/IIF, TYPEOF, CAST
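`TIME_BUCKET` is integer truncation at heart. A sketch in plain Rust, assuming buckets align at epoch zero (the common convention, not confirmed by this document):

```rust
// Truncate a timestamp down to the start of its bucket; GROUP BY the bucket
// then aggregates once per interval.
fn time_bucket(bucket_secs: u64, ts_secs: u64) -> u64 {
    ts_secs - ts_secs % bucket_secs
}

fn main() {
    let hour = 3600;
    // 10:59:59 and 10:00:01 fall into the same 1-hour bucket...
    assert_eq!(time_bucket(hour, 10 * hour + 3599), 10 * hour);
    assert_eq!(time_bucket(hour, 10 * hour + 1), 10 * hour);
    // ...and 11:00:00 starts the next one.
    assert_eq!(time_bucket(hour, 11 * hour), 11 * hour);
    println!("bucketing ok");
}
```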
TensorDB is configured through the `Config` struct. All parameters have sensible defaults.
All Configuration Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `shard_count` | `usize` | 4 | Number of write shards |
| `wal_fsync_every_n_records` | `usize` | 128 | WAL fsync frequency |
| `memtable_max_bytes` | `usize` | 4 MB | Max memtable size before flush |
| `sstable_block_bytes` | `usize` | 16 KB | SSTable block size |
| `sstable_max_file_bytes` | `u64` | 64 MB | Max SSTable file size |
| `bloom_bits_per_key` | `usize` | 10 | Bloom filter bits per key |
| `block_cache_bytes` | `usize` | 32 MB | Block cache memory budget |
| `index_cache_entries` | `usize` | 1024 | Index cache entry count |
| `compaction_l0_threshold` | `usize` | 8 | L0 SSTable count before compaction |
| `compaction_l1_target_bytes` | `u64` | 10 MB | L1 target size |
| `compaction_size_ratio` | `u64` | 10 | Level size ratio multiplier |
| `compaction_max_levels` | `usize` | 7 | Maximum compaction levels (L0–L6) |
| `fast_write_enabled` | `bool` | `true` | Enable lock-free fast write path |
| `fast_write_wal_batch_interval_us` | `u64` | 1000 | WAL group commit batch interval (µs) |
| `slow_query_threshold_us` | `u64` | 10_000 | Slow query log threshold (µs) |
| `strict_mode` | `bool` | `false` | Fail on silent type coercion |
| `compaction_window_start_hour` | `Option<u8>` | `None` | Compaction window start (0–23) |
| `compaction_window_end_hour` | `Option<u8>` | `None` | Compaction window end (0–23) |
| `wal_archive_enabled` | `bool` | `false` | Enable WAL archival |
| `wal_archive_dir` | `Option<String>` | `None` | WAL archive directory |
| `wal_retention_count` | `usize` | 10 | Max archived WAL files |
| `wal_max_bytes` | `Option<u64>` | `None` | Force flush when WAL exceeds size |
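The `bloom_bits_per_key = 10` default matches standard Bloom-filter math: with m/n = 10 bits per key and the optimal hash count k = (m/n)·ln 2 ≈ 7, the classic estimate p = (1 − e^(−k·n/m))^k gives roughly a 1% false-positive rate. The document does not state which k TensorDB uses, so the optimal k is an assumption here:

```rust
// Expected false-positive rate of a Bloom filter at a given bits-per-key
// budget, assuming the optimal (rounded) number of hash functions.
fn bloom_fp_rate(bits_per_key: f64) -> f64 {
    let k = (bits_per_key * std::f64::consts::LN_2).round(); // optimal k, rounded
    (1.0 - (-k / bits_per_key).exp()).powf(k)
}

fn main() {
    let p = bloom_fp_rate(10.0);
    // ≈ 0.0082: about 1 in 120 absent-key lookups still touches the SSTable.
    assert!(p > 0.005 && p < 0.012);
    println!("expected false-positive rate at 10 bits/key: {:.4}", p);
}
```

Raising `bloom_bits_per_key` trades memory for fewer wasted SSTable probes; 10 bits/key is the usual sweet spot (RocksDB and LevelDB use the same default).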
```
tensordb/
├── crates/
│   ├── tensordb-core/        # Database engine (main crate, ~31k lines)
│   │   └── src/
│   │       ├── ai/           # Anomaly detection, learned cost model
│   │       ├── engine/       # Database, shard, fast write path, change feeds
│   │       ├── storage/      # SSTable, WAL, compaction, levels, cache, columnar, group WAL
│   │       ├── sql/          # Parser, executor, evaluator, planner, vectorized engine
│   │       ├── facet/        # Relational, FTS, time-series, vector search, event sourcing, schema evolution
│   │       ├── cluster/      # Raft consensus, replication, scaling, membership
│   │       ├── auth/         # Authentication, RBAC, session management
│   │       ├── cdc/          # Change data capture, durable cursors, consumer groups
│   │       ├── io/           # io_uring async I/O (optional)
│   │       ├── ledger/       # Key encoding with bitemporal metadata
│   │       └── util/         # Varint encoding, metrics, time utilities
│   ├── tensordb-cli/         # Interactive shell and CLI commands
│   ├── tensordb-server/      # PostgreSQL wire protocol server (pgwire)
│   ├── tensordb-native/      # Optional C++ acceleration (cxx)
│   ├── tensordb-distributed/ # Horizontal scaling: routing, 2PC, rebalancing
│   ├── tensordb-python/      # Python bindings (PyO3 / maturin)
│   └── tensordb-node/        # Node.js bindings (napi-rs)
├── tests/                    # 800+ tests across 51 suites
├── benches/                  # Criterion benchmarks (basic, comparative, multi-engine)
├── examples/                 # quickstart.rs, bitemporal.rs, fastapi, express
├── docs/                     # Interactive documentation site (Starlight/Astro)
├── scripts/                  # Benchmark matrix, overnight burn-in
├── Dockerfile                # Multi-stage Docker image for tensordb-server
├── docker-compose.yml        # Docker Compose example with volume and healthcheck
└── .github/workflows/        # CI, crates.io publish, Docker image publish
```
```bash
# Pure Rust (default)
cargo build
cargo test --workspace --all-targets

# With C++ acceleration
cargo test --workspace --all-targets --features native

# With SIMD-accelerated bloom probes and checksums
cargo test --features simd

# With io_uring async I/O (Linux only)
cargo test --features io-uring

# With Parquet support (Apache Arrow)
cargo test --features parquet

# Lint and format (CI enforces these)
cargo fmt --all --check
cargo clippy --workspace --all-targets -- -D warnings

# Run benchmarks
cargo bench --bench comparative
cargo bench --bench multi_engine
cargo bench --bench basic

# Build Python bindings
cd crates/tensordb-python && maturin develop

# Build Node.js bindings
cd crates/tensordb-node && npm run build

# Build documentation site
cd docs && npm install && npm run build
```

Interactive Documentation Site — 58 pages with live SQL playground, animated architecture diagrams, performance comparisons, and an interactive configuration explorer.

```bash
cd docs && npm install && npm run dev
# Opens at http://localhost:4321
```

| Document | Description |
|---|---|
| docs/ | Interactive documentation site (Starlight/Astro) |
| design.md | Internal architecture, data model, storage format |
| perf.md | Tuning knobs, benchmark methodology, optimization notes |
| TEST_PLAN.md | Correctness, recovery, temporal, and soak test strategy |
| CONTRIBUTING.md | Development setup and contribution guidelines |
| CHANGELOG.md | Release history |
On every push and PR to main:

- test-rust — `cargo fmt --check` → `cargo clippy -D warnings` → `cargo test --workspace`
- test-native — C++ toolchain → `cargo clippy --features native` → `cargo test --features native`

On release:

- publish-crates — Publish `tensordb-core` and `tensordb` to crates.io
- publish-docker — Build and push a multi-arch Docker image to `ghcr.io`
Strategy: Close SQL gaps → Speak Postgres fluently → Make it fast → Harden for enterprise → Scale out → Own the niche.
Informed by a comprehensive enterprise evaluation testing TensorDB against Oracle, PostgreSQL, Redis, and SQLite.
- v0.1–v0.10 — Core engine, SQL, storage, query planner, prepared statements
- v0.11–v0.18 — Temporal SQL:2011, FTS, time-series, pgwire, data interchange, vectorized execution
- v0.19–v0.26 — Columnar storage, CDC, event sourcing, auth/RBAC, connection pooling, monitoring, schema evolution
- v0.27–v0.28 — Replication foundations, fast write engine, secondary indexes, DECIMAL type
- v0.29 — EOAC transactions, PITR, incremental backup, encryption at rest
- v0.30 — Vector search (HNSW/IVF-PQ, temporal vectors, hybrid search), horizontal scaling, Python/Node.js
- v0.31 — Observability (8 SHOW commands, /health endpoint, cache tracking)
- v0.32 — Structured errors (T1xxx–T6xxx), "Did you mean?" suggestions, audit log, RLS, GDPR erasure, online DDL, plan guides, VACUUM, compaction scheduling, WAL management
- v0.33 — SQL completeness (multi-value INSERT, subqueries, OFFSET, IF EXISTS, FULL OUTER JOIN, upsert, RETURNING, persistent sessions)
- v0.34–v0.35 — Advanced SQL (recursive CTEs, foreign keys, materialized views, generated columns, triggers, UDFs, native date/time, JSON/JSONB)
- v0.36–v0.38 — Performance (query parallelism, batch writes, external merge sort, expression compilation, Zstd compression)
- v0.39–v0.41 — Enterprise security (TLS/mTLS, encryption key rotation, column-level encryption, audit log tamper detection)
- v0.42–v0.45 — Distributed & cloud (Raft consensus, object store backend, WAL replication, C FFI)
- v0.46+ — Category differentiation (learned cost model, anomaly detection, graph queries)
Remaining for v1.0: SQL completeness, TLS, encryption key rotation, a stable on-disk format, Jepsen testing, TPC-H/YCSB benchmarks, and published packages on crates.io/PyPI/npm.
See the full roadmap for details and CHANGELOG.md for release history.
We welcome contributions. Please read CONTRIBUTING.md before opening a pull request.
TensorDB is licensed under the PolyForm Noncommercial License 1.0.0. You may use it freely for personal, educational, research, and non-commercial purposes. Commercial use requires a paid license — contact walebadr@users.noreply.github.com.