notepack is a compact binary format for nostr notes, with a specification and a reference implementation in Rust.
It ships with:
- 📦 A Rust crate, for embedding notepack logic into apps, relays, or tooling.
- 💻 A CLI tool, for piping JSON ↔ `notepack_…` strings in scripts.

📖 See SPEC.md for the full format specification.
- Copy-pasteable: strings start with `notepack_` followed by Base64 (RFC 4648, no padding).
- Compact: every integer is a ULEB128 varint, and hex strings in tags are encoded as raw bytes.
- 50% size reduction: large events such as contact lists shrink to about half their JSON size.
- Simple: so simple it's a candidate for nostr's canonical binary representation.
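To make the "every integer is a ULEB128 varint" point concrete, here is a minimal sketch of the encoding (standard unsigned LEB128; the crate's actual implementation lives in varint.rs and may differ, and the function names here are mine, not the crate's):

```rust
// Minimal ULEB128 (unsigned little-endian base-128) varint codec.
// Illustrative only; not the crate's varint.rs.

fn uleb128_encode(mut value: u64, out: &mut Vec<u8>) {
    loop {
        let mut byte = (value & 0x7f) as u8; // low 7 bits
        value >>= 7;
        if value != 0 {
            byte |= 0x80; // continuation bit: more bytes follow
        }
        out.push(byte);
        if value == 0 {
            break;
        }
    }
}

fn uleb128_decode(bytes: &[u8]) -> Option<(u64, usize)> {
    let (mut value, mut shift) = (0u64, 0);
    for (i, &byte) in bytes.iter().enumerate() {
        value |= u64::from(byte & 0x7f) << shift;
        if byte & 0x80 == 0 {
            return Some((value, i + 1)); // (value, bytes consumed)
        }
        shift += 7;
    }
    None // input ended mid-varint
}

fn main() {
    let mut buf = Vec::new();
    uleb128_encode(1753900182, &mut buf); // a typical created_at
    assert_eq!(buf.len(), 5);
    assert_eq!(uleb128_decode(&buf), Some((1753900182, 5)));
    println!("{} -> {} bytes", 1753900182, buf.len());
}
```

Small values like kinds fit in one byte, while second-resolution timestamps take five; that is where much of the size win over ASCII decimal JSON comes from.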
$ cargo bench
Contact list note with 1022 tags:
- notepack-decode: 2GB/s, 15 microseconds
- json-decode: 711MB/s, 100 microseconds
The numbers add up when you are decoding lots of notes: 1000 such notes would take about 100 ms with JSON versus 15 ms with notepack.
If you skip tag iteration, notepack reaches up to 1 TB/s, at about 30 nanoseconds per note. That's only 0.03 ms for 1000 notes if you're not verifying and just want to check a few fields as they stream in. That comparison isn't entirely fair, though, since you'll likely need to iterate the tags to verify the note.
I have lots of hacks in nostrdb for incremental JSON parsing to support note de-duplication and note rejection. With notepack you can read the ID and skip a duplicate in under 15 microseconds.
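Why that skip is so cheap: the id sits at a fixed offset in the packed bytes (byte 1, per the fast-accessor table further down), so a duplicate check never touches the varint-encoded fields. A hedged sketch over raw bytes, assuming a single leading version byte; `note_id` is a hypothetical helper, not the crate's API:

```rust
// Hedged sketch of O(1) de-duplication on packed notepack bytes.
// Assumption: one leading version byte, then the 32-byte id at offset 1
// (matching the fixed-offset accessor table). `note_id` is a
// hypothetical helper, not part of the crate's API.
use std::collections::HashSet;

fn note_id(packed: &[u8]) -> Option<[u8; 32]> {
    packed.get(1..33)?.try_into().ok()
}

fn main() {
    let mut seen: HashSet<[u8; 32]> = HashSet::new();

    // Two fake packed notes with the same id (rest of the note elided).
    let mut note = vec![0x01]; // assumed version byte
    note.extend_from_slice(&[0xaa; 32]); // 32-byte id
    let duplicate = note.clone();

    for packed in [&note, &duplicate] {
        let id = note_id(packed).expect("truncated note");
        if !seen.insert(id) {
            println!("duplicate, skipping");
        }
    }
    assert_eq!(seen.len(), 1);
}
```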
$ notepack <<<'{"id": "f1e7bc2a9756453fcc0e80ecf62183fa95b9a1278a01281dbc310b6777320e80","pubkey": "7fe437db5884ee013f701a75f8d1a84ecb434e997f2a31411685551ffff1b841","created_at": 1753900182,"kind": 1,"tags": [],"content": "hi","sig": "75507f84d78211a68f2f964221f5587aa957a66c1941d01125caa07b9aabdf5a98c3e63d1fe1e307cbf01b74b0a1b95ffe636eb6746c00167e0d48e5b11032d5"}'
notepack_AfHnvCqXVkU/zA6A7PYhg/qVuaEnigEoHbwxC2d3Mg6Af+Q321iE7gE/cBp1+NGoTstDTpl/KjFBFoVVH//xuEF1UH+E14IRpo8vlkIh9Vh6qVembBlB0BElyqB7mqvfWpjD5j0f4eMHy/AbdLChuV/+Y262dGwAFn4NSOWxEDLVlsmpxAYBAmhpAA
- json string: 363 bytes
- notepack string: 124 bytes raw, 196 bytes base64-encoded
For large contact lists, you can crunch them down from 74kb to about 36kb.
use notepack::{NoteBuf, pack_note_to_string};
let note = NoteBuf {
id: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa".into(),
pubkey: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb".into(),
created_at: 1753898766,
kind: 1,
tags: vec![vec!["tag".into(), "value".into()]],
content: "Hello, world!".into(),
sig: "cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc".into(),
};
let encoded = pack_note_to_string(&note).unwrap();
println!("{encoded}"); // => notepack_AAECAw...

When you already have binary data (from crypto libraries or databases), use NoteBinary to skip hex decoding:
use notepack::NoteBinary;
let id = [0xaa; 32];
let pubkey = [0xbb; 32];
let sig = [0xcc; 64];
let tags = vec![vec!["t".into(), "nostr".into()]];
let note = NoteBinary {
id: &id,
pubkey: &pubkey,
sig: &sig,
created_at: 1720000000,
kind: 1,
tags: &tags,
content: "Hello, Nostr!",
};
let bytes = note.pack(); // Returns Vec<u8>
// Or reuse a buffer:
// let mut buf = Vec::new();
// note.pack_into(&mut buf);

To decode a notepack string and iterate its fields, use NoteParser:

use notepack::{NoteParser, ParsedField};
let b64 = "notepack_..."; // from wire
let bytes = NoteParser::decode(b64).unwrap();
let parser = NoteParser::new(&bytes);
for field in parser {
match field.unwrap() {
ParsedField::Id(id) => println!("id: {}", hex::encode(id)),
ParsedField::Content(c) => println!("content: {}", c),
_ => {}
}
}

For relay workloads that filter millions of events, read specific fields without full deserialization:
use notepack::NoteParser;
// Filter events by kind and author without parsing tags/content
for event_bytes in database.scan() {
let parser = NoteParser::new(event_bytes);
// O(1) access to fixed-offset fields
let pubkey = parser.read_pubkey().unwrap();
let kind = parser.read_kind().unwrap();
if kind == 1 && pubkey == &target_pubkey {
// Only deserialize matching events
let note = parser.into_note().unwrap();
results.push(note);
}
}

Available fast accessors:
- `read_id()` - O(1), fixed offset at byte 1
- `read_pubkey()` - O(1), fixed offset at byte 33
- `read_sig()` - O(1), fixed offset at byte 65
- `read_created_at()` - parses 1 varint
- `read_kind()` - parses 2 varints
- `read_kind_and_pubkey()` - combined for the common filter pattern
- `read_created_at_and_kind()` - combined for time+type filtering
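As a sketch of what "parses 2 varints" means, here is how a `read_created_at_and_kind`-style accessor could work over raw bytes, assuming the layout the offsets above imply (fixed fields through byte 129, then created_at and kind as ULEB128 varints). This is illustrative, not the crate's code, and `created_at_and_kind` is a hypothetical helper:

```rust
// Hedged sketch of a read_created_at_and_kind-style accessor over raw
// bytes. Assumption: the fixed fields (version byte, id, pubkey, sig)
// occupy bytes 0..129, and created_at then kind follow as ULEB128
// varints. Illustrative only; not the crate's API.

fn read_varint(bytes: &[u8]) -> Option<(u64, usize)> {
    let (mut value, mut shift) = (0u64, 0);
    for (i, &b) in bytes.iter().enumerate() {
        value |= u64::from(b & 0x7f) << shift;
        if b & 0x80 == 0 {
            return Some((value, i + 1)); // (value, bytes consumed)
        }
        shift += 7;
    }
    None // truncated varint
}

fn created_at_and_kind(packed: &[u8]) -> Option<(u64, u64)> {
    let tail = packed.get(129..)?;               // skip the fixed fields
    let (created_at, used) = read_varint(tail)?; // 1st varint
    let (kind, _) = read_varint(&tail[used..])?; // 2nd varint
    Some((created_at, kind))
}

fn main() {
    // Fake packed note: 129 filler bytes, then created_at and kind.
    let mut packed = vec![0u8; 129];
    packed.extend_from_slice(&[0x96, 0xc9, 0xa9, 0xc4, 0x06]); // 1753900182
    packed.push(0x01); // kind 1
    assert_eq!(created_at_and_kind(&packed), Some((1753900182, 1)));
}
```

No allocation and no tag or content scanning happens on this path, which is why the combined accessors suit hot filter loops.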
The CLI binary is also called `notepack`.
echo '{"id":"...","pubkey":"...","created_at":123,"kind":1,"tags":[],"content":"Hi","sig":"..."}' \
  | notepack

echo 'notepack_AAECA...' | notepack

src
├── SPEC.md       # Full binary format spec
├── error.rs      # Unified error type for encoding/decoding
├── lib.rs        # Crate entrypoint
├── main.rs       # CLI tool: JSON ↔ notepack
├── note.rs       # `Note` struct (Nostr event model)
├── parser.rs     # Streaming `NoteParser`
├── stringtype.rs # String vs raw byte tags
└── varint.rs     # LEB128 varint helpers
MIT: do whatever you want, but attribution is appreciated.
This repo includes a cargo-fuzz setup under fuzz/ with targets that stress the decoder/parser
with arbitrary event content.
cargo install cargo-fuzz
rustup toolchain install nightly
rustup component add llvm-tools-preview --toolchain nightly

cd fuzz
cargo +nightly fuzz run notepack_parser

You can also fuzz Base64 decode/prefix handling:
cd fuzz
cargo +nightly fuzz run notepack_decode_string

And you can fuzz encoding (structured NoteBuf generation) + encode/decode roundtrips:
cd fuzz
cargo +nightly fuzz run notepack_encoder