Tinychain Binary Object Notation (TBON) is a compact, versatile, stream-friendly binary serialization format.
- See `CHANGELOG.md` for behavior and dependency changes.
- TBON is a binary format (not JSON) and is designed to be portable across architectures.
- Duplicate map keys are not rejected; behavior depends on the target type.
- Decoding enforces a maximum nesting depth of 1024 by default; use `tbon::de::decode_with_max_depth`/`tbon::de::try_decode_with_max_depth` (and `tbon::de::read_from_with_max_depth` with `tokio-io`) to override.
- There are no explicit size limits; hostile inputs may require significant CPU/memory.
- Decoding is strict about consuming the entire input stream: trailing bytes after the first value are treated as an error. To encode multiple values, wrap them in a tuple/list/map.
- Default `destream` impl conventions used by this codec:
  - `i128`/`u128` encode as strings; decode accepts either strings or in-range integer tokens
  - `Duration` encodes as `(secs, nanos)` with `nanos < 1_000_000_000`
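The depth limit above matters because a recursive decoder consumes call stack proportional to input nesting, so a hostile input of deeply nested lists could otherwise overflow the stack. A minimal, hypothetical sketch of such a guard in plain Rust (not tbon's actual decoder) over a toy bracket grammar:

```rust
/// Hypothetical sketch: a recursive parser that rejects input nested
/// deeper than `max_depth`, the same kind of guard tbon's decoder applies.
fn check_depth(input: &[u8], max_depth: usize) -> Result<(), String> {
    fn recurse(input: &[u8], pos: &mut usize, depth: usize, max: usize) -> Result<(), String> {
        if depth > max {
            return Err(format!("nesting exceeds max depth {max}"));
        }
        while *pos < input.len() {
            match input[*pos] {
                b'[' => {
                    // Entering a nested value: recurse with depth + 1.
                    *pos += 1;
                    recurse(input, pos, depth + 1, max)?;
                }
                b']' => {
                    // End of the current nested value.
                    *pos += 1;
                    return Ok(());
                }
                _ => *pos += 1,
            }
        }
        Ok(())
    }
    recurse(input, &mut 0, 0, max_depth)
}

fn main() {
    assert!(check_depth(b"[[x]]", 4).is_ok());
    assert!(check_depth(b"[[[[[x]]]]]", 4).is_err());
    println!("depth guard ok");
}
```

The check costs one comparison per nesting level, which is why raising the limit via the `*_with_max_depth` variants is cheap when you trust the input.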
Example:

```rust
let expected = ("one".to_string(), 2.0, vec![3, 4], Bytes::from(vec![5u8]));
let stream = tbon::en::encode(&expected).unwrap();
let actual = tbon::de::try_decode((), stream).await.unwrap();
assert_eq!(expected, actual);
```

Example (multiple values):
```rust
let expected = vec![
    ("one".to_string(), 2.0),
    ("two".to_string(), 3.0),
];
let stream = tbon::en::encode(&expected).unwrap();
let actual = tbon::de::try_decode((), stream).await.unwrap();
assert_eq!(expected, actual);
```

To inspect decode performance sensitivity to input chunk size:

```shell
cargo test --test bench_chunk_size -- --ignored --nocapture
```
For more stable measurements (and throughput reporting):

```shell
cargo bench --bench chunk_size
```
If your transport does one write per stream item, buffering encoder output can reduce chunk count:

```rust
tbon::en::encode_buffered(value, 1024)
```
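To illustrate why buffering helps, here is a hypothetical sketch of the coalescing idea in plain Rust: many small chunks are merged into writes of at most `cap` bytes. This is an assumption about the general technique, not tbon's actual `encode_buffered` implementation:

```rust
/// Hypothetical sketch: coalesce an iterator of small byte chunks into
/// chunks of at most `cap` bytes, cutting the number of downstream writes.
fn coalesce(chunks: impl IntoIterator<Item = Vec<u8>>, cap: usize) -> Vec<Vec<u8>> {
    let mut out = Vec::new();
    let mut buf: Vec<u8> = Vec::with_capacity(cap);
    for chunk in chunks {
        for byte in chunk {
            buf.push(byte);
            if buf.len() == cap {
                // Flush a full buffer as one write-sized chunk.
                out.push(std::mem::take(&mut buf));
            }
        }
    }
    if !buf.is_empty() {
        out.push(buf); // Flush the partial remainder.
    }
    out
}

fn main() {
    // Ten 3-byte chunks become two writes at a 16-byte cap.
    let input = (0..10).map(|_| vec![0u8; 3]);
    let buffered = coalesce(input, 16);
    assert_eq!(buffered.len(), 2);
    assert_eq!(buffered[0].len(), 16);
    assert_eq!(buffered[1].len(), 14);
    println!("coalesce ok");
}
```

With a transport that issues one syscall per stream item, collapsing thirty tiny items into two writes is where the savings come from; the `1024` argument above plays the role of `cap`.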