A JSON library for Elixir powered by Rust NIFs, designed as a drop-in replacement for Jason.
The Problem: JSON encoding in Elixir can be memory-intensive. When encoding large data structures, Jason (and other pure-Elixir encoders) create many intermediate binary allocations that pressure the garbage collector. For high-throughput applications processing large JSON payloads, this memory overhead becomes significant.
Why not existing Rust JSON NIFs? After OTP 24, Erlang's binary handling improved significantly, closing the performance gap between NIFs and pure-Elixir implementations. Libraries like jiffy and the original jsonrs struggled to outperform Jason on modern BEAM versions. Additionally, the original jsonrs is incompatible with Rustler 0.37+, which is required by many other packages.
RustyJson's approach: Rather than trying to beat Jason on speed alone, RustyJson focuses on:
- Lower memory usage during encoding (2-4x less BEAM memory for large payloads)
- Reduced BEAM scheduler load (100-2000x fewer reductions - work happens in native code)
- Faster encoding/decoding (2-3x faster for medium/large data)
- Full Jason API compatibility as a true drop-in replacement
- Modern Rustler 0.37+ support for compatibility with the ecosystem
def deps do
[{:rustyjson, "~> 0.1"}]
end

Pre-built binaries are provided via Rustler Precompiled. To build from source, set FORCE_RUSTYJSON_BUILD=true.
RustyJson implements the same API as Jason:
# These work identically to Jason
RustyJson.encode(term) # => {:ok, json} | {:error, reason}
RustyJson.encode!(term) # => json | raises
RustyJson.decode(json) # => {:ok, term} | {:error, reason}
RustyJson.decode!(json) # => term | raises
# Phoenix interface
RustyJson.encode_to_iodata(term)
RustyJson.encode_to_iodata!(term)
# Options match Jason
RustyJson.encode!(data, pretty: true)
RustyJson.decode!(json, keys: :atoms)

# config/config.exs
config :phoenix, :json_library, RustyJson

Find/replace Jason → RustyJson in your codebase:
# Before
@derive {Jason.Encoder, only: [:name, :email]}
Jason.encode!(data)
Jason.Fragment.new(json)
# After
@derive {RustyJson.Encoder, only: [:name, :email]}
RustyJson.encode!(data)
RustyJson.Fragment.new(json)

Inject pre-encoded JSON directly:
fragment = RustyJson.Fragment.new(~s({"pre":"encoded"}))
RustyJson.encode!(%{data: fragment})
# => {"data":{"pre":"encoded"}}

Pretty-print or minify JSON strings:
RustyJson.Formatter.pretty_print(json_string)
RustyJson.Formatter.minify(json_string)

All benchmarks on Apple Silicon M1. RustyJson's advantage grows with payload size.

Encoding:
| Dataset | RustyJson | Jason | Speed | Memory |
|---|---|---|---|---|
| Settlement report (10 MB) | 24 ms | 131 ms | 5.5x faster | 2-3x less |
| canada.json (2.1 MB) | 6 ms | 18 ms | 3x faster | 2-3x less |
| twitter.json (617 KB) | 1.2 ms | 3.5 ms | 2.9x faster | similar |
Decoding:
| Dataset | RustyJson | Jason | Speed |
|---|---|---|---|
| Settlement report (10 MB) | 61 ms | 152 ms | 2.5x faster |
| canada.json (2.1 MB) | 8 ms | 29 ms | 3.5x faster |
Both libraries produce identical Elixir data structures, so memory usage is similar for decoding.
# Reductions (BEAM work units) for encoding 10 MB settlement report:
RustyJson.encode!(data) # 404 reductions
Jason.encode!(data)     # 11,570,847 reductions (RustyJson uses ~28,000x fewer)

The real benefit is reduced BEAM scheduler load: JSON processing happens in native code, freeing your schedulers for other work.
- Best for: Large payloads (1MB+), API responses, data exports
- Equal to Jason: Small payloads (<1KB) due to NIF call overhead
- Biggest wins: Encoding large, complex data structures
See docs/BENCHMARKS.md for detailed methodology.
These types are handled natively in Rust without protocol overhead:
| Type | JSON Output |
|---|---|
| DateTime | "2024-01-15T14:30:00Z" |
| NaiveDateTime | "2024-01-15T14:30:00" |
| Date | "2024-01-15" |
| Time | "14:30:00" |
| Decimal | "123.45" |
| URI | "https://example.com" |
| MapSet | [1, 2, 3] |
| Range | {"first": 1, "last": 10} |
| Structs | Object without __struct__ |
| Tuples | Arrays |
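For instance, a map mixing a few of these types encodes in one pass, with each value serialized as shown in the table above (a sketch; exact key ordering in the output may vary):

```elixir
data = %{
  created_at: ~U[2024-01-15 14:30:00Z],  # DateTime  -> "2024-01-15T14:30:00Z"
  tags: MapSet.new([1, 2, 3]),           # MapSet    -> [1, 2, 3]
  pages: 1..10                           # Range     -> {"first": 1, "last": 10}
}

RustyJson.encode!(data)
```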
Encoding:
- pretty: true | integer - Pretty print with indentation
- escape: :json | :html_safe | :javascript_safe | :unicode_safe - Escape mode
- compress: :gzip | {:gzip, 0..9} - Gzip compression
- lean: true - Skip special type handling for max speed
- protocol: true - Enable custom RustyJson.Encoder protocol
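A quick sketch of combining these options. This assumes the integer form of pretty: sets the indent width and that compress: yields the gzipped binary directly; check the module docs for the exact semantics:

```elixir
# Pretty-print with 4-space indentation, escaped for embedding in HTML
RustyJson.encode!(data, pretty: 4, escape: :html_safe)

# Gzip a large payload at compression level 6 (assumed to return the compressed binary)
RustyJson.encode!(big_payload, compress: {:gzip, 6})
```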
Decoding:
- keys: :strings | :atoms | :atoms! - Key handling
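Since RustyJson mirrors Jason's semantics, the three key modes should behave as in Jason: :atoms creates atoms as needed, while :atoms! only converts keys that already exist as atoms (a sketch based on Jason's documented behavior):

```elixir
json = ~s({"name":"Ada"})

RustyJson.decode!(json)                # %{"name" => "Ada"} (default :strings)
RustyJson.decode!(json, keys: :atoms)  # %{name: "Ada"}, creating atoms as needed
RustyJson.decode!(json, keys: :atoms!) # %{name: "Ada"}, but raises for keys with no existing atom
```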
For custom types, implement the RustyJson.Encoder protocol and use protocol: true:
defimpl RustyJson.Encoder, for: Money do
def encode(%Money{amount: amount, currency: currency}) do
%{amount: Decimal.to_string(amount), currency: currency}
end
end
RustyJson.encode!(money, protocol: true)

Or use @derive:
defmodule User do
@derive {RustyJson.Encoder, only: [:name, :email]}
defstruct [:name, :email, :password_hash]
end

RustyJson is fully compliant with RFC 8259 and passes 283/283 mandatory tests from JSONTestSuite:
- 95/95 y_tests (must accept)
- 188/188 n_tests (must reject)
- Rejects lone surrogates per RFC 7493 I-JSON
See docs/ARCHITECTURE.md for detailed compliance information.
Most Rust JSON libraries for Elixir use serde to convert between Rust and Erlang types. This requires:
- Erlang term → Rust struct (allocation)
- Rust struct → JSON bytes (allocation)
- JSON bytes → Erlang binary (allocation)
RustyJson eliminates the middle step by walking the Erlang term tree directly and writing JSON bytes without intermediate Rust structures.
Custom Direct Encoder:
- Walks Erlang terms directly via Rustler's term API
- Writes to a single buffer without intermediate allocations
- Uses itoa and ryu for fast number formatting
- 256-byte lookup table for O(1) escape detection
Custom Direct Decoder:
- Parses JSON while building Erlang terms (no intermediate AST)
- Zero-copy strings for unescaped content
- lexical-core for fast number parsing
Memory Allocator: Uses mimalloc by default. Alternatives available via Cargo features:
[features]
default = ["mimalloc"]
# Or: "jemalloc", "snmalloc"

The bottleneck for JSON NIFs isn't parsing; it's building Erlang terms. SIMD-accelerated parsers like simd-json and sonic-rs actually performed worse because they use serde, requiring a double conversion (JSON → Rust types → BEAM terms).
The wins come from:
- Skipping serde entirely - Walk JSON and build BEAM terms directly in one pass
- No intermediate allocations - No Rust structs, no AST
- Good memory allocator - mimalloc reduces fragmentation
- Maximum nesting depth: 128 levels (an implementation limit permitted by RFC 7159)
- Decoding very large payloads (>500 KB) may be only marginally faster than Jason
- Benchmarks are on Apple Silicon M1; results on other architectures may differ
- Rustler - Erlang NIF bindings for Rust
- Jason - API design and behavior reference
- Original Jsonrs - Initial inspiration