A distributed cache library for Elixir with pluggable backends, topologies, and near-cache support.
- Pluggable backends — ETS (in-memory), DETS (disk-persistent), Redis (external)
- Pluggable topologies — Local, Replicated, Partitioned (consistent hash ring)
- Near cache — local ETS read-through layer in front of any topology
- Stampede protection — only one process fetches on a cache miss, others wait
- Tag-based invalidation — bulk-evict groups of entries across all topologies
- TTL management — per-entry TTL with background sweep
- Telemetry — built-in instrumentation for all operations
- Stats — lock-free atomic counters for hits, misses, writes, deletes
Add `vela` to your list of dependencies in `mix.exs`:

```elixir
def deps do
  [
    {:vela, "~> 0.1.0"}
  ]
end
```

Define a cache module:

```elixir
defmodule MyApp.Cache do
  use Vela.Cache,
    backend: Vela.Backend.ETS,
    default_ttl: :timer.minutes(5)
end
```

Add it to your supervision tree:

```elixir
children = [
  MyApp.Cache
]
```

Use it:
```elixir
MyApp.Cache.put(:user_1, %{name: "Alice"})
MyApp.Cache.get(:user_1)
# => {:ok, %{name: "Alice"}}

MyApp.Cache.get(:missing)
# => {:error, :not_found}
```

```elixir
get(key, opts \\ [])                    # {:ok, value} | {:error, :not_found}
get!(key, opts \\ [])                   # value | raises KeyError
put(key, value, opts \\ [])             # :ok | {:error, reason}
delete(key, opts \\ [])                 # :ok
exists?(key, opts \\ [])                # boolean
flush(opts \\ [])                       # :ok
size()                                  # integer
stats()                                 # %{hits: n, misses: n, writes: n, ...}
get_or_fetch(key, fetch_fn, opts \\ []) # {:ok, value} | {:error, reason}
invalidate_tag(tag, opts \\ [])         # {:ok, evicted_count}
```

Supported options:

- `ttl:` — time-to-live in milliseconds, or `:infinity` (default: the cache's `default_ttl`)
- `tags:` — list of atoms for group invalidation (e.g., `tags: [:users, :active]`)
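As an illustration of combining those options (the key names and values here are invented for the example):

```elixir
# Cache a session for 30 minutes and tag it for bulk invalidation later.
MyApp.Cache.put(:session_42, %{user_id: 42},
  ttl: :timer.minutes(30),
  tags: [:sessions, :active]
)

# An entry with ttl: :infinity is never expired by the background sweep.
MyApp.Cache.put(:build_info, "1.4.2", ttl: :infinity)
```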
In-memory, fastest. Data is lost on restart.

```elixir
use Vela.Cache, backend: Vela.Backend.ETS
```

Disk-persistent. Survives restarts. Slower than ETS.

```elixir
use Vela.Cache,
  backend: Vela.Backend.DETS,
  backend_opts: [data_dir: "/var/data/my_cache"]
```

External Redis server. Requires the `:redix` dependency.

```elixir
# Add to deps: {:redix, "~> 1.3"}
use Vela.Cache,
  backend: Vela.Backend.Redis,
  backend_opts: [url: "redis://localhost:6379"]
```
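Since the backend is chosen per cache module, an app can plausibly define several independent caches side by side — a sketch, assuming each `use Vela.Cache` module is a separate cache as in the quick start (module names are illustrative):

```elixir
# A fast, node-local cache for derived data that is cheap to rebuild.
defmodule MyApp.LocalCache do
  use Vela.Cache,
    backend: Vela.Backend.ETS,
    default_ttl: :timer.minutes(5)
end

# A shared cache backed by Redis, visible to every node.
defmodule MyApp.SharedCache do
  use Vela.Cache,
    backend: Vela.Backend.Redis,
    backend_opts: [url: "redis://localhost:6379"]
end

# Both are supervised like any other child.
children = [MyApp.LocalCache, MyApp.SharedCache]
```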
Single node. No distribution. Fastest.

```elixir
use Vela.Cache, topology: Vela.Topology.Local
```

Every node holds a full copy. Writes broadcast to all nodes. Reads are always local.

Best for: small datasets, read-heavy workloads, feature flags, config caches.

```elixir
use Vela.Cache, topology: Vela.Topology.Replicated
```

Each key lives on one node, determined by a consistent hash ring. Reads for remote keys use RPC.

Best for: large datasets that don't fit on a single node.

```elixir
use Vela.Cache, topology: Vela.Topology.Partitioned
```

The hash ring updates automatically when nodes join or leave the cluster.
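For intuition only, the ring mechanics can be sketched in a few lines — this is a conceptual illustration, not Vela's actual implementation (`RingSketch` and its virtual-node count are invented here):

```elixir
defmodule RingSketch do
  # Place each node at several points ("virtual nodes") on a 2^32 ring.
  def ring(nodes, vnodes \\ 64) do
    for node <- nodes, i <- 1..vnodes do
      {:erlang.phash2({node, i}, 4_294_967_296), node}
    end
    |> Enum.sort()
  end

  # A key is owned by the first node clockwise from the key's hash point.
  def owner(ring, key) do
    point = :erlang.phash2(key, 4_294_967_296)

    case Enum.find(ring, fn {p, _node} -> p >= point end) do
      {_p, node} -> node
      nil -> ring |> hd() |> elem(1) # wrap around past the top of the ring
    end
  end
end
```

Because a node only owns the arcs just before its points, adding or removing a node remaps only those arcs instead of reshuffling every key — which is why the ring can update as nodes join or leave.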
Adds a local ETS read-through layer (L1) in front of the real topology (L2). Hot keys are served from local memory without hitting the network.

```elixir
use Vela.Cache,
  backend: Vela.Backend.Redis,
  topology: Vela.Topology.Local,
  near_cache: true,
  near_cache_l1_ttl: :timer.seconds(30)
```

Read flow: L1 hit -> return | L1 miss -> L2 -> promote to L1 -> return.
When a cached value expires and many processes request it simultaneously, only one fetches from the source. Others wait and read from cache.

```elixir
MyApp.Cache.get_or_fetch(:expensive_key, fn _key ->
  {:ok, MyApp.Repo.get_expensive_data()}
end)
```

Enabled by default. Configure with:

```elixir
use Vela.Cache,
  stampede_protection: true,
  stampede_timeout: 5_000 # max wait time in ms
```

Group related entries with tags, then invalidate them in bulk:
```elixir
MyApp.Cache.put(:user_1, alice, tags: [:users])
MyApp.Cache.put(:user_2, bob, tags: [:users])
MyApp.Cache.put(:product_1, widget, tags: [:products])

MyApp.Cache.invalidate_tag(:users)
# => {:ok, 2} — both user entries removed, product untouched
```

Works across all topologies. Replicated and Partitioned broadcast the invalidation to all nodes.
All events are prefixed with `[:vela, :cache]` by default (configurable via `telemetry_prefix`).

| Event | Measurements | Metadata |
|---|---|---|
| `[:vela, :cache, :get, :stop]` | `duration` | `cache`, `key`, `result` |
| `[:vela, :cache, :put, :stop]` | `duration` | `cache`, `key`, `ttl` |
| `[:vela, :cache, :fetch, :stop]` | `duration` | `cache`, `key`, `result` |
| `[:vela, :cache, :invalidate_tag, :stop]` | `count` | `cache`, `tag` |
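Handlers attach through the standard `:telemetry` API. The sketch below assumes `duration` is reported in native time units, which is the common telemetry convention but is not stated in this README; the handler id and threshold are invented:

```elixir
# Log cache reads slower than 50 ms.
:telemetry.attach(
  "vela-slow-get-logger",
  [:vela, :cache, :get, :stop],
  fn _event, %{duration: duration}, %{key: key}, _config ->
    ms = System.convert_time_unit(duration, :native, :millisecond)

    if ms > 50 do
      IO.puts("slow cache get for #{inspect(key)}: #{ms}ms")
    end
  end,
  nil
)
```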
All options with defaults:

```elixir
use Vela.Cache,
  backend: Vela.Backend.ETS,
  backend_opts: [],
  topology: Vela.Topology.Local,
  topology_opts: [],
  default_ttl: :infinity,
  max_size: :infinity,
  eviction_policy: :ttl_only,
  stampede_protection: true,
  stampede_timeout: 5_000,
  stats_enabled: true,
  telemetry_prefix: [:vela, :cache],
  sweep_interval: 30_000,
  near_cache: false,
  near_cache_l1_ttl: 60_000
```

Options can be overridden at runtime:
```elixir
MyApp.Cache.start_link(default_ttl: :timer.minutes(10))
```

Vela reacts to BEAM node connections. Use libcluster for automatic node discovery in production.

```elixir
# Connect nodes manually:
Node.connect(:"node2@hostname")

# Or use libcluster in your supervision tree
```

Once nodes are connected, Replicated and Partitioned topologies work automatically.
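For the libcluster route, a minimal setup looks roughly like this — the strategy choice and supervisor name are assumptions for the example, not requirements of Vela:

```elixir
# In your application's start/2, assuming the :libcluster dependency.
topologies = [
  my_app: [
    strategy: Cluster.Strategy.Gossip # multicast-based discovery on a LAN
  ]
]

children = [
  {Cluster.Supervisor, [topologies, [name: MyApp.ClusterSupervisor]]},
  MyApp.Cache
]

Supervisor.start_link(children, strategy: :one_for_one)
```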
Run benchmarks with:

```shell
mix run benchmarks/get_bench.exs
mix run benchmarks/topology_bench.exs
```

```shell
# Unit tests
mix test

# Including distributed multi-node tests
elixir --sname primary -S mix test --include distributed
```

MIT