A multi-namespace protobuf schema registry for Go, with versioning, staging, backward compatibility enforcement, and hot-swap capabilities.
Status: v0.x pre-stable. The gRPC service in `proto/protoregistry/v1/` is the durable integration point; the Go library API may change at minor versions until v1.0. See `STABILITY.md` for the full contract.
Protoregistry compiles .proto files at runtime using protocompile, stores versioned schemas in PostgreSQL with content-addressable deduplication, and serves compiled descriptors for dynamic message creation and validation via gRPC.
- Multi-namespace isolation — each namespace is a self-contained scope (chroot model); proto imports resolve only within the same namespace
- Two-phase staging — publish compiles and stages; promote atomically swaps all staged versions to current, enabling coordinated multi-schema changes
- Backward compatibility enforcement — breaking changes (field deletion, type changes, cardinality changes) are rejected at promote time
- Content-addressable storage — proto sources are normalized, SHA-256 hashed, and deduplicated; rollback is a pointer move with zero data duplication
- Hot-swap — readers access compiled descriptors via `atomic.Pointer`; swaps are instant and lock-free
- Dynamic message support — create `dynamicpb.Message` instances from any registered schema at runtime
- Custom built-in types — extend the standard Google well-known types with your own shared protos via the reserved `__builtins__` namespace
- Well-known type shadowing protection — publishing files that shadow Google well-known types is rejected by default; requires an explicit `--force` flag
- Startup recovery — rebuilds in-memory state from pre-compiled descriptors in Postgres without recompilation
- CLI tool — `protoregistry` binary for managing the registry and running the gRPC server
- Go client SDK — `protoregistry/client` provides a remote-backed `protoreflect.MessageTypeResolver` / `protodesc.Resolver` with eager population, polling refresh, version pinning, and atomic hot-swap (see Go client SDK)
```sh
# Build the binary
go build -o protoregistry ./cmd/protoregistry/

# Start the server (runs migrations and listens on :50051)
protoregistry serve --db "postgres://localhost:5432/protoregistry?sslmode=disable" --migrate --listen :50051

# Optionally bootstrap built-in types from a directory
protoregistry serve --db "$DATABASE_URL" --migrate --builtins ./company-types/
```

```sh
# Create a namespace
protoregistry namespace create acme

# Push proto files (publish + stage)
protoregistry push acme billing ./protos/billing/

# Promote staged changes to current
protoregistry promote acme

# Load an entire directory of proto files in dependency order
protoregistry load acme ./protos/ --promote

# List namespaces and schemas
protoregistry namespace list
protoregistry schema list acme
protoregistry schema info acme billing

# Retrieve source or compiled descriptors
protoregistry schema source acme billing --version 2
protoregistry schema descriptor acme billing --out billing.binpb

# Rollback to a previous version
protoregistry rollback acme billing 1 --promote

# Discard all staged changes
protoregistry discard acme
```

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5/pgxpool"

	protoregistry "github.com/trendvidia/protoregistry"
	"github.com/trendvidia/protoregistry/store/postgres"
)

func main() {
	ctx := context.Background()

	pool, err := pgxpool.New(ctx, "postgres://localhost:5432/protoregistry?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	store := postgres.New(pool)
	reg := protoregistry.New(store)
	if err := reg.Restore(ctx); err != nil {
		log.Fatal(err)
	}

	result, err := reg.Publish(ctx, &protoregistry.PublishRequest{
		NamespaceID: "acme",
		SchemaID:    "billing",
		Sources: map[string][]byte{
			"billing/config.proto": []byte(`
syntax = "proto3";
package billing;

message Config {
  string name = 1;
  int32 timeout_ms = 2;
}
`),
		},
		CreatedBy: "deploy-bot",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Published version %d (no_change=%v)\n", result.Version, result.NoChange)

	promoted, err := reg.Promote(ctx, "acme")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Promoted %d schema(s)\n", len(promoted.Promoted))

	snap := reg.Current("acme", "billing")
	msg, _ := snap.NewMessage("billing.Config")
	fmt.Printf("Created dynamic message: %s\n", msg.ProtoReflect().Descriptor().FullName())
}
```

| Flag | Env var | Default | Description |
|---|---|---|---|
| `--server`, `-s` | `PROTOREGISTRY_SERVER` | `localhost:50051` | gRPC server address |
| `--namespace`, `-n` | | | Default namespace for commands |
| `--output`, `-o` | | `table` | Output format: `table` or `json` |
| `--token` | `PROTOREGISTRY_TOKEN` | | Bearer token for authentication |
| `--tls` | | `false` | Connect over TLS using the system root CA pool |
| `--tls-ca` | | | PEM-encoded CA file to verify the server cert (implies `--tls`) |
| `--tls-cert` | | | PEM-encoded client certificate for mTLS (implies `--tls`) |
| `--tls-key` | | | PEM-encoded client key for mTLS (implies `--tls`) |
| `--tls-server-name` | | | Override the server name used for cert verification (implies `--tls`) |
| `--tls-skip-verify` | | `false` | Skip server cert verification — testing only (implies `--tls`) |
| Command | Description |
|---|---|
| `serve` | Start the gRPC registry server |
| `namespace list` | List all namespaces |
| `namespace create <id>` | Create a namespace |
| `schema list [namespace]` | List schemas in a namespace |
| `schema info [namespace] <schema>` | Show schema details |
| `schema source [namespace] <schema>` | Show proto source files |
| `schema descriptor [namespace] <schema>` | Get compiled FileDescriptorSet |
| `push [namespace] <schema> <path...>` | Publish proto files as a schema version |
| `load [namespace] <path>` | Load all protos from a directory in dependency order |
| `promote [namespace]` | Promote all staged versions to current |
| `discard [namespace]` | Discard all staged versions |
| `rollback [namespace] <schema> <version>` | Stage a previous version |
| Flag | Env var | Default | Description |
|---|---|---|---|
| `--db` | `DATABASE_URL` | | PostgreSQL connection URL (required) |
| `--listen` | | `:50051` | gRPC listen address |
| `--builtins` | | | Directory of built-in `.proto` files to bootstrap |
| `--migrate` | | `false` | Run database migrations on startup |
| Flag | Default | Description |
|---|---|---|
| `--created-by` | `$USER` | Author of this version |
| `--promote` | `false` | Promote immediately after publishing |
| `--force` | `false` | Allow shadowing well-known types |
| `--metadata` | | Key=value metadata pairs (`push` only) |
Schema updates follow a two-phase model, similar to git staging:
1. Publish -> compile + store + stage
2. Promote -> compat check + atomic swap (all staged -> current)
Multiple schemas can be staged independently, then promoted together as a coordinated set. The compiler resolves imports against the "proposed" state (staged where available, current otherwise), so cross-schema changes compile against each other before going live.
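The compat check that runs at promote time can be illustrated with a small sketch. This is a hypothetical checker over simplified field structs — not the registry's actual compatibility package — but it shows the three rejected change classes named above (deletion, type change, cardinality change):

```go
package main

import "fmt"

// Field is a simplified stand-in for a descriptor field (hypothetical type,
// not the registry's real model).
type Field struct {
	Name        string
	Type        string // e.g. "int32", "string"
	Cardinality string // "optional" or "repeated"
}

// breakingChanges flags the incompatibilities the registry rejects at
// promote time: field deletion, type changes, and cardinality changes.
// Field additions are backward compatible and pass silently.
func breakingChanges(prev, next map[int32]Field) []string {
	var problems []string
	for num, pf := range prev {
		nf, ok := next[num]
		switch {
		case !ok:
			problems = append(problems, fmt.Sprintf("field %d (%s) deleted", num, pf.Name))
		case nf.Type != pf.Type:
			problems = append(problems, fmt.Sprintf("field %d changed type %s -> %s", num, pf.Type, nf.Type))
		case nf.Cardinality != pf.Cardinality:
			problems = append(problems, fmt.Sprintf("field %d changed cardinality", num))
		}
	}
	return problems
}

func main() {
	v1 := map[int32]Field{
		1: {"name", "string", "optional"},
		2: {"timeout_ms", "int32", "optional"},
	}
	v2 := map[int32]Field{
		1: {"name", "string", "optional"}, // unchanged: fine
		// field 2 deleted: breaking
		3: {"retries", "int32", "optional"}, // added: fine
	}
	for _, p := range breakingChanges(v1, v2) {
		fmt.Println(p)
	}
}
```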
```
Developer pushes "common" v3 to staging
Developer pushes "billing" v5 to staging (compiles against common v3)
Developer promotes -> both go live atomically
```
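The "proposed state" rule can be sketched as a staged-over-current merge. This is a minimal sketch assuming plain maps of schema ID to source (hypothetical types — the real compiler resolves against compiled descriptors, not strings):

```go
package main

import "fmt"

// proposed returns the source set the compiler resolves imports against:
// the staged version of a schema where one exists, otherwise the current
// version. Cross-schema staged changes therefore compile against each
// other before promotion.
func proposed(current, staged map[string]string) map[string]string {
	out := make(map[string]string, len(current))
	for id, src := range current {
		out[id] = src
	}
	for id, src := range staged {
		out[id] = src // staged shadows current
	}
	return out
}

func main() {
	current := map[string]string{"common": "common v2", "billing": "billing v4"}
	staged := map[string]string{"common": "common v3", "billing": "billing v5"}

	// Both staged versions see each other's staged sources.
	for id, src := range proposed(current, staged) {
		fmt.Println(id, "->", src)
	}
}
```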
Rollback stages a previous version, then promotes it:
```sh
protoregistry rollback acme billing 1   # stages v1
protoregistry promote acme              # v1 becomes current
```

The compiler provides Google well-known types (`google/protobuf/timestamp.proto`, etc.) automatically via protocompile. To add your own shared types available to all namespaces, publish them to the reserved `__builtins__` namespace:
```sh
# Push company-wide shared types as built-ins
protoregistry push __builtins__ company-types ./protos/company/
protoregistry promote __builtins__

# Now any namespace can import them:
# import "company/base.proto";
```

The import resolution order during compilation is:
1. Namespace sources — the schema's own files + other schemas in the same namespace
2. Built-ins — files from the `__builtins__` namespace
3. Google well-known types — `google/protobuf/*.proto` (provided by protocompile)
Publishing a file that shadows a Google well-known type (e.g., `google/protobuf/timestamp.proto`) is rejected by default. The check exists because shadowing happens silently — the protocompile resolver picks the namespace-local file before falling back to standard imports, so a typo'd or malicious filename can replace the well-known type for every schema in the namespace and break compilation in confusing ways down the line.
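The silent-shadowing hazard reduces to first-match lookup, which a toy resolver makes concrete (hypothetical maps, not protocompile's actual resolver API):

```go
package main

import "fmt"

// lookup mimics the resolver's first-match order: namespace-local files win
// over the standard well-known types, with no warning when both exist.
func lookup(path string, local, wellKnown map[string]string) (string, bool) {
	if src, ok := local[path]; ok {
		return src, true // local file silently shadows the standard import
	}
	src, ok := wellKnown[path]
	return src, ok
}

func main() {
	wellKnown := map[string]string{"google/protobuf/timestamp.proto": "standard Timestamp"}
	local := map[string]string{"google/protobuf/timestamp.proto": "someone's typo'd file"}

	src, _ := lookup("google/protobuf/timestamp.proto", local, wellKnown)
	fmt.Println(src) // every import in the namespace now sees the local file
}
```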
When you genuinely need to substitute a well-known type (for example, to provide a richer Timestamp with extra fields), pass `--force`:

```sh
protoregistry push __builtins__ custom-timestamp ./my-timestamp/ --force
```

This flag is intended for operator use; it should not be exposed to self-service publishers.
The server can also bootstrap built-ins from a directory on disk at startup:
```sh
protoregistry serve --db "$DATABASE_URL" --builtins ./company-types/
```

```
.proto source -> protocompile -> compiled descriptors
                        |
             +-------------------+
             |     Registry      |
             |  (orchestrator)   |
             +--------+----------+
                      |
      +---------------+---------------+
      v               v               v
+-------------+  +------------+  +--------------+
|  Namespace  |  |   Store    |  |    Compat    |
| (in-memory) |  | (Postgres) |  |  (checker)   |
+-------------+  +------------+  +--------------+
      v
+-------------+
|  Snapshot   |  <- atomic.Pointer, lock-free reads
| (immutable) |
+-------------+
      v
+-------------+
|  Resolver   |  <- protobuf-go bridge
| (dynamicpb) |
+-------------+
```
Protoregistry uses PostgreSQL with sqlc for type-safe queries and goose for migrations.
```sh
# Run migrations
goose -dir migrations postgres "$DATABASE_URL" up
```

Storage uses a content-addressable design with a versioning indirection layer:
```
proto_blobs (namespace_id, sha256)         -> original source
        ^
schema_version_files (version, filename)   -> blob_sha256
        ^
schema_versions (version)                  -> compiled FileDescriptorSet + compiler_version
        ^
schemas (namespace_id, schema_id)          -> current_version / staged_version
```
Same content submitted multiple times (or across tenants) is stored once. Rollback is a pointer move — no data is copied.
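The dedup idea can be sketched in a few lines with a hypothetical normalization rule (LF line endings, trailing whitespace stripped) — the registry's actual normalization may differ:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// blobStore keys blobs on the SHA-256 of normalized source, so identical
// content is stored once regardless of how many versions reference it.
type blobStore map[string][]byte

// normalize applies a hypothetical canonical form: LF endings and no
// trailing whitespace, so cosmetic differences hash identically.
func normalize(src string) string {
	src = strings.ReplaceAll(src, "\r\n", "\n")
	lines := strings.Split(src, "\n")
	for i, l := range lines {
		lines[i] = strings.TrimRight(l, " \t")
	}
	return strings.Join(lines, "\n")
}

// put stores the blob under its digest and reports whether it was new.
func (s blobStore) put(src string) (key string, created bool) {
	n := normalize(src)
	sum := sha256.Sum256([]byte(n))
	key = hex.EncodeToString(sum[:])
	if _, ok := s[key]; ok {
		return key, false // same content: version rows just point at the existing blob
	}
	s[key] = []byte(n)
	return key, true
}

func main() {
	s := blobStore{}
	k1, created1 := s.put("message Config { string name = 1; }\r\n")
	k2, created2 := s.put("message Config { string name = 1; }  \n") // same after normalization
	fmt.Println(k1 == k2, created1, created2)
}
```

Rollback under this design touches only `schemas.current_version` — no blob rows move or copy.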
The `RegistryService` exposes the full lifecycle over gRPC:

| RPC | Description |
|---|---|
| `Publish` | Compile and stage a new schema version |
| `Promote` | Atomically move all staged versions to current |
| `DiscardStaging` | Clear all staged versions in a namespace |
| `Rollback` | Stage a previous version for promotion |
| `GetSchema` | Get schema metadata and version list |
| `ListSchemas` | List all schemas in a namespace |
| `GetDescriptor` | Get compiled FileDescriptorSet for a version |
| `GetSource` | Get original `.proto` source files for a version |
| `ListNamespaces` | List all namespaces |
| `CreateNamespace` | Create a new namespace |
See `proto/protoregistry/v1/registry.proto` for the full definition.
The `resolve` package bridges namespace snapshots with protobuf-go's standard resolver interfaces:

```go
import "github.com/trendvidia/protoregistry/resolve"

// Namespace-wide resolver — searches all schemas.
r := resolve.NewResolver(namespace)
mt, _ := r.FindMessageByName("billing.Config")
msg := dynamicpb.NewMessage(mt.Descriptor())

// Schema-scoped resolver.
sr := resolve.NewSchemaResolver(namespace, "billing")
msg, _ := sr.NewMessage("billing.Config")
```

Resolvers are live — they always read the current snapshot, so hot-swaps are immediately reflected.
`github.com/trendvidia/protoregistry/client` is the Go SDK for services that consume descriptors from a running registry, as opposed to embedding the registry library in-process. The client is namespace-scoped and implements the same standard resolver interfaces (`protoreflect.MessageTypeResolver`, `protoregistry.ExtensionTypeResolver`, `protodesc.Resolver`) as the in-process `resolve` package, so call sites that read descriptors don't change when you swap embedded for remote.
```go
import (
	"context"

	"github.com/trendvidia/protoregistry/client"
)

ctx := context.Background()
r, err := client.Dial(ctx, "registry.internal:50051", "billing")
if err != nil { /* ... */ }
defer r.Close()

desc, _ := r.FindDescriptorByName("billing.Config")
msg, _ := r.NewMessage("billing.Config")
```

Behavior:
- Eager population. `Dial` / `client.New` fetches every schema in the namespace up front, so lookup misses surface at startup, not in the request path. Restrict to a subset with `client.WithSchemas("foo", "bar")`.
- Polling refresh (default 30s; `client.WithRefreshInterval`). A background goroutine re-fetches only schemas whose current version advanced and atomically swaps the snapshot. Failures are logged and survived (stale-while-error). Force a refresh with `r.Refresh(ctx)`.
- `r.Pin(ctx, map[string]uint64)` returns a derived resolver frozen at a specific (schemaID → version) map — useful for reproducible reads or replaying captured payloads against the exact version they were produced with.
- `r.Schema(schemaID)` narrows lookups to one schema in the namespace — cheaper and immune to cross-schema FQN collisions.
- Fail-loud collisions. If two schemas in the namespace export the same fully-qualified type name, `client.New` returns an error rather than silently picking one.
A Resolver can fall back to a parent registry when a local lookup misses. Useful for shared / well-known types across multiple namespaces, or for chaining a tenant Resolver behind a "common types" Resolver. Both the namespace-wide aggregate (FindFileByPath, FindExtensionByNumber) and each per-schema view inherit the same parent, so the fallback is reachable from every lookup tier.
Three options:
```go
// 1. Explicit parent registries — most general.
client.WithFallback(parentFiles, parentTypes)

// 2. Chain another Resolver as the parent. Convenience over (1) — passes
//    the parent's nsFiles / nsTypes through. The parent must outlive
//    every child.
client.WithParent(commonsResolver)

// 3. Fall back to upstream protoregistry.GlobalFiles / GlobalTypes,
//    which have generated proto types compiled into the binary.
client.WithGlobalFallback()
```

Example — every per-tenant Resolver inherits a commons namespace:
```go
commons, _ := client.Dial(ctx, addr, "commons", client.WithRefreshInterval(0))
defer commons.Close()

billing, _ := client.Dial(ctx, addr, "billing",
	client.WithParent(commons),
)
defer billing.Close()

// "shared.Trace" lives in the commons namespace; billing resolves it
// via the fallback chain.
desc, _ := billing.FindDescriptorByName("shared.Trace")
```

Local entries always shadow the parent — there is no fail-loud collision check across the parent boundary. Two chained Resolvers can register the same FQN if it appears in both the local namespace and the parent; the local version wins.
Pinned Resolvers (returned by `r.Pin(...)`) inherit the parent's fallback chain. If the parent refreshes, the pinned view sees the new parent state via fallback even though its own local schemas are frozen. For a fully-frozen view, build an independent frozen parent and pass it via `WithFallback`.
Pairs cleanly with protowire-go (the `pxf` / `sbe` codecs accept any `protoreflect.MessageDescriptor`), protojson, anypb, and dynamicpb without adapter code:

```go
import "github.com/trendvidia/protowire-go/encoding/pxf"

desc, _ := r.FindDescriptorByName("billing.Config")
msg, _ := pxf.UnmarshalDescriptor(pxfBytes, desc.(protoreflect.MessageDescriptor))
```

`protoregistry/client` stores per-schema descriptors in `*protoregistry.NamespacedFiles` / `*protoregistry.NamespacedTypes` — the namespace-isolated registry types added in the trendvidia/protobuf-go fork. Those types do not exist in upstream `google.golang.org/protobuf`, so this module's `go.mod` carries:

```
replace google.golang.org/protobuf => github.com/trendvidia/protobuf-go v1.36.12
```
Go's `replace` directive does not propagate across module boundaries, so consuming binaries will still pull upstream protobuf-go by default when they depend on protoregistry. Without the same `replace` in the top-level binary's `go.mod`, the build fails — the namespace types are referenced by name and have no upstream equivalent.

Add the same `replace` to the binary's `go.mod` when you depend on `protoregistry/client`. The fork keeps the upstream import path, tracks upstream tags closely, and adds only the namespace registry + the `dynamicpb.SetUnsafe` family used by protowire-go. Code that compiles against upstream compiles against the fork unchanged.
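A consumer binary's `go.mod` would then look roughly like this — the module path and `require` version are placeholders for illustration; only the `replace` line is prescribed:

```
module example.com/your-service // hypothetical module path

go 1.26

require github.com/trendvidia/protoregistry v0.0.0 // placeholder version

replace google.golang.org/protobuf => github.com/trendvidia/protobuf-go v1.36.12
```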
If your project must use upstream `google.golang.org/protobuf` exactly, do not import `protoregistry/client` — call the gRPC service directly via the generated stubs in `proto/protoregistry/v1` and store descriptors yourself with upstream `*protoregistry.Files`.
```sh
go build ./...                  # Build all packages
go build ./cmd/protoregistry/   # Build the CLI/server binary
go test -race ./...             # All tests (needs Docker for Postgres integration tests)
sqlc generate                   # Regenerate SQL query code
```

- Go 1.26+
- Docker (for integration tests)
- `protoc` + `protoc-gen-go` + `protoc-gen-go-grpc` (for proto regeneration)
- `sqlc` (for SQL code regeneration)
protoregistry is designed for teams that want a schema registry they can embed, scope per-tenant, and run as a small Go binary against an existing PostgreSQL — without adopting a broader platform.
| Need | protoregistry | Buf Schema Registry | Confluent Schema Registry |
|---|---|---|---|
| Self-hosted (single Go binary + Postgres) | ✓ | hosted / BSR Pro | ✓ (Kafka-coupled) |
| Multi-tenant namespace isolation (chroot) | ✓ | modules | subjects |
| Two-phase staging + atomic multi-schema promote | ✓ | drafts | — |
| Backward-compat enforcement at promote | ✓ | ✓ | ✓ (per-subject) |
| Embeddable as a Go library | ✓ | — | — |
| Lock-free hot-swap of compiled descriptors | ✓ | n/a | n/a |
| Built-in dynamic message creation | `dynamicpb` | — | — |
| Wire-format support | protobuf | protobuf | Avro / JSON / protobuf |
If you need a polished SaaS, lint rules, code generation, or a wide ecosystem of integrations, the Buf Schema Registry is the better choice. If you are already standardized on Kafka, Confluent's registry integrates natively with the broker. protoregistry's niche is embed-and-control: a small library + service you can run inside your own infrastructure with strong tenant isolation and a coordinated promotion workflow.
This project is licensed under the MIT License — Copyright (c) 2026 TrendVidia, LLC.