The Kubernetes CRD for declarative resource mirroring across namespaces — any Kind, conflict-safe, watch-driven.
projection is a Kubernetes operator that mirrors any Kubernetes object — ConfigMap, Secret, Service, your custom resources — from a source location to a destination, declaratively, per resource. Each Projection CR is its own first-class object with status conditions, events, and a metric you can alert on. Edits to the source propagate to the destination in roughly 100 milliseconds.
It exists because every team eventually rebuilds this with a one-off controller or a Kyverno generate policy, and neither approach is the right shape. projection is meant to be the answer when somebody asks "how do you mirror a Secret across namespaces in this cluster?"
| | projection | emberstack/Reflector | Kyverno generate |
|---|---|---|---|
| Works on any Kind | ✓ | ConfigMap & Secret only | ✓ |
| Source-of-truth lives in a CR you can `kubectl get` | ✓ (Projection) | ✗ (annotations on the source) | ✗ (cluster-wide policy) |
| Per-resource status + Kubernetes Events | ✓ | partial | ✗ |
| Conflict-safe (refuses to overwrite unowned objects) | ✓ | ✗ | ✗ |
| Watch-driven propagation (~100ms) | ✓ | ✓ | ✓ |
| Admission-time validation of source fields | ✓ | n/a | ✓ |
| Prometheus metrics per reconcile outcome | ✓ | partial | ✓ |
| Footprint | one CRD, one Deployment | one CRD, one Deployment | full policy engine |
For the longer comparison — including the cases where Reflector or Kyverno is the better choice — see docs/comparison.md.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: platform
  annotations:
    # Source opts in to projection (default source-mode is "allowlist").
    # Set to "false" to veto projection as the source owner.
    projection.sh/projectable: "true"
data:
  log_level: info
---
apiVersion: projection.sh/v1
kind: Projection
metadata:
  name: app-config-into-tenants
  namespace: platform
spec:
  source:
    apiVersion: v1
    kind: ConfigMap
    name: app-config
    namespace: platform
  destination:
    namespace: tenant-a
  overlay:
    labels:
      projected-by: projection
```

```console
$ kubectl get projections -A
NAMESPACE   NAME                      KIND        SOURCE-NAMESPACE   SOURCE-NAME   DESTINATION   READY   AGE
platform    app-config-into-tenants   ConfigMap   platform           app-config    app-config    True    2s
$ kubectl get configmap -n tenant-a app-config -o jsonpath='{.metadata.annotations.projection\.sh/owned-by}'
platform/app-config-into-tenants
```

- Edit the source — the destination updates within ~100ms.
- Delete the Projection — the destination is removed (only if projection still owns it).
- Pre-existing object at the destination? `Ready=False reason=DestinationConflict`. We don't overwrite strangers.
- Any Kind — RESTMapper-driven GVR resolution. Works on built-in resources, your CRDs, anything the apiserver knows about. Source `apiVersion` accepts both pinned forms (`apps/v1`) and the unpinned `apps/*` form, which follows the cluster's preferred served version.
- Watch-driven — dynamic informer registration per source GVK. Edits propagate in ~100ms; no periodic polling.
- Selector-based fan-out — one `Projection` can mirror its source into every namespace matching a `namespaceSelector`, with destinations added and removed as namespaces gain or lose the matching label. Bounded fan-out concurrency keeps the apiserver healthy at scale.
- Source-owner consent — the default `sourceMode=allowlist` requires sources to carry `projection.sh/projectable="true"`. Source owners can also veto with `projection.sh/projectable="false"` regardless of mode.
- Conflict-safe — an ownership annotation marks our destinations. We refuse to overwrite objects we don't own and report `DestinationConflict` on status. Source deletion (404) automatically cleans up every owned destination.
- Clean deletion — a finalizer removes destinations on `Projection` deletion (sweeping all namespaces for selector-based fan-out). If ownership has been stripped, we leave the object alone.
- Observable — three status conditions (`SourceResolved`, `DestinationWritten`, `Ready`), `events.k8s.io/v1` Events with `action` verbs (Create/Update/Delete/Get/Validate/Resolve/Write), and Prometheus metrics (`projection_reconcile_total{result}`, `projection_watched_gvks`).
- Validated at admission — `Source` fields are pattern-validated (DNS-1123 names, PascalCase Kinds) so typos fail at `kubectl apply`, not at runtime. CEL enforces `destination.namespace` ⊕ `destination.namespaceSelector` mutual exclusion.
- Smart copy — strips server-owned metadata, drops `.status`, removes `kubectl.kubernetes.io/last-applied-configuration`, strips Kind-specific apiserver-allocated spec fields (Service `clusterIP`/`clusterIPs`, PVC `volumeName`, Pod `nodeName`, Job `selector` + `controller-uid` labels), and preserves them on update.
- Production-grade Helm chart — opt-in `ServiceMonitor`, `NetworkPolicy` (egress lockdown), and `PodDisruptionBudget` templates. Operational tuning via `requeueInterval` and `leaderElection.leaseDuration`. RBAC scope narrowable via `supportedKinds`.
- Small — one CRD, one Deployment, one container. Distroless image, multi-arch (amd64, arm64).
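Selector-based fan-out and source-owner consent combine into a single manifest. The sketch below extrapolates from the quick-start example; the exact shape of `namespaceSelector` (a standard `matchLabels` label selector) is an assumption, and its placement follows the stated CEL rule that it is mutually exclusive with `destination.namespace`:

```yaml
apiVersion: projection.sh/v1
kind: Projection
metadata:
  name: tls-cert-into-tenants
  namespace: platform
spec:
  source:
    apiVersion: v1
    kind: Secret
    name: wildcard-tls        # must carry projection.sh/projectable: "true"
    namespace: platform
  destination:
    # Mutually exclusive with destination.namespace (enforced by CEL).
    namespaceSelector:
      matchLabels:
        tenant: "true"
  overlay:
    labels:
      projected-by: projection
```

Labeling a new namespace `tenant: "true"` would add a destination; removing the label would remove it.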
```shell
helm install projection oci://ghcr.io/projection-operator/charts/projection \
  --version 0.2.0 \
  --namespace projection-system --create-namespace
```

Or apply the static manifest:

```shell
kubectl apply -f https://github.com/projection-operator/projection/releases/download/v0.2.0/install.yaml
```

Then create your first Projection:
```shell
kubectl apply -f https://raw.githubusercontent.com/projection-operator/projection/main/examples/configmap-cross-namespace.yaml
kubectl get projections -A
```

When you create a Projection, the controller resolves the source GVR via the RESTMapper, fetches the source object via the dynamic client, builds a sanitized destination object (overlay applied, ownership annotation stamped, server-owned metadata stripped), and creates or updates the destination — but only if projection already owns it. The first reconcile also registers a metadata-only watch on the source's GVK, so future edits to any source of that Kind enqueue the relevant Projections via a field-indexed lookup. Updates that wouldn't change the destination are skipped to avoid noisy events and metric churn.
See docs/concepts.md for the full picture, docs/observability.md for status/events/metrics, and docs/comparison.md for the deep comparison vs Reflector and Kyverno.
- Secrets across namespaces — distribute a TLS cert from `cert-manager` to multiple application namespaces without manual `kubectl create`.
- Shared config distribution — one `ConfigMap` in `platform`, mirrored into each tenant namespace with overlay labels for tenant tagging.
- Service mirroring — expose a backend `Service` from one namespace into another without a manual `ExternalName` dance.
- CR replication — mirror an `Issuer`, a `KafkaTopic`, or any custom resource between namespaces in the same cluster.
- Same-cluster only. Cross-cluster mirroring is a non-goal for v0.
- Cluster-scoped Kinds rejected. `Projection` only mirrors namespaced resources. Pointing at a `Namespace`, `ClusterRole`, or `StorageClass` surfaces `SourceResolved=False reason=SourceResolutionFailed` with a clear message.
- Selector fan-out shares one overlay. All destinations in a `namespaceSelector` Projection get the same overlay; per-destination overlays require separate Projections.
- A few Kinds need extra care. `Service`, `PersistentVolumeClaim`, `Pod`, and `Job` have apiserver-allocated spec fields handled out of the box. Jobs created with `spec.manualSelector: true` are not supported. Other Kinds with similar fields (rare) may need an addition to `droppedSpecFieldsByGVK` — see limitations.
- Pre-1.0. API stability commitments (which fields will not change, how breaking changes are handled) are documented in docs/api-stability.md. The CRD storage version is `v1`; future versions will be served alongside it with conversion.
- Getting started
- Concepts
- API reference (auto-generated from `api/v1/projection_types.go`)
- CRD behavior and examples
- Use cases
- Comparison vs alternatives
- Observability
- Security model
- API stability
- Troubleshooting
- Scale and benchmarks
- Limitations & roadmap
Pull requests welcome. See CONTRIBUTING.md. Be excellent to each other — see CODE_OF_CONDUCT.md.
Found a vulnerability? Please report it privately via GitHub Security Advisories. See SECURITY.md.
Apache 2.0. See LICENSE.