# projection

The Kubernetes CRD for declarative resource mirroring across namespaces — any Kind, conflict-safe, watch-driven.

*(demo animation: apply a Projection, edit the source, watch the destination ConfigMap update in ~100ms)*

projection is a Kubernetes operator that mirrors any Kubernetes object — ConfigMap, Secret, Service, your custom resources — from a source location to a destination, declaratively, per resource. Each Projection CR is its own first-class object with status conditions, events, and a metric you can alert on. Edits to the source propagate to the destination in roughly 100 milliseconds.

It exists because every team eventually rebuilds this with a one-off controller or a Kyverno generate policy, and neither approach is the right shape. projection is meant to be the answer when somebody asks "how do you mirror a Secret across namespaces in this cluster?"

## Why projection

|                                                      | projection              | emberstack/Reflector          | Kyverno generate        |
|------------------------------------------------------|-------------------------|-------------------------------|-------------------------|
| Works on any Kind                                    | ✓                       | ✗ (ConfigMap & Secret only)   | ✓                       |
| Source-of-truth lives in a CR you can `kubectl get`  | ✓ (Projection)          | ✗ (annotations on the source) | ✗ (cluster-wide policy) |
| Per-resource status + Kubernetes Events              | ✓                       | ✗                             | partial                 |
| Conflict-safe (refuses to overwrite unowned objects) | ✓                       | ✗                             | ✗                       |
| Watch-driven propagation (~100ms)                    | ✓                       | ✓                             | ✗                       |
| Admission-time validation of source fields           | ✓                       | n/a                           | ✗                       |
| Prometheus metrics per reconcile outcome             | ✓                       | ✗                             | partial                 |
| Footprint                                            | one CRD, one Deployment | one CRD, one Deployment       | full policy engine      |

For the longer comparison — including the cases where Reflector or Kyverno is the better choice — see docs/comparison.md.

## 60-second demo

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: platform
  annotations:
    # Source opts in to projection (default source-mode is "allowlist").
    # Set to "false" to veto projection as the source owner.
    projection.sh/projectable: "true"
data:
  log_level: info
---
apiVersion: projection.sh/v1
kind: Projection
metadata:
  name: app-config-into-tenants
  namespace: platform
spec:
  source:
    apiVersion: v1
    kind: ConfigMap
    name: app-config
    namespace: platform
  destination:
    namespace: tenant-a
  overlay:
    labels:
      projected-by: projection
```

```console
$ kubectl get projections -A
NAMESPACE   NAME                      KIND        SOURCE-NAMESPACE   SOURCE-NAME   DESTINATION   READY   AGE
platform    app-config-into-tenants   ConfigMap   platform           app-config    app-config    True    2s

$ kubectl get configmap -n tenant-a app-config -o jsonpath='{.metadata.annotations.projection\.sh/owned-by}'
platform/app-config-into-tenants
```

Edit the source — destination updates within ~100ms. Delete the Projection — destination is removed (only if projection still owns it). Pre-existing object at the destination? Ready=False reason=DestinationConflict. We don't overwrite strangers.
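In the conflict case, the status conditions surface the refusal; roughly like this (illustrative values; the exact message wording is the controller's):

```yaml
status:
  conditions:
    - type: Ready
      status: "False"
      reason: DestinationConflict
      message: destination ConfigMap tenant-a/app-config exists and is not owned by this Projection
```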

## Features

- **Any Kind** — RESTMapper-driven GVR resolution. Works on built-in resources, your CRDs, anything the apiserver knows about. Source `apiVersion` accepts both pinned forms (`apps/v1`) and the unpinned `apps/*` form, which follows the cluster's preferred served version.
- **Watch-driven** — dynamic informer registration per source GVK. Edits propagate in ~100ms; no periodic polling.
- **Selector-based fan-out** — one Projection can mirror its source into every namespace matching a `namespaceSelector`, with destinations added and removed as namespaces gain or lose the matching label. Bounded fan-out concurrency keeps the apiserver healthy at scale.
- **Source-owner consent** — the default `sourceMode=allowlist` requires sources to carry `projection.sh/projectable="true"`. Source owners can also veto with `="false"` regardless of mode.
- **Conflict-safe** — an ownership annotation marks our destinations. We refuse to overwrite objects we don't own and report `DestinationConflict` on status. Source deletion (404) automatically cleans up every owned destination.
- **Clean deletion** — a finalizer removes destinations on Projection deletion (sweeping all namespaces for selector-based fan-out). If ownership has been stripped, we leave the object alone.
- **Observable** — three status conditions (`SourceResolved`, `DestinationWritten`, `Ready`), `events.k8s.io/v1` Events with action verbs (Create/Update/Delete/Get/Validate/Resolve/Write), and Prometheus metrics (`projection_reconcile_total{result}`, `projection_watched_gvks`).
- **Validated at admission** — source fields are pattern-validated (DNS-1123 names, PascalCase Kinds) so typos fail at `kubectl apply`, not at runtime. CEL enforces `destination.namespace` / `destination.namespaceSelector` mutual exclusion.
- **Smart copy** — strips server-owned metadata, drops `.status`, removes `kubectl.kubernetes.io/last-applied-configuration`, strips Kind-specific apiserver-allocated spec fields (Service `clusterIP`/`clusterIPs`, PVC `volumeName`, Pod `nodeName`, Job `selector` + `controller-uid` labels), and preserves them on update.
- **Production-grade Helm chart** — opt-in ServiceMonitor, NetworkPolicy (egress lockdown), and PodDisruptionBudget templates. Operational tuning via `requeueInterval` and `leaderElection.leaseDuration`. RBAC scope narrowable via `supportedKinds`.
- **Small** — one CRD, one Deployment, one container. Distroless image, multi-arch (amd64, arm64).
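The selector-based fan-out might look like this sketch, assuming tenant namespaces carry a hypothetical `tenant: "true"` label and that `namespaceSelector` takes the standard label-selector shape:

```yaml
apiVersion: projection.sh/v1
kind: Projection
metadata:
  name: app-config-into-all-tenants
  namespace: platform
spec:
  source:
    apiVersion: v1
    kind: ConfigMap
    name: app-config
    namespace: platform
  destination:
    # Mutually exclusive with destination.namespace: mirror into every
    # namespace carrying this label, adding and removing destinations as
    # namespaces gain or lose it.
    namespaceSelector:
      matchLabels:
        tenant: "true"
  overlay:
    labels:
      projected-by: projection
```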

## Quick start

### Helm

```shell
helm install projection oci://ghcr.io/projection-operator/charts/projection \
  --version 0.2.0 \
  --namespace projection-system --create-namespace
```

### kubectl apply

```shell
kubectl apply -f https://github.com/projection-operator/projection/releases/download/v0.2.0/install.yaml
```

Then create your first Projection:

```shell
kubectl apply -f https://raw.githubusercontent.com/projection-operator/projection/main/examples/configmap-cross-namespace.yaml
kubectl get projections -A
```

## How it works

When you create a Projection, the controller resolves the source GVR via the RESTMapper, fetches the source object via the dynamic client, builds a sanitized destination object (overlay applied, ownership annotation stamped, server-owned metadata stripped), and creates or updates the destination — but only if projection already owns it. The first reconcile also registers a metadata-only watch on the source's GVK, so future edits to any source of that Kind enqueue the relevant Projections via a field-indexed lookup. Updates that wouldn't change the destination are skipped to avoid noisy events and metric churn.
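The sanitizing step can be sketched in Python-like pseudologic. This is illustrative only (the real controller is Go operating on unstructured objects); the fields stripped follow the "smart copy" description, and `projection.sh/owned-by` is the annotation shown in the demo:

```python
import copy

# Metadata keys the apiserver owns; never copy these to the destination.
SERVER_OWNED_METADATA = {
    "uid", "resourceVersion", "generation", "creationTimestamp",
    "managedFields", "selfLink",
}
LAST_APPLIED = "kubectl.kubernetes.io/last-applied-configuration"

def sanitize(source_obj, dest_namespace, projection_ref, overlay_labels=None):
    """Build a destination object from a source object (as a plain dict)."""
    dest = copy.deepcopy(source_obj)
    dest.pop("status", None)                      # .status is server-populated

    meta = dest.setdefault("metadata", {})
    for key in SERVER_OWNED_METADATA:
        meta.pop(key, None)
    meta["namespace"] = dest_namespace

    annotations = meta.setdefault("annotations", {})
    annotations.pop(LAST_APPLIED, None)           # drop kubectl bookkeeping
    annotations["projection.sh/owned-by"] = projection_ref  # ownership stamp

    if overlay_labels:                            # apply the overlay, if any
        meta.setdefault("labels", {}).update(overlay_labels)
    return dest
```

The Kind-specific spec stripping (Service `clusterIP`, PVC `volumeName`, and so on) would sit alongside this as a per-GVK table.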

See docs/concepts.md for the full picture, docs/observability.md for status/events/metrics, and docs/comparison.md for the deep comparison vs Reflector and Kyverno.

## Use cases

- **Secrets across namespaces** — distribute a TLS cert from cert-manager to multiple application namespaces without manual `kubectl create`.
- **Shared config distribution** — one ConfigMap in `platform`, mirrored into each tenant namespace with overlay labels for tenant tagging.
- **Service mirroring** — expose a backend Service from one namespace into another without a manual ExternalName dance.
- **CR replication** — mirror an Issuer, a KafkaTopic, or any custom resource between namespaces in the same cluster.
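The first use case, sketched as a Projection (all names hypothetical; `wildcard-tls` stands in for the Secret a cert-manager Certificate writes):

```yaml
apiVersion: projection.sh/v1
kind: Projection
metadata:
  name: wildcard-tls-into-app
  namespace: cert-manager
spec:
  source:
    apiVersion: v1
    kind: Secret
    name: wildcard-tls        # Secret written by a cert-manager Certificate
    namespace: cert-manager
  destination:
    namespace: app-prod
```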

## Limitations

- **Same-cluster only.** Cross-cluster mirroring is a non-goal for v0.
- **Cluster-scoped Kinds rejected.** Projection only mirrors namespaced resources. Pointing at a Namespace, ClusterRole, or StorageClass surfaces `SourceResolved=False` `reason=SourceResolutionFailed` with a clear message.
- **Selector fan-out shares one overlay.** All destinations in a `namespaceSelector` Projection get the same overlay; per-destination overlays require separate Projections.
- **A few Kinds need extra care.** Service, PersistentVolumeClaim, Pod, and Job have apiserver-allocated spec fields handled out of the box. Jobs created with `spec.manualSelector: true` are not supported. Other Kinds with similar fields (rare) may need an addition to `droppedSpecFieldsByGVK` — see limitations.
- **Pre-1.0.** API stability commitments (which fields will not change, how breaking changes are handled) are documented in docs/api-stability.md. CRD storage version is `v1`; future versions will be served alongside with conversion.

## Documentation

Concepts, observability, the Reflector/Kyverno comparison, and API stability notes live under docs/: see docs/concepts.md, docs/observability.md, docs/comparison.md, and docs/api-stability.md.

## Contributing

Pull requests welcome. See CONTRIBUTING.md. Be excellent to each other — see CODE_OF_CONDUCT.md.

## Security

Found a vulnerability? Please report it privately via GitHub Security Advisories. See SECURITY.md.

## License

Apache 2.0. See LICENSE.
