
kube-mgmt

kube-mgmt manages the policies and data of Open Policy Agent (OPA) instances running in Kubernetes.

Use kube-mgmt to:

  • Load policies and/or static data into OPA instances from ConfigMaps.
  • Replicate Kubernetes resources, including Custom Resource Definitions (CRDs), into OPA instances.

Deployment Guide

Both OPA and kube-mgmt can be installed using the opa-kube-mgmt Helm chart.

Follow its README to install it into a Kubernetes cluster.
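
As a sketch, installation typically looks like the following (the repository URL and release name here are illustrative; use the ones given in the chart's README):

```shell
# Add the chart repository (URL is an assumption; see the chart README for the canonical one)
helm repo add opa-kube-mgmt https://open-policy-agent.github.io/kube-mgmt/charts
helm repo update

# Install OPA + kube-mgmt into the "opa" namespace
helm install opa opa-kube-mgmt/opa-kube-mgmt --namespace opa --create-namespace
```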

Policies and data loading

kube-mgmt automatically discovers policies and JSON data stored in ConfigMaps in Kubernetes and loads them into OPA.

kube-mgmt assumes a ConfigMap contains policy or JSON data if the ConfigMap is:

  • Created in a namespace listed in the --namespaces option. If you specify --namespaces=* then kube-mgmt will look for policies in ALL namespaces.
  • Labelled with openpolicyagent.org/policy=rego for policies.
  • Labelled with openpolicyagent.org/data=opa for JSON data.

Policies or data discovery and loading can be disabled using --enable-policy=false or --enable-data=false flags respectively.

Label names and their values can be configured using --policy-label, --policy-value, --data-label, --data-value CLI options.
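
For illustration, a ConfigMap carrying a Rego policy might look like the following (this assumes the default policy label; the name, namespace, package, and rule are all hypothetical):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: hello-policy
  namespace: opa
  labels:
    openpolicyagent.org/policy: rego   # default policy label; configurable via --policy-label/--policy-value
data:
  main.rego: |
    package example

    default allow = false
```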

When a ConfigMap has been successfully loaded into OPA, kube-mgmt sets a status annotation on it with the value {"status": "ok"}.

If loading fails for some reason (e.g., because of a parse error), the annotation is set to {"status": "error", "error": ...}, where the error field contains details about the failure.

Data loaded out of ConfigMaps is laid out as follows:

data.<namespace>["<configmap-name>"]["<key>"]
For example, if the following ConfigMap was created:

kind: ConfigMap
apiVersion: v1
metadata:
  name: hello-data
  namespace: opa
  labels:
    openpolicyagent.org/data: opa   # default data label; configurable via --data-label/--data-value
data:
  x.json: |
    {"a": [1,2,3,4]}

Note: "x.json" may be any key.

You could refer to the data inside your policies as follows:

data.opa["hello-data"]["x.json"].a[0]  # evaluates to 1
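As a sketch, a policy could consume this data like so (the package name and rule are hypothetical):

```rego
package example

# Allow when the data loaded from the hello-data ConfigMap contains the value 3.
allow {
  data.opa["hello-data"]["x.json"].a[_] == 3
}
```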

K8s resource replication


K8s resource replication requires cluster-wide permissions, granted via a ClusterRole and ClusterRoleBinding.

kube-mgmt can be configured to replicate Kubernetes resources into OPA so that you can express policies over an eventually consistent cache of Kubernetes state.

Replication is enabled with the following options:

# Replicate namespace-level resources. May be specified multiple times.
--replicate=<[group/]version/resource>

# Replicate cluster-level resources. May be specified multiple times.
--replicate-cluster=<[group/]version/resource>

By default resources are replicated from all namespaces. Use --replicate-ignore-namespaces option to exclude particular namespaces from replication.

Kubernetes resources replicated into OPA are laid out as follows:

<replicate-path>/<resource>/<namespace>/<name> # namespace scoped
<replicate-path>/<resource>/<name>             # cluster scoped
  • <replicate-path> is configurable (via --replicate-path) and defaults to kubernetes.
  • <resource> is the Kubernetes resource plural, e.g., nodes, pods, services, etc.
  • <namespace> is the namespace of the Kubernetes resource.
  • <name> is the name of the Kubernetes resource.

For example, to search for services with the label "foo" you could write:

some namespace, name
service := data.kubernetes.services[namespace][name]
service.metadata.labels["foo"]

An alternative way to visualize the layout is as a single JSON document:

{
  "kubernetes": {
    "services": {
      "default": {
        "example-service": {...},
        "another-service": {...}
      }
    }
  }
}
The example below would replicate Deployments, Services, and Nodes into OPA:

--replicate=apps/v1/deployments
--replicate=v1/services
--replicate-cluster=v1/nodes

Custom Resource Definitions can also be replicated using the same --replicate and --replicate-cluster options.
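
For example, a hypothetical widgets custom resource in the example.com API group could be replicated with (the group, version, and resource name here are made up):

```shell
--replicate=example.com/v1/widgets
```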

Admission Control

To get started with admission control policy enforcement in Kubernetes 1.9 or later see the Kubernetes Admission Control tutorial. For older versions of Kubernetes, see Admission Control (1.7).

In the Kubernetes Admission Control tutorial, OPA is NOT running with an authorization policy configured and hence clients can read and write policies in OPA. When deploying OPA in an insecure environment, it is recommended to configure authentication and authorization on the OPA daemon. For an example of how OPA can be securely deployed as an admission controller see Admission Control Secure.

OPA API Endpoints and Least-privilege Configuration

kube-mgmt is a privileged component that can load policy and data into OPA. Other clients connecting to the OPA API only need to query for policy decisions.

To load policy and data into OPA, kube-mgmt uses the following OPA API endpoints:

  • PUT /v1/policies/<id> - upserting policies
  • DELETE /v1/policies/<id> - deleting policies
  • PUT /v1/data/<path> - upserting data
  • PATCH /v1/data/<path> - updating and removing data
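
As an illustration, a data upsert against a local OPA looks like the following (the address and path are assumptions):

```shell
# Upsert JSON data at data.opa.example (assumes OPA listening on localhost:8181)
curl -X PUT http://localhost:8181/v1/data/opa/example \
  -H 'Content-Type: application/json' \
  -d '{"a": [1, 2, 3, 4]}'
```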

Many users configure OPA with a simple API authorization policy that restricts access to the OPA APIs:

package system.authz

# Deny access by default.
default allow = false

# Allow anonymous access to the decision `data.example.response`.
# NOTE: the specific decision differs depending on your policies.
# NOTE: depending on how callers are configured, they may only require this or the default decision below.
allow {
  input.path == ["v0", "data", "example", "response"]
  input.method == "POST"
}

# Allow anonymous access to the default decision.
allow {
  input.path == [""]
  input.method == "POST"
}

# This is only used for the health check in liveness and readiness probes.
allow {
  input.path == ["health"]
  input.method == "GET"
}

# This is only used for Prometheus metrics.
allow {
  input.path == ["metrics"]
  input.method == "GET"
}

# This is used by kube-mgmt to PUT/PATCH against /v1/data and PUT/DELETE against /v1/policies.
# NOTE: the $TOKEN value is replaced at deploy time with the actual value that kube-mgmt
# will use. This is typically done by an initContainer.
allow {
  input.identity == "$TOKEN"
}
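
To enforce such a policy, OPA must be started with authentication and authorization enabled; a minimal sketch (the policy file name is illustrative):

```shell
# Require bearer-token authentication and evaluate system.authz for every API call
opa run --server \
  --authentication=token \
  --authorization=basic \
  authz.rego
```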


Required software

  • Go language toolchain.
  • just - generic command runner.
  • skaffold - build and publish Docker images and more; v2.x or above is required.
  • helm - package manager for k8s.
  • k3d - local k8s cluster with a Docker registry.

This project uses just for building, testing, and running kube-mgmt locally. It is configured from the justfile in the root directory. All available recipes can be inspected by running just without arguments.


To release a new version, create a GitHub release with a tag name that follows the semantic versioning convention.

As soon as the tag is pushed, the CI pipeline builds and publishes the artifacts: Docker images for the supported architectures and the Helm chart.