GateKeeper


GateKeeper is a Kubernetes Operator for installing, configuring and managing Open Policy Agent to provide dynamic admission controllers in a cluster.

Getting Started

The recommended way to configure GateKeeper is to use Replicated Ship:

brew tap replicatedhq/ship
brew install ship
ship init https://github.com/replicatedhq/gatekeeper/tree/master/docs/gatekeeper-k8s

Ship will download the Kubernetes manifests included to run GateKeeper and give you an opportunity to review them. You can create patches and overlays to make any changes necessary for your environment. Once finished, follow the instructions in Ship and kubectl apply -f rendered.yaml.

You can then use ship watch && ship update to watch for updates as they are shipped here and apply them.

For more information on the components, and other methods to install GateKeeper, read the docs.

Deploying Policies

After installing GateKeeper in a cluster, a policy can be deployed using kubectl apply -f ./config/samples/policies_v1alpha2_admissionpolicy.yaml. (This is a sample policy that prevents any pod from using images tagged :latest.) When the policy is applied, if OPA is running in the same namespace, the controller will deploy the policy from the YAML to that OPA instance. If OPA is not found, the controller will provision a new OPA instance and deploy the policy to it once it's ready.
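For reference, a policy that blocks :latest image tags, written in OPA's common Kubernetes admission style, might look roughly like the sketch below. This is illustrative only; the actual Rego and the AdmissionPolicy resource wrapping it are in the sample file above, and the package name and field names here are assumptions.

```rego
package kubernetes.admission

# Deny any pod that references a container image tagged :latest.
deny[msg] {
    input.request.kind.kind == "Pod"
    image := input.request.object.spec.containers[_].image
    endswith(image, ":latest")
    msg := sprintf("image %v uses the :latest tag", [image])
}
```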

This handles the TLS configuration, webhook configuration, and all underlying Kubernetes resources that are required to create a dynamic admission controller.

GateKeeper CLI

View Current Policies

$ gatekeeper status
POLICY NAME        FAILURE POLICY     LAST_ALLOWED      LAST_BLOCKED          ALLOWED       BLOCKED
latest             Ignore             An hour ago       Never                 1023          0
helm               Fail               Just now          A day ago             1056          8

Motivations

The Open Policy Agent (OPA) project is ambitious, and it does much more than just provide Kubernetes admission controllers.

Simplify the task of installing and configuring OPA in Kubernetes.

Installing OPA into a Kubernetes cluster is more complex than installing many applications. The recommended installation includes creating a new certificate authority (CA) and then creating a cert signed by that CA. This TLS configuration must be deployed and referenced in the openpolicyagent deployment, and the CA bundle must also be manually copied into the webhook configuration. Managing this through automation can be difficult and error prone. The GateKeeper operator manages this in-cluster, so the keys never have to be transferred into the cluster from outside, and the CA and certs are properly configured every time.
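The manual steps the operator automates can be sketched with openssl. Names such as gatekeeper-ca and the opa.default.svc service DNS name below are illustrative assumptions, not values GateKeeper uses:

```shell
# 1. Create a certificate authority (illustrative CN).
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=gatekeeper-ca" -days 365 -out ca.crt

# 2. Create a server key and CSR for the OPA service (illustrative DNS name).
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=opa.default.svc" -out server.csr

# 3. Sign the server cert with the CA.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out server.crt

# 4. The base64-encoded CA cert is what must be copied into the
#    webhook configuration's caBundle field.
base64 < ca.crt | tr -d '\n' > caBundle.txt
```

The operator performs the equivalent work inside the cluster and keeps the webhook configuration's CA bundle consistent with the deployed certs.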

Dynamic admission controllers in Kubernetes are powerful, but can also be difficult to troubleshoot and configure. A goal of the GateKeeper operator is to make it easier to roll out new admission policies, with as little risk as possible.

Provide a custom resource to manage policy files (.rego) instead of using ConfigMaps

This allows for easier listing and management of individual policies. Instead of using the existing ConfigMap and in-cluster sync, the GateKeeper operator introduces a new type named admissionpolicies.policies.replicated.com. This makes it easy to just kubectl get admissionpolicies.policies.replicated.com and view all dynamic admission policies installed in the cluster.

Validation of policies before deployment

One future goal of GateKeeper is to validate new policies, and changes to existing policies, before deploying them. This includes compiling the policy and backtesting it against previously received requests to ensure that the policy will have the expected effect.

Contributing

Fork and clone this repo; you can then run it locally against a Kubernetes cluster:

make install  # this will install the CRDs to your cluster
skaffold dev  # this will start the manager and controllers in your cluster, and watch for file changes and redeploy