Gatekeeper Policy Manager is a simple read-only web UI for viewing OPA Gatekeeper policies' status in a Kubernetes Cluster.
The target Kubernetes cluster can be the same one where GPM is running, or one or more remote clusters accessed through a
kubeconfig file. You can also run GPM locally on a client machine and connect to a remote cluster.
GPM can display all the defined Constraint Templates with their Rego code, all the Gatekeeper Configuration CRDs, and all the Constraints with their current status: violations, enforcement action, match definitions, etc.
You'll need OPA Gatekeeper running in your cluster and at least some constraint templates and constraints defined to take advantage of this tool.
Deploy using Kustomize
To deploy Gatekeeper Policy Manager to your cluster, apply the provided
kustomization file running the following command:
kubectl apply -k .
By default, this will create a deployment and a service, both named
gatekeeper-policy-manager, in the
gatekeeper-system namespace. We invite you to take a look at the
kustomization.yaml file for further configuration options.
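For example, a minimal Kustomize overlay that pins the image tag while reusing the provided manifests as a base could look like the following sketch. The base path is illustrative and should point at the directory containing GPM's kustomization.yaml:

```yaml
# kustomization.yaml: illustrative overlay, adjust the base path to your layout
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: gatekeeper-system

resources:
  # Hypothetical relative path to the directory with GPM's provided kustomization.yaml
  - ../gatekeeper-policy-manager

images:
  # Pin the GPM image to a specific release
  - name: quay.io/sighup/gatekeeper-policy-manager
    newTag: v1.0.0
```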
The app can run as a Pod in a Kubernetes cluster or locally with a
kubeconfig file. It will try its best to autodetect the correct configuration.
Once you've deployed the application, if you haven't set up an ingress, you can access the web UI using port-forward:
kubectl -n gatekeeper-system port-forward svc/gatekeeper-policy-manager 8080:80
Then access it with your browser on: http://127.0.0.1:8080
Deploy using Helm
It is also possible to deploy GPM using the provided Helm Chart.
```bash
helm repo add gpm https://sighupio.github.io/gatekeeper-policy-manager
helm upgrade --install --namespace gatekeeper-system --set image.tag=v1.0.0 --values my-values.yaml gpm/gatekeeper-policy-manager
```
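The my-values.yaml file referenced above holds your overrides of the chart's defaults. A minimal sketch could look like this; apart from image.tag (used by the command above), the keys are common Helm conventions and may differ in the actual chart, so verify them against the chart's values.yaml:

```yaml
# my-values.yaml: illustrative overrides, verify key names against the chart's values.yaml
image:
  tag: v1.0.0  # same effect as the --set flag in the command above

replicaCount: 2  # assumed key name; run more than one GPM replica
```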
GPM can also be run locally using Docker and a
kubeconfig. Assuming that the
kubeconfig file you want to use is located at
~/.kube/config, the command to run GPM locally would be:
docker run -v ~/.kube/config:/home/gpm/.kube/config -p 8080:8080 quay.io/sighup/gatekeeper-policy-manager:v1.0.0
Then access it with your browser on: http://127.0.0.1:8080
You can also run the Flask app directly; see the development section for further information.
GPM is a stateless application, but it can be configured using environment variables. The possible configurations are:
| Env Var Name | Description | Default |
|--------------|-------------|---------|
| `GPM_AUTH_ENABLED` | Enable authentication. Current options: "Anonymous", "OIDC" | Anonymous |
| `GPM_SECRET_KEY` | The secret key used to generate tokens. Change this value in production. | |
| `GPM_PREFERRED_URL_SCHEME` | URL scheme to be used while generating links. | |
| `GPM_OIDC_REDIRECT_DOMAIN` | The server name under which the app is being exposed. This is where the client will be redirected after authenticating. | |
| `GPM_OIDC_ISSUER` | OIDC Issuer hostname | |
| `GPM_OIDC_AUTHORIZATION_ENDPOINT` | OIDC Authorization Endpoint | |
| `GPM_OIDC_JWKS_URI` | OIDC JWKS URI | |
| `GPM_OIDC_TOKEN_ENDPOINT` | OIDC Token Endpoint | |
| `GPM_OIDC_INTROSPECTION_ENDPOINT` | OIDC Introspection Endpoint | |
| `GPM_OIDC_USERINFO_ENDPOINT` | OIDC Userinfo Endpoint | |
| `GPM_OIDC_END_SESSION_ENDPOINT` | OIDC End Session Endpoint | |
| `GPM_OIDC_CLIENT_ID` | The Client ID used to authenticate against the OIDC Provider | |
| `GPM_OIDC_CLIENT_SECRET` | The Client Secret used to authenticate against the OIDC Provider | |
| `GPM_LOG_LEVEL` | Log level (see Python logging docs for available levels) | |
| `KUBECONFIG` | Path to a kubeconfig file. If provided while running inside a cluster, this configuration file will be used instead of the cluster's API. | |
⚠️ Please note that OIDC authentication is in beta state. It has been tested to work with Keycloak as a provider.
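As an example, enabling OIDC could look like the following fragment of the GPM container spec. The variable names come from the table above; all values are placeholders for your provider:

```yaml
# Fragment of the GPM container definition: OIDC settings (all values are placeholders)
env:
  - name: GPM_AUTH_ENABLED
    value: "OIDC"
  - name: GPM_OIDC_ISSUER
    value: "https://keycloak.example.com/auth/realms/example"
  - name: GPM_OIDC_REDIRECT_DOMAIN
    value: "https://gpm.example.com"
  - name: GPM_OIDC_CLIENT_ID
    value: "gpm"
  - name: GPM_OIDC_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: gpm-oidc        # hypothetical Secret holding the client secret
        key: client-secret
```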
These environment variables are already present and ready to be set in the provided deployment manifests.
Multi-cluster support
Since version v1.0.0, GPM has basic multi-cluster support when using a
kubeconfig with more than one context. GPM will let you choose the context right from the UI.
If you want to run GPM in a cluster but with multi-cluster support, it's as easy as mounting a
kubeconfig file with the cluster access configuration in GPM's pod(s) and setting the environment variable
KUBECONFIG to the path of the mounted
kubeconfig file. Or you can simply mount it at
/home/gpm/.kube/config and GPM will detect it automatically.
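A sketch of what that could look like in the Pod template of GPM's Deployment, assuming the kubeconfig is stored in a Secret named gpm-kubeconfig (the Secret name and key are hypothetical):

```yaml
# Fragment of the Pod template: mount a kubeconfig from a (hypothetical) Secret
spec:
  containers:
    - name: gatekeeper-policy-manager
      env:
        - name: KUBECONFIG
          value: /home/gpm/.kube/config
      volumeMounts:
        - name: kubeconfig
          mountPath: /home/gpm/.kube
          readOnly: true
  volumes:
    - name: kubeconfig
      secret:
        secretName: gpm-kubeconfig  # hypothetical Secret with a "config" key
        items:
          - key: config
            path: config
```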
Please remember that the user for the clusters should have the right permissions. You can use the
manifests/rbac.yaml file as a reference.
Also note that the cluster where GPM is running should be able to reach the other clusters.
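For reference, the permissions in question amount to read-only access to Gatekeeper's resources in each target cluster. The following ClusterRole is only a sketch (the API group list is an assumption based on Gatekeeper's CRDs); treat the manifests/rbac.yaml file as the authoritative version:

```yaml
# Illustrative read-only ClusterRole; see manifests/rbac.yaml for the real one
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gatekeeper-policy-manager-read
rules:
  - apiGroups:
      - constraints.gatekeeper.sh
      - templates.gatekeeper.sh
      - config.gatekeeper.sh
    resources: ["*"]
    verbs: ["get", "list"]
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list"]
```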
When you run GPM locally, you are already using a
kubeconfig file to connect to the clusters, so you should see all your defined contexts and be able to switch between them easily from the UI.
AWS IAM Authentication
If you want to use a Kubeconfig with IAM Authentication, you'll need to customize GPM's container image because the IAM authentication uses external AWS binaries that are not included by default in the image.
You can customize the container image with a Dockerfile like the following:
```Dockerfile
FROM curlimages/curl:7.81.0 as downloader
RUN curl -L https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.5.5/aws-iam-authenticator_0.5.5_linux_amd64 --output /tmp/aws-iam-authenticator
RUN chmod +x /tmp/aws-iam-authenticator

FROM quay.io/sighup/gatekeeper-policy-manager:v1.0.0
COPY --from=downloader --chown=root:root /tmp/aws-iam-authenticator /usr/local/bin/
```
You may also need to add the
aws CLI; you can use the same approach as above.
Make sure that your
kubeconfig has the
apiVersion of the exec credential plugin set to a version supported by your authenticator binary.
You can read more in this issue.
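For instance, a kubeconfig user entry that calls the authenticator through the exec plugin might look like this sketch; the cluster name is a placeholder, and you should verify the apiVersion against the aws-iam-authenticator and Kubernetes versions you run:

```yaml
# Illustrative kubeconfig user entry for aws-iam-authenticator (names are placeholders)
users:
  - name: my-eks-cluster
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws-iam-authenticator
        args:
          - token
          - -i
          - my-eks-cluster
```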
Development
GPM is written in Python using the Flask framework for the backend and React with the Fury Design System for the frontend. To develop GPM, you'll need to create a Python 3 virtual environment, install all the dependencies specified in the provided
requirements.txt, and you are good to start hacking.
The following commands should get you up and running:
```bash
# Download static frontend dependencies with YARN
$ pushd app/static
$ yarn install
$ mkdir -p webapp
$ cp -r node_modules ./webapp/node_modules
$ popd

# Build frontend and copy over to static folder
$ pushd app/web-client
$ yarn install && yarn build
$ cp -r build/ ../static/webapp
$ popd

# Create a virtualenv
$ python3 -m venv env
# Activate it
$ source ./env/bin/activate

# Install all the dependencies
$ pip install -r app/requirements-dev.txt

# Run the development server
$ FLASK_APP=app/app.py flask run
```
Access to a Kubernetes cluster with Gatekeeper deployed is recommended to debug the application.
You'll need an OIDC provider to test the OIDC authentication. You can use our fury-kubernetes-keycloak module.
The following is a wishlist of features that we would like to add to GPM (in no particular order):
- List the constraints that are currently using a given ConstraintTemplate
- Polished OIDC authentication
- LDAP authentication
- Better syntax highlighting for the rego code snippets
- Root-less docker image
- Multi-cluster view
- Minimal write capabilities?
- Rewrite app in Golang?
Please let us know if you are using GPM and which features you would like to see by creating an issue here on GitHub 💪🏻