A simple static analysis tool to explore a Kubernetes cluster.
The left part of the screen contains the controls for the main view:
- View: choose your view (workload or network policies)
- Filters: filter pods by namespace, labels and name
- Include ingress neighbors: display pods that can reach those in the current selection
- Include egress neighbors: display pods that can be reached by those in the current selection
- Auto refresh: refresh the view every 5 seconds
- Auto zoom: zoom automatically to fit all elements in the screen
- Show namespace prefix: include the namespace in pod names
- Highlight non-isolated pods (ingress/egress): color pods with no ingress/egress network policy
- Always display large datasets: always try to display large sets of pods and routes (may slow down your browser)
The main view shows the graph of pods and allowed routes in your selection:
- Zoom in and out by scrolling
- Drag and drop graph elements to draw the perfect map of your cluster
- Hover over any graph element to display details: name, namespace, labels, isolation (ingress/egress)... and more!
In the top left part of the screen you will find action buttons to:
- Export the current graph as PNG to use it in slides or share it
- Go fullscreen and use Karto as an office (or situation room!) dashboard
There are two ways to install and run Karto:
- To deploy it inside the Kubernetes cluster to analyze, proceed to the Run inside a cluster section.
- To run it on any machine outside the Kubernetes cluster to analyze, refer to the Run outside a cluster section.
Simply apply the provided descriptor:
kubectl apply -f deploy/k8s.yml
This will:
- create a karto namespace
- create a karto service account with a role allowing it to watch the resources displayed by Karto (namespaces, pods, network policies, services, deployments...)
- deploy an instance of the application in this namespace with this service account
Once deployed, the application must be exposed. For a quick try, use port-forward:
kubectl -n karto port-forward <pod name> 8000:8000
This will expose the app on your local machine on localhost:8000.
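The pod name can be looked up with kubectl first. A minimal sketch, assuming the karto namespace contains only the Karto pod:

```shell
# Grab the name of the first pod in the karto namespace
# (assumes Karto is the only workload deployed there)
POD=$(kubectl -n karto get pods -o jsonpath='{.items[0].metadata.name}')
kubectl -n karto port-forward "$POD" 8000:8000
```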
For a long-term solution, investigate the use of a LoadBalancer service or an Ingress.
Remember to always secure access to the application, as it obviously displays sensitive data about your cluster.
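As one possible long-term setup, a LoadBalancer service can be created imperatively. This is a sketch only: it assumes the deployment is named karto and serves on port 8000 (check deploy/k8s.yml for the actual resource names), and it does not add any authentication in front of the app.

```shell
# Expose the Karto deployment through a cloud load balancer
# (assumed names: deployment "karto" in namespace "karto", port 8000)
kubectl -n karto expose deployment karto --type=LoadBalancer --port=8000

# Retrieve the external IP once the cloud provider has provisioned it
kubectl -n karto get service karto
```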
Delete everything using the same descriptor:
kubectl delete -f deploy/k8s.yml
For this to work, a local kubeconfig file with existing connection information to the target cluster must be present on the machine (if you already use kubectl locally, you are good to go!).
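A quick way to verify that a usable kubeconfig is in place before launching Karto:

```shell
# Prints the context the local kubeconfig currently points to;
# fails if no valid connection information is available
kubectl config current-context

# Confirms the cluster is actually reachable with that context
kubectl cluster-info
```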
Simply download the Karto binary from the releases page and run it!
The following tools must be available locally: npm (to build and run the frontend) and go (to build and run the backend).
In the front directory, execute:
npm start
This will expose the app in dev mode on localhost:3000, with a proxy to localhost:8000 for the API calls.
In the back directory, execute:
go build karto
./karto
To run the entire backend test suite, execute in the back directory:
go test ./...
In production mode, the frontend is packaged into the go binary using pkger. In this configuration, the frontend is served on the / route and the API on the /api route.
To compile the Karto binary from source, first compile the frontend source code. In the front directory, execute:
npm run build
This will generate a build directory in front.
Then, package it inside the backend:
cp -R front/build back/frontendBuild
go install github.com/markbates/pkger/cmd/pkger
pkger
This will generate a pkged.go file in back with binary content equivalent to the generated build directory.
Finally, compile the go binary:
go build karto
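If the binary needs to run on a platform other than the build machine, the standard Go cross-compilation variables apply. For instance, to target a Linux amd64 server:

```shell
# Standard go toolchain cross-compilation: produce a Linux amd64
# binary regardless of the OS/architecture of the build machine
GOOS=linux GOARCH=amd64 go build karto
```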