tetragon: docs, copy Cilium style k8s install #1561

Merged 25 commits on Oct 18, 2023
Commits
4b68f8b
tetragon: move Deployment into Advance Configuration
jrfastab Oct 6, 2023
c4f0735
tetragon: docs, remove caution note from try tetragon
jrfastab Oct 6, 2023
9794105
tetragon: Create a dedicated Developer section
jrfastab Oct 6, 2023
094957d
tetragon: docs, move events into concepts
jrfastab Oct 6, 2023
d40eab1
tetragon: docs, there should be a metrics section
jrfastab Oct 6, 2023
06a0a8b
tetragon: docs, namespace, pod filtering ia policy detail
jrfastab Oct 6, 2023
f880ddc
hubble-fgs: Create an Installation section
jrfastab Oct 6, 2023
4332fed
tetragon: docs, swap reference and contribution guide
jrfastab Oct 6, 2023
033ab7f
tetragon: install tetra cli move into installation
jrfastab Oct 6, 2023
3a5f8bb
tetragon: docs, add enforcment to concepts
jrfastab Oct 6, 2023
9a94dfc
tetragon: benchmarks section
jrfastab Oct 6, 2023
3999c08
tetragon: docs, simplify getting started guide
jrfastab Oct 6, 2023
3d6d127
tetragon: docs, fixes from Mahe
jrfastab Oct 11, 2023
b4f7a54
tetragon: docs updates for network example
jrfastab Oct 11, 2023
4c935dc
tetragon: docs, use service ip cidr instead of list of ips
jrfastab Oct 12, 2023
bfbce50
tetragon: add pop up for JSON block
jrfastab Oct 12, 2023
afa0bec
tetragon: docs, align sections and filenames
jrfastab Oct 12, 2023
efb7ba8
tetragon: docs fix typo tetragonon
jrfastab Oct 12, 2023
9720b29
tetragon: create Install section and move baremetal install out
jrfastab Oct 12, 2023
b2d86d5
docs: various form fixes and docs settings
mtardy Oct 17, 2023
ac51000
docs: remove advanced-config and replace with install
mtardy Oct 17, 2023
e8d88d0
docs: move try tetragon on Linux to tutorials
mtardy Oct 17, 2023
dc149ec
docs: move tutorial's metrics page to concept's metrics
mtardy Oct 17, 2023
792c76b
docs: moved quickstart folder under examples
mtardy Oct 17, 2023
c8bc444
Change GRPC to gRPC
michi-covalent Oct 18, 2023
10 changes: 10 additions & 0 deletions docs/content/en/docs/benchmarks/_index.md
@@ -0,0 +1,10 @@
---
title: "Benchmarks"
icon: "resources"
weight: 4
description: >
This section presents benchmarks used to test Tetragon.
---

Tetragon benchmarks are run per release to ensure overhead created by Tetragon
is within expectations.
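
As a rough, hedged sketch of what such a run can look like, the in-tree Go
micro-benchmarks can be executed with the standard toolchain (this assumes a
source checkout and the usual `go test` benchmark flags; the per-release
numbers themselves come from dedicated CI jobs):

```shell-session
# Illustrative only: run the Go micro-benchmarks in the repository
go test -run '^$' -bench . -benchmem ./pkg/...
```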
10 changes: 10 additions & 0 deletions docs/content/en/docs/concepts/enforcement/_index.md
@@ -0,0 +1,10 @@
---
title: "Enforcement"
icon: "overview"
weight: 4
description: "Documentation for Tetragon enforcement system"
---

Tetragon allows enforcing events in the kernel inline with the operation itself.
This section describes the types of enforcement provided by Tetragon and the
concerns policy implementors must be aware of.
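
As a preview of what enforcement looks like in a `TracingPolicy`, the fragment
below sketches the two action types most often discussed in these docs: a
`Sigkill` that terminates the offending process, and an `Override` that
rewrites the return value of the hooked call. This is an illustrative fragment
only; the surrounding policy (hook point, arguments, filters) is omitted.

```yaml
# Illustrative selector fragment, not a complete TracingPolicy
selectors:
- matchActions:
  - action: Sigkill        # terminate the process inline with the operation
- matchActions:
  - action: Override       # or fail the hooked call with an error instead
    argError: -1
```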
9 changes: 9 additions & 0 deletions docs/content/en/docs/concepts/events/_index.md
@@ -0,0 +1,9 @@
---
title: "Events"
icon: "overview"
weight: 1
description: "Documentation for Tetragon event system"
---

Tetragon's events are exposed to the system through either the gRPC
endpoint or JSON logs.
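
For example, with a default Helm install the JSON stream can be followed from
the export container and piped through `jq` for readability (a sketch; the
`export-stdout` container name and the pod label assume the default chart
values):

```shell-session
# Follow the JSON event log from the export sidecar (default Helm values assumed)
kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | jq .
```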
8 changes: 8 additions & 0 deletions docs/content/en/docs/concepts/events/grpc-events.md
@@ -0,0 +1,8 @@
---
title: "gRPC Events"
weight: 3
icon: "reference"
description: "Tetragon gRPC events"
---

A gRPC endpoint is exposed by the agent and is configurable.
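
The `tetra` CLI is the usual consumer of this endpoint. A minimal sketch,
assuming the default server address of `localhost:54321`:

```shell-session
# Stream events from the agent's gRPC endpoint
# (adjust --server-address if the agent was configured differently)
tetra getevents -o compact --server-address localhost:54321
```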
@@ -1,8 +1,8 @@
---
title: "Explore security observability events"
title: "JSON events"
weight: 3
icon: "reference"
description: "Learn how to start exploring the Tetragon events"
description: "Tetragon JSON events"
---

After Tetragon and the [demo application is up and
@@ -1,17 +1,18 @@
---
title: "Tetragon Metrics"
weight: 1
title: "Metrics"
icon: "overview"
description: "Fetching and understanding Tetragon metrics"
weight: 2
description: "Documentation for Tetragon metrics"
---

Tetragon's metrics are exposed to the system through an HTTP endpoint. These
are used to expose event summaries and information about the state of the
Tetragon agent.

## Kubernetes

-When installed with Helm as described in
-[Deploying on Kubernetes]({{< ref
-"/docs/getting-started/deployment/kubernetes" >}}), Tetragon pods expose a
-metrics endpoint by default. The chart also creates a service named `tetragon`
-that exposes metrics on the specified port.
+Tetragon pods expose a metrics endpoint by default. The chart also creates a
+service named `tetragon` that exposes metrics on the specified port.

### Getting metrics port

@@ -28,8 +29,8 @@ tetragon ClusterIP 10.96.54.218 <none> 2112/TCP 3m
```

{{< note >}}
In the previous output, 2112 is the port on which the service is
listening. It is also the port on which the Tetragon metrics server listens
with the default Helm values.
{{< /note >}}

@@ -43,13 +44,11 @@ kubectl -n kube-system port-forward service/tetragon 2112:2112

## Package

-Tetragon, when installed via release packages as mentioned in
-[Package Deployment]({{< ref "/docs/getting-started/deployment/package" >}}).
By default, metrics are disabled. They can be enabled with the `--metrics-server`
flag by specifying an address.

Alternatively, the [examples/configuration/tetragon.yaml](https://github.com/cilium/tetragon/blob/main/examples/configuration/tetragon.yaml)
file contains example entries showing the defaults for the address of
metrics-server. Local overrides can be created by editing and copying this file
into `/etc/tetragon/tetragon.yaml`, or by editing and copying "drop-ins" from
the [examples/configuration/tetragon.conf.d](https://github.com/cilium/tetragon/tree/main/examples/configuration/tetragon.conf.d)
@@ -67,25 +66,25 @@ sudo tetragon --metrics-server localhost:2112
The output should be similar to this:

```
time="2023-09-21T13:17:08+05:30" level=info msg="Starting tetragon"
time="2023-09-21T13:17:08+05:30" level=info msg="Starting tetragon"
version=v0.11.0
time="2023-09-21T13:17:08+05:30" level=info msg="config settings"
time="2023-09-21T13:17:08+05:30" level=info msg="config settings"
config="mapeased
time="2023-09-22T23:16:24+05:30" level=info msg="Starting metrics server"
addr="localhost:2112"
time="2023-09-22T23:16:24+05:30" level=info msg="Starting metrics server"
addr="localhost:2112"
[...]
time="2023-09-21T13:17:08+05:30" level=info msg="Listening for events..."
```

Alternatively, a drop-in file named `metrics-server` can be created in
`/etc/tetragon/tetragon.conf.d/` with its content specifying the address,
for example `localhost:2112`, or any port of your choice as mentioned
above.
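
A quick sketch of that drop-in approach, assuming the systemd package layout
described above:

```shell-session
# Create the metrics-server drop-in and restart the service
echo "localhost:2112" | sudo tee /etc/tetragon/tetragon.conf.d/metrics-server
sudo systemctl restart tetragon
```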

## Fetch the Metrics

After the metrics are exposed, either by port forwarding in the case of a
Kubernetes installation or by setting the metrics address in the case of a
package installation, the metrics can be fetched using
`curl` on `localhost:2112/metrics`:

```shell-session
curl localhost:2112/metrics
```
4 changes: 2 additions & 2 deletions docs/content/en/docs/concepts/tracing-policy/example.md
@@ -115,8 +115,8 @@ echo eBPF! > /tmp/tetragon
Starting Tetragon with the above `TracingPolicy`, for example putting the
policy in the `example.yaml` file, compiling the project locally and starting
Tetragon with (you can do similar things with container image releases, see the
-docker run command in the [Try Tetragon on Linux guide]({{< ref
-"/docs/getting-started/try-tetragon-linux#observability-with-tracingpolicy" >}}):
+docker run command in the Try Tetragon on Linux guide):

```shell-session
sudo ./tetragon --bpf-lib bpf/objs --tracing-policy example.yaml
```
2 changes: 1 addition & 1 deletion docs/content/en/docs/contribution-guide/_index.md
@@ -1,7 +1,7 @@
---
title: "Contribution Guide"
linkTitle: "Contribution Guide"
-weight: 7
+weight: 6
icon: "contribution"
description: >
How to contribute to the project
@@ -111,10 +111,6 @@ To build Tetragon tarball:
make tarball
```

-The produced tarball will be inside directory `./build/`, then follow the
-[package deployment guide]({{< ref "/docs/getting-started/deployment/package" >}}) to
-install it as a systemd service.

### Running Tetragon in kind

The scripts in contrib/localdev will help you run Tetragon locally in a kind
4 changes: 1 addition & 3 deletions docs/content/en/docs/faq/_index.md
@@ -107,9 +107,7 @@ to [can I run Tetragon on Mac computers](#can-i-run-tetragon-on-mac-computers).

### Can I install and use Tetragon in standalone mode (outside of k8s)?

-Yes! Check the [Container deployment](/docs/getting-started/deployment/container/) or
-[Package deployment](/docs/getting-started/deployment/package/) guides
-for alternative install methods.
+Yes! TBD docs

Otherwise you can build Tetragon from source by running `make` to generate standalone
binaries.
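
A minimal sketch of that standalone flow, reusing the flags shown elsewhere in
these docs (paths assume a repository checkout):

```shell-session
# Build from source and run the standalone agent
make
sudo ./tetragon --bpf-lib bpf/objs
```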
6 changes: 0 additions & 6 deletions docs/content/en/docs/getting-started/deployment/_index.md

This file was deleted.

201 changes: 201 additions & 0 deletions docs/content/en/docs/getting-started/enforcement.md
@@ -0,0 +1,201 @@
---
title: "Policy Enforcement"
weight: 6
description: "Policy Enforcement"
---

This section adds network and file policy enforcement on top of the execution,
file tracing, and networking policies already deployed in the quick start. In
this use case we use a namespace filter to limit the scope of the enforcement
policy to the `darkstar` cluster where we installed the demo application in the
[Quick Kubernetes Install]({{< ref "docs/getting-started/install-k8s" >}}).

This highlights two important concepts of Tetragon. First, in-kernel filtering
provides a key performance improvement by limiting the events sent from kernel
to user space, and it also allows policies to be enforced in the kernel. By
issuing a `SIGKILL` to the process at this point, the application is stopped
from continuing to run. If the operation is triggered through a syscall, the
application will not return from the syscall and will be terminated.

Second, by including Kubernetes filters, such as namespaces and labels, we can
segment a policy to apply to targeted namespaces and pods. This is critical
for effective policy segmentation.

For implementation details see the [Enforcement]({{< ref "/docs/concepts/enforcement" >}})
concept section.

## Kubernetes Enforcement

This section is laid out as follows:
- A guide to promote the network observability policy, which observes all network
  traffic egressing the cluster, into an enforcement policy.
- A guide to promote the file access monitoring policy into one that blocks write
  and read operations on sensitive files.

### Block TCP Connect outside Cluster

First, we will deploy the [Network Monitoring]({{< ref "docs/getting-started/network" >}})
policy with enforcement turned on. In this case the policy is written to apply
only to the `empire` namespace, which limits the scope of the policy for the
getting started guide.

Ensure we have the proper Pod CIDRs

```shell-session
export PODCIDR=`kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'`
```

and Service CIDRs configured.

{{< tabpane lang=shell-session >}}
{{< tab GKE >}}
export SERVICECIDR=$(gcloud container clusters describe ${NAME} --zone ${ZONE} --project ${PROJECT} | awk '/servicesIpv4CidrBlock/ { print $2; }')
{{< /tab >}}

{{< tab Kind >}}
export SERVICECIDR=$(kubectl describe pod -n kube-system kube-apiserver-kind-control-plane | awk -F= '/--service-cluster-ip-range/ {print $2; }')
{{< /tab >}}
{{< /tabpane >}}

Then we can apply the egress cluster enforcement policy:

```shell-session
wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/network_egress_cluster_enforce.yaml
envsubst < network_egress_cluster_enforce.yaml | kubectl apply -n default -f -
```
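
The downloaded file is the authoritative policy. As an illustrative sketch, its
core is a kprobe on `tcp_connect` whose selector kills the process when the
destination address falls outside the pod and service CIDRs substituted above
(field names follow the TracingPolicy format; consult the file itself for the
exact hook points and selectors):

```yaml
# Sketch of the enforcement selector; see network_egress_cluster_enforce.yaml
# for the real policy
kprobes:
- call: "tcp_connect"
  syscall: false
  args:
  - index: 0
    type: "sock"
  selectors:
  - matchArgs:
    - index: 0
      operator: "NotDAddr"
      values:
      - 127.0.0.1
      - ${PODCIDR}
      - ${SERVICECIDR}
    matchActions:
    - action: Sigkill
```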

With the enforcement policy applied we can attach tetra to observe events again:

```shell-session
kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing
```

And once again execute a curl command in the xwing pod:

```shell-session
kubectl exec -ti xwing -- bash -c 'curl https://ebpf.io/applications/#tetragon'
```

The command returns an error code because the egress TCP connect is blocked, as shown here:
```
command terminated with exit code 137
```

Connects inside the cluster will work as expected:

```shell-session
kubectl exec -ti xwing -- bash -c 'curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing'
```

The tetra CLI will print the curl events and annotate the process that was issued
a `SIGKILL`. The successful internal connect is filtered and will not be shown.

```
🚀 process default/xwing /bin/bash -c "curl https://ebpf.io/applications/#tetragon"
🚀 process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
🔌 connect default/xwing /usr/bin/curl tcp 10.32.0.28:45200 -> 104.198.14.52:443
💥 exit default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon SIGKILL
🚀 process default/xwing /bin/bash -c "curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing"
🚀 process default/xwing /usr/bin/curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
```

This policy enforces at TCP connect; see [Enforce Sandbox]({{< ref "#enforce-common-security-policy" >}})
below to further restrict possible workarounds, such as writing through `/dev`
devices or raw sockets, that an application may attempt.

### Enforce File Access Monitoring

The following extends the example from [File Access Monitoring]({{< ref "docs/getting-started/file-events" >}})
with enforcement to ensure sensitive files are not read. The policy used is
[`file_monitoring_enforce.yaml`](https://github.com/cilium/tetragon/blob/main/examples/quickstart/file_monitoring_enforce.yaml),
which can be reviewed and extended as needed. The only difference between the
observation policy and the enforcement policy is the addition of an action block
that sends a `SIGKILL` to the application and returns an error on the operation.
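
For illustration, the shape of that change is roughly the fragment below, added
to the matching selectors of the observation policy. This is a sketch only;
whether the real policy uses `Sigkill`, `Override`, or both, and which paths it
covers, is defined in the policy file itself.

```yaml
# Illustrative fragment; see file_monitoring_enforce.yaml for the authoritative selectors
selectors:
- matchArgs:
  - index: 0
    operator: "Prefix"
    values:
    - "/etc/shadow"      # example of a monitored sensitive path
  matchActions:
  - action: Sigkill      # kill the offending process
  - action: Override     # and fail the operation with an error
    argError: -1
```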

To apply the policy:

{{< tabpane lang=shell-session >}}

{{< tab Kubernetes >}}
kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring.yaml
kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring_enforce.yaml
{{< /tab >}}
{{< tab Docker >}}
wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring_enforce.yaml
docker stop tetragon-container
docker run --name tetragon-container --rm --pull always \
--pid=host --cgroupns=host --privileged \
-v ${PWD}/file_monitoring_enforce.yaml:/etc/tetragon/tetragon.tp.d/file_monitoring_enforce.yaml \
-v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf \
quay.io/cilium/tetragon-ci:latest
{{< /tab >}}
{{< /tabpane >}}
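
Optionally, confirm the policy is loaded before generating events. A sketch for
the Kubernetes install (the CRD names and the `tetra tracingpolicy` subcommand
assume a recent Tetragon release):

```shell-session
# List TracingPolicy objects known to the cluster
kubectl get tracingpolicies.cilium.io,tracingpoliciesnamespaced.cilium.io -A
# Ask the agent which policies it has loaded
kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra tracingpolicy list
```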

With the policy applied we can attach tetra to observe events again:

{{< tabpane lang=shell-session >}}
{{< tab Kubernetes >}}
kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing
{{< /tab >}}
{{< tab Docker >}}
docker exec tetragon-container tetra getevents -o compact
{{< /tab >}}
{{< /tabpane >}}

Then read a sensitive file:

{{< tabpane lang=shell-session >}}
{{< tab Kubernetes >}}
kubectl exec -ti xwing -- bash -c 'cat /etc/shadow'
{{< /tab >}}
{{< tab Docker >}}
cat /etc/shadow
{{< /tab >}}
{{< /tabpane >}}

The command will fail with an error code because this is one of our sensitive files:
```shell-session
kubectl exec -ti xwing -- bash -c 'cat /etc/shadow'
```

The output should be similar to:

```
command terminated with exit code 137
```

This will generate a read event (Docker events will omit Kubernetes metadata):

```
🚀 process default/xwing /bin/bash -c "cat /etc/shadow"
🚀 process default/xwing /bin/cat /etc/shadow
📚 read default/xwing /bin/cat /etc/shadow
📚 read default/xwing /bin/cat /etc/shadow
📚 read default/xwing /bin/cat /etc/shadow
💥 exit default/xwing /bin/cat /etc/shadow SIGKILL
```

Writes and reads to files not covered by the enforcement policy will not be
impacted:

```
🚀 process default/xwing /bin/bash -c "echo foo >> bar; cat bar"
🚀 process default/xwing /bin/cat bar
💥 exit default/xwing /bin/cat bar 0
💥 exit default/xwing /bin/bash -c "echo foo >> bar; cat bar" 0
```

## What's next

This completes the quick start guides. At this point you should be able to
observe execution traces in a Kubernetes cluster and extend the base deployment
of Tetragon with policies that observe and enforce different aspects of a
Kubernetes system.

The rest of the docs provide further documentation about installation and
using policies. Some useful links:

- To explore details of writing and implementing policies, the [Concepts]({{< ref "/docs/concepts" >}}) section is a good jumping-off point.
- For installation into production environments, we recommend reviewing [Advanced Installations]({{< ref "docs/installation" >}}).
- For a more in-depth discussion of Tetragon overhead and how we measure system load, see [Benchmarks]({{< ref "docs/benchmarks" >}}).
- Finally, [Use Cases]({{< ref "docs/use-cases" >}}) and [Tutorials]({{< ref "docs/tutorials" >}}) cover different uses and deployment concerns related to Tetragon.