How to Install Falco using Containers and/or Orchestration
How to Install Falco using Containers
Container install (general)
If you have full control of your host operating system, then installing Falco using the normal installation method is the recommended best practice, as it allows full visibility into all containers on the host OS. No changes to the standard automatic/manual installation procedures are required.
However, Falco can also run inside a Docker container. To guarantee a smooth deployment, the kernel headers must be installed on the host operating system before running Falco.
This can usually be done on Debian-like distributions with:
apt-get -y install linux-headers-$(uname -r)
Or, on RHEL-like distributions:
yum -y install kernel-devel-$(uname -r)
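If you build images or scripts that must work on both families, the choice between the two commands can be made automatically from /etc/os-release. This is a sketch of our own (the helper function header_install_cmd is an illustration, not part of Falco), which prints the appropriate install command for the current distribution:

```shell
# Sketch: print the kernel-header install command for the current distro.
# Assumes /etc/os-release is present (true on modern Debian- and RHEL-like
# systems); falls back to a warning otherwise.
header_install_cmd() {
  [ -r /etc/os-release ] && . /etc/os-release
  case "${ID_LIKE:-} ${ID:-}" in
    *debian*|*ubuntu*)
      echo "apt-get -y install linux-headers-$(uname -r)" ;;
    *rhel*|*fedora*|*centos*)
      echo "yum -y install kernel-devel-$(uname -r)" ;;
    *)
      echo "echo 'unknown distro; install kernel headers manually' >&2" ;;
  esac
}

header_install_cmd
```

Run the printed command (or eval it) with root privileges before starting the Falco container.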
Falco can then be run with:
docker pull falcosecurity/falco
docker run -i -t --name falco --privileged \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro \
  falcosecurity/falco
To see it in action, also run the [event generator](Generating Sample Events) to perform actions that trigger Falco's ruleset:
docker pull sysdig/falco-event-generator
docker run -i -t --name falco-event-generator sysdig/falco-event-generator
Using custom rules with the docker container
The Falco image has a built-in set of rules located at /etc/falco/falco_rules.yaml that is suitable for most purposes. However, you may want to provide your own rules file and still use the Falco image. In that case, add a volume mapping that maps the external rules file to /etc/falco/falco_rules.yaml within the container, by adding -v path-to-falco-rules.yaml:/etc/falco/falco_rules.yaml to your docker run command.
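Putting the pieces together, a full invocation with a custom rules file might look like the following (the path /path/to/my_falco_rules.yaml is a placeholder; substitute the location of your own rules file):

```shell
# Hypothetical path to your own rules file; substitute your actual location.
RULES_FILE=/path/to/my_falco_rules.yaml

# Same run command as above, with one extra -v mapping that overrides the
# built-in rules file inside the container.
docker run -i -t --name falco --privileged \
  -v "$RULES_FILE":/etc/falco/falco_rules.yaml \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro \
  falcosecurity/falco
```

Note that this requires a running Docker daemon and root (or docker-group) privileges.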
Container install (CoreOS)
The recommended way to run Falco on CoreOS is inside its own Docker container, using the install commands in the section above. This method allows full visibility into all containers on the host OS. It is updated automatically, includes some nice features such as automatic setup and bash completion, and is a generic approach that can be used on distributions other than CoreOS as well.
However, some users may prefer to run Falco in the CoreOS toolbox. While not the recommended method, this can be achieved by installing Falco inside the toolbox using the normal installation method and then manually running the falco-probe-loader script:
toolbox --bind=/dev --bind=/var/run/docker.sock
curl -s https://s3.amazonaws.com/download.draios.com/stable/install-falco | bash
falco-probe-loader
Container install (K8s)
If you'd like to run Falco as a Kubernetes DaemonSet, we have instructions and some sample yaml files here.
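As a rough sketch of the deployment flow (the file name falco-daemonset.yaml and the app=falco label are placeholders; use the actual sample yaml files referenced above), applying and verifying a DaemonSet typically looks like:

```shell
# Hypothetical file name; use the sample yaml files from the Falco docs.
kubectl apply -f falco-daemonset.yaml

# Verify the DaemonSet was created and that one Falco pod runs per node.
kubectl get daemonset falco
kubectl get pods -l app=falco   # label is an assumption; match your yaml
```

A DaemonSet ensures one Falco pod is scheduled on every node, which is what gives Falco host-wide syscall visibility across the cluster.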
Additional Notes on Running Falco in Containers/K8s
Falco depends on a kernel module that intercepts all system calls; that kernel module is usually built on the fly when Falco is installed or run as a container. The VM used by minikube doesn't include kernel headers, so Falco can't build the kernel module on the fly. Precompiled kernel modules can be downloaded as a fallback, but the kernel used by minikube isn't a standard one, so we can't easily provide precompiled modules for it.
Growing Memory Usage for Falco Container When Using File Output
If you notice that the memory usage of a container running Falco increases when using file output methods, even though the memory usage of the falco process itself does not, it may be because the buffer page cache is counted against the container's memory usage. See falco issue https://github.com/draios/falco/issues/338 for a longer discussion; the underlying K8s bug/feature is discussed in https://github.com/kubernetes/kubernetes/issues/43916. You can safely cap the memory size of the container at a value like 160MB, at which point growth of the buffer page cache will be limited.
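With plain Docker, the cap can be applied with the --memory flag. This is a sketch based on the run command from earlier in this page, with the 160m value taken from the discussion above (adjust to your workload):

```shell
# Cap the container's memory so buffer page cache growth is bounded.
# 160m matches the value discussed in falco issue #338; tune as needed.
docker run -i -t --name falco --privileged --memory=160m \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro \
  falcosecurity/falco
```

In a Kubernetes pod spec, the equivalent is a resources.limits.memory setting on the Falco container.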