Unable to get k8s information, showing <NA> on each field #1988

Closed
nicotruc opened this issue Apr 29, 2022 · 0 comments
I'm encountering trouble when running Falco on Red Hat CoreOS: every Kubernetes metadata field shows <NA>.

How to reproduce it

To make the chart usable, as explained in #1505, I installed the kernel headers on the cluster using a MachineConfig, so the eBPF probe builds successfully in the pod.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: worker-extensions
spec:
  extensions:
    - kernel-devel
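
For completeness, applying this manifest is a plain oc apply; the Machine Config Operator then rolls the change out by rebooting the worker nodes (the filename here is illustrative):

oc apply -f worker-extensions.yaml
# watch the rollout; workers reboot to pick up the kernel-devel extension
oc get machineconfigpool worker -w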

Next I installed the chart using Helm with a few small value overrides.
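The install itself was a standard Helm install of the falcosecurity/falco chart. A minimal sketch, assuming the pre-2.0 chart's ebpf.enabled flag (the release name, namespace, and values file name are illustrative):

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
# ebpf.enabled=true selects the eBPF probe instead of the kernel module
helm install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --set ebpf.enabled=true \
  -f my-values.yaml

Here is an example of the output I got: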

* Setting up /usr/src links from host
* Running falco-driver-loader for: falco version=0.31.1, driver version=b7eb0dd65226a8dc254d228c8d950d07bf3521d2
* Running falco-driver-loader with: driver=bpf, compile=yes, download=yes
* Mounting debugfs
* Trying to download a prebuilt eBPF probe from https://download.falco.org/driver/b7eb0dd65226a8dc254d228c8d950d07bf3521d2/falco_rhcos_4.18.0-305.40.2.el8_4.x86_64_1.o
curl: (5) Could not resolve proxy: proxy-mutu.default.svc
Unable to find a prebuilt falco eBPF probe
* Trying to compile the eBPF probe (falco_rhcos_4.18.0-305.40.2.el8_4.x86_64_1.o)
In file included from /usr/src/falco-b7eb0dd65226a8dc254d228c8d950d07bf3521d2/bpf/probe.c:9:
In file included from /usr/src/falco-b7eb0dd65226a8dc254d228c8d950d07bf3521d2/bpf/quirks.h:36:
In file included from ./include/linux/types.h:6:
In file included from ./include/uapi/linux/types.h:14:
In file included from ./include/uapi/linux/posix_types.h:5:
In file included from ./include/linux/stddef.h:5:
In file included from ./include/uapi/linux/stddef.h:2:
In file included from ./include/linux/compiler_types.h:78:
./include/linux/compiler-clang.h:29:9: warning: '__no_sanitize_address' macro redefined [-Wmacro-redefined]
#define __no_sanitize_address
        ^
./include/linux/compiler-gcc.h:339:9: note: previous definition is here
#define __no_sanitize_address __attribute__((no_sanitize_address))
        ^
1 warning generated.
* eBPF probe located in /root/.falco/falco_rhcos_4.18.0-305.40.2.el8_4.x86_64_1.o
* Success: eBPF probe symlinked to /root/.falco/falco-bpf.o
Fri Apr 29 09:09:36 2022: Falco version 0.31.1 (driver version b7eb0dd65226a8dc254d228c8d950d07bf3521d2)
Fri Apr 29 09:09:36 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Fri Apr 29 09:09:36 2022: Loading rules from file /etc/falco/falco_rules.yaml:
Fri Apr 29 09:09:36 2022: Loading rules from file /etc/falco/falco_rules.local.yaml:
Fri Apr 29 09:09:37 2022: Starting internal webserver, listening on port 8765
09:09:37.668972506: Notice Unexpected connection to K8s API Server from container (command=event_loop /usr/local/bin/fluentd --suppress-config-dump --no-supervisor -r /usr/local/share/gems/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/elasticsearch_simple_sniffer.rb k8s.ns=<NA> k8s.pod=<NA> container=6fe8e13da038 image=<NA>:<NA> connection=10.133.2.4:42522->10.136.0.1:443) k8s.ns=<NA> k8s.pod=<NA> container=6fe8e13da038
09:09:37.684007993: Notice Unexpected connection to K8s API Server from container (command=event_loop /usr/local/bin/fluentd --suppress-config-dump --no-supervisor -r /usr/local/share/gems/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/elasticsearch_simple_sniffer.rb k8s.ns=<NA> k8s.pod=<NA> container=6fe8e13da038 image=<NA>:<NA> connection=10.133.2.4:42524->10.136.0.1:443) k8s.ns=<NA> k8s.pod=<NA> container=6fe8e13da038
09:09:37.703449832: Notice Unexpected connection to K8s API Server from container (command=event_loop /usr/local/bin/fluentd --suppress-config-dump --no-supervisor -r /usr/local/share/gems/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/elasticsearch_simple_sniffer.rb k8s.ns=<NA> k8s.pod=<NA> container=6fe8e13da038 image=<NA>:<NA> connection=10.133.2.4:42526->10.136.0.1:443) k8s.ns=<NA> k8s.pod=<NA> container=6fe8e13da038

Whether I installed with the kernel module (DKMS) or with the eBPF probe, I got the same behavior.

I also found some similar issues, like #1726 and #1421, but I was unable to work out how to fix this.

At the moment, I use the default ServiceAccount of the cluster (a sketch of the kind of RBAC the metadata enrichment needs is below).
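
For context, the Kubernetes metadata enrichment needs read access to pods and related objects via the API server. A minimal RBAC sketch, assuming a dedicated falco ServiceAccount in a falco namespace (all names here are illustrative; the chart normally ships its own equivalent):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: falco-read
rules:
  # core resources Falco reads to resolve k8s.ns / k8s.pod fields
  - apiGroups: [""]
    resources: ["pods", "replicationcontrollers", "services", "events", "namespaces", "nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["daemonsets", "deployments", "replicasets", "statefulsets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: falco-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: falco-read
subjects:
  - kind: ServiceAccount
    name: falco
    namespace: falco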

Expected behaviour

The k8s.ns, k8s.pod, and image fields should be populated with the actual Kubernetes metadata instead of <NA>.

Environment

  • Falco version:
Falco version: 0.31.1
Driver version: b7eb0dd65226a8dc254d228c8d950d07bf3521d2
  • System info:
{
  "machine": "x86_64",
  "nodename": "tbop4no003s.sys.meshcore.net",
  "release": "4.18.0-305.40.2.el8_4.x86_64",
  "sysname": "Linux",
  "version": "#1 SMP Tue Mar 8 14:29:54 EST 2022"
}
  • Cloud provider or hardware configuration: vSphere
  • OS:
NAME="Red Hat Enterprise Linux CoreOS"
VERSION="48.84.202203140855-0"
ID="rhcos"
ID_LIKE="rhel fedora"
VERSION_ID="4.8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux CoreOS 48.84.202203140855-0 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::coreos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://docs.openshift.com/container-platform/4.8/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="OpenShift Container Platform"
REDHAT_BUGZILLA_PRODUCT_VERSION="4.8"
REDHAT_SUPPORT_PRODUCT="OpenShift Container Platform"
REDHAT_SUPPORT_PRODUCT_VERSION="4.8"
OPENSHIFT_VERSION="4.8"
RHEL_VERSION="8.4"
OSTREE_VERSION='48.84.202203140855-0'
  • Kernel:

Linux tbop4no003s.sys.meshcore.net 4.18.0-305.40.2.el8_4.x86_64 #1 SMP Tue Mar 8 14:29:54 EST 2022 x86_64 GNU/Linux

  • Installation method:

Kubernetes Helm chart, with the eBPF probe. I made use of a proxy, which works, and I specified a node selector so that only one node runs Falco (a sketch of the relevant values is below).
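
Roughly, the value overrides look like this; a sketch assuming the 1.x chart layout (the proxy host is the one from the log above, but the port is illustrative, and exact key names depend on the chart version):

# values.yaml (fragment)
ebpf:
  enabled: true
proxy:
  # proxy host taken from my setup; port 3128 is illustrative
  httpProxy: "http://proxy-mutu.default.svc:3128"
  httpsProxy: "http://proxy-mutu.default.svc:3128"
  noProxy: "localhost,127.0.0.1"
nodeSelector:
  # pin Falco to a single node
  kubernetes.io/hostname: tbop4no003s.sys.meshcore.net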
