Empty values (<NA>) in JSON output fields #1029
Comments
I would like to work on this.

/milestone 0.21.0

Similar issue reported in #865.

The issue still occurs in Falco:

{
"output":"16:26:12.425643773: Notice Container with sensitive mount started (user=<NA> command=container:8cff1c1639fc k8s.ns=<NA> k8s.pod=<NA> container=8cff1c1639fc image=busybox:latest mounts=/var/lib/kubelet/pods/38e20a89-d20d-469e-bd32-c8b034eb6bf5/containers/busybox/84a3b097:/dev/termination-log:Z:true:rprivate,/etc:/host/etc::true:rprivate,/var/lib/kubelet/pods/38e20a89-d20d-469e-bd32-c8b034eb6bf5/volumes/kubernetes.io~secret/default-token-sp6tq:/var/run/secrets/kubernetes.io/serviceaccount:ro,Z:false:rprivate,/var/lib/kubelet/pods/38e20a89-d20d-469e-bd32-c8b034eb6bf5/etc-hosts:/etc/hosts:Z:true:rprivate) k8s.ns=<NA> k8s.pod=<NA> container=8cff1c1639fc k8s.ns=<NA> k8s.pod=<NA> container=8cff1c1639fc",
"priority":"Notice",
"rule":"Launch Sensitive Mount Container",
"time":"2020-03-09T16:26:12.425643773Z",
"output_fields":{
"container.id":"8cff1c1639fc",
"container.image.repository":"busybox",
"container.image.tag":"latest",
"container.mounts":"/var/lib/kubelet/pods/38e20a89-d20d-469e-bd32-c8b034eb6bf5/containers/busybox/84a3b097:/dev/termination-log:Z:true:rprivate,/etc:/host/etc::true:rprivate,/var/lib/kubelet/pods/38e20a89-d20d-469e-bd32-c8b034eb6bf5/volumes/kubernetes.io~secret/default-token-sp6tq:/var/run/secrets/kubernetes.io/serviceaccount:ro,Z:false:rprivate,/var/lib/kubelet/pods/38e20a89-d20d-469e-bd32-c8b034eb6bf5/etc-hosts:/etc/hosts:Z:true:rprivate",
"evt.time":1583771172425643773,
"k8s.ns.name":null,
"k8s.pod.name":null,
"proc.cmdline":"container:8cff1c1639fc",
"user.name":null
}
}
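As a side note, such holes can be spotted mechanically. Here is a minimal sketch (my own, not part of Falco) that flags output fields which are null or "<NA>" in an alert shaped like the one above:

```python
import json

def missing_fields(alert_json: str):
    """Return the names of output fields that are null or '<NA>' in a Falco alert."""
    alert = json.loads(alert_json)
    return sorted(
        name
        for name, value in alert.get("output_fields", {}).items()
        if value is None or value == "<NA>"
    )

# Trimmed-down version of the alert above.
alert = '''{
  "rule": "Launch Sensitive Mount Container",
  "output_fields": {
    "container.id": "8cff1c1639fc",
    "k8s.ns.name": null,
    "k8s.pod.name": null,
    "user.name": null
  }
}'''
print(missing_fields(alert))  # ['k8s.ns.name', 'k8s.pod.name', 'user.name']
```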
I took a closer look and noticed that the K8s audit payload in the latest K8s releases differs from the payloads used in the tests for Falco rules. For example, create_nginx_pod_privileged.json:

{
"kind":"Event",
"apiVersion":"audit.k8s.io/v1beta1",
"metadata":{
"creationTimestamp":"2018-10-25T14:09:12Z"
},
"level":"RequestResponse",
"timestamp":"2018-10-25T14:09:12Z",
"auditID":"a362d22b-db3c-4590-9505-23782f12925f",
"stage":"ResponseComplete",
"requestURI":"/api/v1/namespaces/default/pods",
"verb":"create",
"user":{
"username":"system:serviceaccount:kube-system:replicaset-controller",
"uid":"8d5e1349-d30f-11e8-96d9-080027728ac4",
"groups":[
"system:serviceaccounts",
"system:serviceaccounts:kube-system",
"system:authenticated"
]
},
"sourceIPs":[
"::1"
],
"objectRef":{
"resource":"pods",
"namespace":"default",
"apiVersion":"v1"
},
"responseStatus":{
"metadata":{
},
"code":201
},
"requestObject":{
"kind":"Pod",
"apiVersion":"v1",
"metadata":{
"generateName":"nginx-deployment-5cdcc99dbf-",
"creationTimestamp":null,
"labels":{
"app":"nginx",
"pod-template-hash":"1787755869"
},
"ownerReferences":[
{
"apiVersion":"apps/v1",
"kind":"ReplicaSet",
"name":"nginx-deployment-5cdcc99dbf",
"uid":"8c800470-d85f-11e8-88b6-080027728ac4",
"controller":true,
"blockOwnerDeletion":true
}
]
},
"spec":{
"containers":[
{
"name":"nginx",
"image":"nginx",
"resources":{
},
"terminationMessagePath":"/dev/termination-log",
"terminationMessagePolicy":"File",
"imagePullPolicy":"Always",
"securityContext":{
"privileged":true
}
}
],
"restartPolicy":"Always",
"terminationGracePeriodSeconds":30,
"dnsPolicy":"ClusterFirst",
"securityContext":{
},
"schedulerName":"default-scheduler"
},
"status":{
}
},
"responseObject":{
"kind":"Pod",
"apiVersion":"v1",
"metadata":{
"name":"nginx-deployment-5cdcc99dbf-rgw6z",
"generateName":"nginx-deployment-5cdcc99dbf-",
"namespace":"default",
"selfLink":"/api/v1/namespaces/default/pods/nginx-deployment-5cdcc99dbf-rgw6z",
"uid":"8c845395-d85f-11e8-88b6-080027728ac4",
"resourceVersion":"237252",
"creationTimestamp":"2018-10-25T14:09:12Z",
"labels":{
"app":"nginx",
"pod-template-hash":"1787755869"
},
"ownerReferences":[
{
"apiVersion":"apps/v1",
"kind":"ReplicaSet",
"name":"nginx-deployment-5cdcc99dbf",
"uid":"8c800470-d85f-11e8-88b6-080027728ac4",
"controller":true,
"blockOwnerDeletion":true
}
]
},
"spec":{
"volumes":[
{
"name":"default-token-g2sp7",
"secret":{
"secretName":"default-token-g2sp7",
"defaultMode":420
}
}
],
"containers":[
{
"name":"nginx",
"image":"nginx",
"resources":{
},
"volumeMounts":[
{
"name":"default-token-g2sp7",
"readOnly":true,
"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath":"/dev/termination-log",
"terminationMessagePolicy":"File",
"imagePullPolicy":"Always",
"securityContext":{
"privileged":true
}
}
],
"restartPolicy":"Always",
"terminationGracePeriodSeconds":30,
"dnsPolicy":"ClusterFirst",
"serviceAccountName":"default",
"serviceAccount":"default",
"securityContext":{
},
"schedulerName":"default-scheduler",
"tolerations":[
{
"key":"node.kubernetes.io/not-ready",
"operator":"Exists",
"effect":"NoExecute",
"tolerationSeconds":300
},
{
"key":"node.kubernetes.io/unreachable",
"operator":"Exists",
"effect":"NoExecute",
"tolerationSeconds":300
}
]
},
"status":{
"phase":"Pending",
"qosClass":"BestEffort"
}
},
"requestReceivedTimestamp":"2018-10-25T14:09:12.572676Z",
"stageTimestamp":"2018-10-25T14:09:12.581541Z",
"annotations":{
"authorization.k8s.io/decision":"allow",
"authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:controller:replicaset-controller\" of ClusterRole \"system:controller:replicaset-controller\" to ServiceAccount \"replicaset-controller/kube-system\""
}
}

In Kubernetes 1.16, it's:

{
"kind":"EventList",
"apiVersion":"audit.k8s.io/v1",
"metadata":{
},
"items":[
{
"level":"RequestResponse",
"auditID":"2c9e7dac-8da9-4301-b01f-0acdf074d55d",
"stage":"ResponseComplete",
"requestURI":"/api/v1/namespaces/falco-event-generator/pods",
"verb":"create",
"user":{
"username":"system:serviceaccount:kube-system:replicaset-controller",
"uid":"90f3b009-9fc3-454c-a819-4656a4699455",
"groups":[
"system:serviceaccounts",
"system:serviceaccounts:kube-system",
"system:authenticated"
]
},
"sourceIPs":[
"10.10.10.31"
],
"userAgent":"kube-controller-manager/v1.15.0 (linux/amd64) kubernetes/e8462b5/system:serviceaccount:kube-system:replicaset-controller",
"objectRef":{
"resource":"pods",
"namespace":"falco-event-generator",
"apiVersion":"v1"
},
"responseStatus":{
"metadata":{
},
"code":201
},
"requestObject":{
"kind":"Pod",
"apiVersion":"v1",
"metadata":{
"generateName":"disallowed-pod-deployment-776bcd5767-",
"creationTimestamp":null,
"labels":{
"app.kubernetes.io/name":"disallowed-pod-busybox",
"app.kubernetes.io/part-of":"falco-event-generator",
"pod-template-hash":"776bcd5767"
},
"ownerReferences":[
{
"apiVersion":"apps/v1",
"kind":"ReplicaSet",
"name":"disallowed-pod-deployment-776bcd5767",
"uid":"1b3a9a9b-97cc-400d-92cf-a41e272ee357",
"controller":true,
"blockOwnerDeletion":true
}
]
},
"spec":{
"containers":[
{
"name":"busybox",
"image":"busybox",
"command":[
"/bin/sh",
"-c",
"while true; do echo sleeping; sleep 3600; done"
],
"resources":{
},
"terminationMessagePath":"/dev/termination-log",
"terminationMessagePolicy":"File",
"imagePullPolicy":"Always"
}
],
"restartPolicy":"Always",
"terminationGracePeriodSeconds":30,
"dnsPolicy":"ClusterFirst",
"securityContext":{
},
"schedulerName":"default-scheduler",
"enableServiceLinks":true
},
"status":{
}
},
"responseObject":{
"kind":"Pod",
"apiVersion":"v1",
"metadata":{
"name":"disallowed-pod-deployment-776bcd5767-brnsw",
"generateName":"disallowed-pod-deployment-776bcd5767-",
"namespace":"falco-event-generator",
"selfLink":"/api/v1/namespaces/falco-event-generator/pods/disallowed-pod-deployment-776bcd5767-brnsw",
"uid":"210d241c-88b5-4c47-8bd8-e1a05b1e9965",
"resourceVersion":"1735006",
"creationTimestamp":"2020-03-09T19:37:09Z",
"labels":{
"app.kubernetes.io/name":"disallowed-pod-busybox",
"app.kubernetes.io/part-of":"falco-event-generator",
"pod-template-hash":"776bcd5767"
},
"ownerReferences":[
{
"apiVersion":"apps/v1",
"kind":"ReplicaSet",
"name":"disallowed-pod-deployment-776bcd5767",
"uid":"1b3a9a9b-97cc-400d-92cf-a41e272ee357",
"controller":true,
"blockOwnerDeletion":true
}
]
},
"spec":{
"volumes":[
{
"name":"default-token-24w9x",
"secret":{
"secretName":"default-token-24w9x",
"defaultMode":420
}
}
],
"containers":[
{
"name":"busybox",
"image":"busybox",
"command":[
"/bin/sh",
"-c",
"while true; do echo sleeping; sleep 3600; done"
],
"resources":{
},
"volumeMounts":[
{
"name":"default-token-24w9x",
"readOnly":true,
"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath":"/dev/termination-log",
"terminationMessagePolicy":"File",
"imagePullPolicy":"Always"
}
],
"restartPolicy":"Always",
"terminationGracePeriodSeconds":30,
"dnsPolicy":"ClusterFirst",
"serviceAccountName":"default",
"serviceAccount":"default",
"securityContext":{
},
"schedulerName":"default-scheduler",
"tolerations":[
{
"key":"node.kubernetes.io/not-ready",
"operator":"Exists",
"effect":"NoExecute",
"tolerationSeconds":300
},
{
"key":"node.kubernetes.io/unreachable",
"operator":"Exists",
"effect":"NoExecute",
"tolerationSeconds":300
}
],
"priority":0,
"enableServiceLinks":true
},
"status":{
"phase":"Pending",
"qosClass":"BestEffort"
}
},
"requestReceivedTimestamp":"2020-03-09T19:37:09.551309Z",
"stageTimestamp":"2020-03-09T19:37:09.690940Z",
"annotations":{
"authorization.k8s.io/decision":"allow",
"authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:controller:replicaset-controller\" of ClusterRole \"system:controller:replicaset-controller\" to ServiceAccount \"replicaset-controller/kube-system\""
}
}
]
}

Note falco/userspace/engine/json_evt.cpp, line 1323 (commit e46641d).

I am not sure, but replacing these macros to align with the current audit structure could potentially fix the issue.
@leodido This functionality is important to me. I am ready to contribute but need some guidance on where to start. As described above, the root cause of the problem is that newer versions of Kubernetes nest the data used by Falco aliases under a different JSON structure (the items array of an EventList). Perhaps updating the JSON paths in the alias lookup to account for the new structure could fix the issue.

What do you think? Is such a solution acceptable to give it a try?
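The path-fallback idea can be prototyped outside Falco. A hypothetical sketch (mine; the pointer strings are illustrative, not Falco's actual alias table) that tries the old-style path first and falls back to an EventList-style path:

```python
def resolve(event: dict, pointers):
    """Return the first value found by trying each JSON pointer in order."""
    for pointer in pointers:
        node = event
        found = True
        for token in pointer.strip("/").split("/"):
            # Treat numeric tokens as list indices, everything else as dict keys.
            key = int(token) if token.isdigit() and isinstance(node, list) else token
            try:
                node = node[key]
            except (KeyError, IndexError, TypeError):
                found = False
                break
        if found:
            return node
    return "<NA>"

# Old single-Event payload vs. new batched EventList payload (shapes only).
old = {"responseObject": {"metadata": {"name": "nginx-pod"}}}
new = {"kind": "EventList",
       "items": [{"responseObject": {"metadata": {"name": "busybox-pod"}}}]}

paths = ["/responseObject/metadata/name", "/items/0/responseObject/metadata/name"]
print(resolve(old, paths))  # nginx-pod
print(resolve(new, paths))  # busybox-pod
```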
Yes, it's definitely worth a try!
Newer releases of Kubernetes emit audit objects of kind EventList instead of Event: events are sent in batches instead of one by one.

// EventList is a list of audit Events.
type EventList struct {
	metav1.TypeMeta `json:",inline"`
	// +optional
	metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
	Items []Event `json:"items" protobuf:"bytes,2,rep,name=items"`
}

I'm curious whether there is an option in the API server to control that behaviour. Webhook batching looks promising. Alternatively, an intermediate component could consume event batches, split them, and redistribute individual events to the Falco backend. Here is a sample configuration for Logstash found in the Kubernetes documentation:
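Setting Logstash aside, such a splitter is simple to prototype. A hypothetical sketch (mine, not an existing Falco component) that accepts either a single Event or an EventList payload and always emits individual events:

```python
def unwrap_audit_events(payload: dict):
    """Yield individual audit Events from either an Event or an EventList payload."""
    if payload.get("kind") == "EventList":
        # Batched form: audit.k8s.io EventList carries events in "items".
        yield from payload.get("items", [])
    else:
        # Single-event form: forward the payload as-is.
        yield payload

single = {"kind": "Event", "auditID": "a362d22b"}
batch = {"kind": "EventList",
         "items": [{"auditID": "2c9e7dac"}, {"auditID": "90f3b009"}]}

print([e["auditID"] for e in unwrap_audit_events(single)])  # ['a362d22b']
print([e["auditID"] for e in unwrap_audit_events(batch)])   # ['2c9e7dac', '90f3b009']
```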
Batching options didn't work. However, I discovered that Falco actually handles this case — see falco/userspace/engine/falco_engine.cpp, line 397 (commit e46641d).
This is strange. Now I am unable to reproduce the issue. Output fields are nicely filled up:

{
"output":"17:27:42.656445952: Warning Pod started using host network (user=system:serviceaccount:kube-system:replicaset-controller pod=hostnetwork-deployment-748bd48fcb-wkn65 ns=default images=busybox)",
"priority":"Warning",
"rule":"Create HostNetwork Pod",
"time":"2020-04-11T17:27:42.656445952Z",
"output_fields":{
"jevt.time":"17:27:42.656445952",
"ka.req.pod.containers.image":"busybox",
"ka.resp.name":"hostnetwork-deployment-748bd48fcb-wkn65",
"ka.target.namespace":"default",
"ka.user.name":"system:serviceaccount:kube-system:replicaset-controller"
}
}

I re-checked on both Falco 0.18.0 and 0.20.0 without problems. Perhaps I accidentally changed something in my environment which caused Falco trouble.
Which Falco version are you using now? I'm asking because, recently, Lorenzo (@fntlnz) and I made some fixes around NA values. So this could actually be good news :)

L.
I observed the issue when using those same versions. Either the issue is non-deterministic, or it must have been caused by some temporary misconfiguration in my environment.
The pod field seems to be filled correctly. Verified on several rules. Closing.

{
"output":"07:03:05.063269120: Warning Pod started using host network (user=system:serviceaccount:kube-system:replicaset-controller pod=hostnetwork-deployment-748bd48fcb-hrj2q ns=falco-event-generator images=busybox)",
"priority":"Warning",
"rule":"Create HostNetwork Pod",
"time":"2020-05-07T07:03:05.063269120Z",
"output_fields":{
"jevt.time":"07:03:05.063269120",
"ka.req.pod.containers.image":"busybox",
"ka.resp.name":"hostnetwork-deployment-748bd48fcb-hrj2q",
"ka.target.namespace":"falco-event-generator",
"ka.user.name":"system:serviceaccount:kube-system:replicaset-controller"
}
}

{
"output":"07:04:10.489446912: Warning Pod started with sensitive mount (user=system:serviceaccount:kube-system:replicaset-controller pod=sensitive-mount-deployment-76477d5d8-82k4d ns=falco-event-generator images=busybox volumes=[{\"hostPath\":{\"path\":\"/etc\",\"type\":\"\"},\"name\":\"etc\"}])",
"priority":"Warning",
"rule":"Create Sensitive Mount Pod",
"time":"2020-05-07T07:04:10.489446912Z",
"output_fields":{
"jevt.time":"07:04:10.489446912",
"jevt.value":"[{\"hostPath\":{\"path\":\"/etc\",\"type\":\"\"},\"name\":\"etc\"}]",
"ka.req.pod.containers.image":"busybox",
"ka.resp.name":"sensitive-mount-deployment-76477d5d8-82k4d",
"ka.target.namespace":"falco-event-generator",
"ka.user.name":"system:serviceaccount:kube-system:replicaset-controller"
}
}

{
"output":"07:05:26.325649920: Warning Pod started with privileged container (user=system:serviceaccount:kube-system:replicaset-controller pod=privileged-deployment-7c5d789cf4-k4w5f ns=falco-event-generator images=busybox)",
"priority":"Warning",
"rule":"Create Privileged Pod",
"time":"2020-05-07T07:05:26.325649920Z",
"output_fields":{
"jevt.time":"07:05:26.325649920",
"ka.req.pod.containers.image":"busybox",
"ka.resp.name":"privileged-deployment-7c5d789cf4-k4w5f",
"ka.target.namespace":"falco-event-generator",
"ka.user.name":"system:serviceaccount:kube-system:replicaset-controller"
}
}
It was likely a configuration issue in how Falco talks to the CRI socket. Glad it's working for you now, @bzurkowski.
I installed Falco using the Helm chart. I tried to get logs in JSON format after setting the jsonOutput value to true in values.yaml, but on checking my logs they are still not in JSON format. Can anyone help me fix this issue?
Describe the bug

When using Falco with dynamic K8s auditing enabled, the JSON output in Falco alerts has missing values for some attributes, mainly the pod attribute. These values are marked as <NA>. For example, given the default auditing rule:

My backend receives an alert with the following content:

Note that "ka.resp.name":"<NA>" should contain the pod name.

How to reproduce it

Expected behavior

All attributes of K8s objects used in rule expressions should be provided in the output_fields attribute of the alert payload with correct values.

Environment