CI: K8sDatapathConfig Host firewall: Managed to reach #16159
This is all a bit strange... The Hubble capture shows policy verdicts for that connection:
{
"time": "2021-05-14T17:39:18.408304832Z",
"verdict": "FORWARDED",
"ethernet": {
"source": "5e:bc:77:59:a5:10",
"destination": "5a:bd:cf:ff:c5:16"
},
"IP": {
"source": "10.0.0.168",
"destination": "192.168.36.12",
"ipVersion": "IPv4"
},
"l4": {
"UDP": {
"source_port": 43811,
"destination_port": 69
}
},
"source": {
"identity": 2,
"labels": [
"reserved:world"
]
},
"destination": {
"identity": 1,
"labels": [
"reserved:host"
]
},
"Type": "L3_L4",
"node_name": "k8s2",
"event_type": {
"type": 5
},
"traffic_direction": "INGRESS",
"policy_match_type": 1,
"is_reply": false,
"Summary": "UDP"
}
{
"time": "2021-05-14T17:39:18.408305735Z",
"verdict": "FORWARDED",
"ethernet": {
"source": "5e:bc:77:59:a5:10",
"destination": "5a:bd:cf:ff:c5:16"
},
"IP": {
"source": "10.0.0.168",
"destination": "192.168.36.12",
"ipVersion": "IPv4"
},
"l4": {
"UDP": {
"source_port": 43811,
"destination_port": 69
}
},
"source": {
"identity": 2,
"labels": [
"reserved:world"
]
},
"destination": {
"identity": 1,
"labels": [
"reserved:host"
]
},
"Type": "L3_L4",
"node_name": "k8s2",
"event_type": {
"type": 5
},
"traffic_direction": "INGRESS",
"policy_match_type": 1,
"is_reply": false,
"Summary": "UDP"
}

Two things are weird here: the source identity resolves to 2 (reserved:world) even though the packets come from a pod, and the exact same flow is reported twice.
The first one is very likely the reason for the flake. Unfortunately, we don't have the Hubble flows on the source node. If we had them, we could check the resolved source identity for those packets and maybe understand where it gets lost.
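For reference, here is a minimal sketch of how the resolved source identity could be pulled programmatically from Hubble Relay once flows from the source node are available. The Relay address, flow count, and filter values (taken from the JSON above) are assumptions for illustration, not part of this report:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	flowpb "github.com/cilium/cilium/api/v1/flow"
	observerpb "github.com/cilium/cilium/api/v1/observer"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Hubble Relay address is an assumption; adjust for the cluster under test.
	conn, err := grpc.Dial("localhost:4245",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := observerpb.NewObserverClient(conn)

	// Last 20 flows matching the failing connection: source pod IP and
	// destination port 69 (TFTP), as seen in the verdicts above.
	stream, err := client.GetFlows(context.Background(), &observerpb.GetFlowsRequest{
		Number: 20,
		Whitelist: []*flowpb.FlowFilter{{
			SourceIp:        []string{"10.0.0.168"},
			DestinationPort: []string{"69"},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}

	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		f := resp.GetFlow()
		if f == nil {
			continue
		}
		// Print the numeric security identity Hubble resolved for the sender;
		// on the broken path this shows 2 (reserved:world) instead of the
		// client pod's identity.
		fmt.Printf("%s  %s -> %s  source identity=%d\n",
			f.GetNodeName(), f.GetIP().GetSource(), f.GetIP().GetDestination(),
			f.GetSource().GetIdentity())
	}
}
```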
Taking another sysdump gets us a bit further. On the destination node:
{
"time": "2021-05-17T11:05:50.333292524Z",
"verdict": "FORWARDED",
"ethernet": {
"source": "3a:9f:09:8d:41:92",
"destination": "a6:8f:dc:66:a2:d3"
},
"IP": {
"source": "10.0.1.242",
"destination": "192.168.36.12",
"ipVersion": "IPv4"
},
"l4": {
"UDP": {
"source_port": 38673,
"destination_port": 69
}
},
"source": {
"identity": 2,
"labels": [
"reserved:world"
]
},
"destination": {
"identity": 1,
"labels": [
"reserved:host"
]
},
"Type": "L3_L4",
"node_name": "k8s2",
"event_type": {
"type": 5
},
"traffic_direction": "INGRESS",
"policy_match_type": 1,
"Summary": "UDP"
}

On the source node:
{
"time": "2021-05-17T11:05:50.333832826Z",
"verdict": "FORWARDED",
"ethernet": {
"source": "2a:4c:7d:b7:8a:cd",
"destination": "86:99:45:40:69:54"
},
"IP": {
"source": "10.0.1.242",
"destination": "192.168.36.12",
"ipVersion": "IPv4"
},
"l4": {
"UDP": {
"source_port": 38673,
"destination_port": 69
}
},
"source": {
"ID": 1773,
"identity": 7367,
"namespace": "202105171105k8sdatapathconfighostfirewallwithvxlan",
"labels": [
"k8s:io.cilium.k8s.policy.cluster=default",
"k8s:io.cilium.k8s.policy.serviceaccount=default",
"k8s:io.kubernetes.pod.namespace=202105171105k8sdatapathconfighostfirewallwithvxlan",
"k8s:zgroup=testClient"
],
"pod_name": "testclient-vd92d"
},
"destination": {
"identity": 6,
"labels": [
"reserved:remote-node"
]
},
"Type": "L3_L4",
"node_name": "k8s1",
"event_type": {
"type": 4,
"sub_type": 4
},
"trace_observation_point": "TO_OVERLAY",
"Summary": "UDP"
}

So the source identity (7367) is somehow lost in transit, even though it was set in the tunnel metadata (that happens just before the TO_OVERLAY trace point shown above).
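For background on "set in the tunnel metadata": in VXLAN mode Cilium encodes the sender's numeric security identity in the 24-bit VNI of the tunnel header. Below is a small sketch of reading that identity back out of a captured packet; the helper name and the raw header bytes are made up to match the identity seen above, not taken from Cilium's code.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// vxlanVNI extracts the 24-bit VNI from a raw 8-byte VXLAN header
// (1 byte flags, 3 reserved bytes, 3 VNI bytes, 1 reserved byte).
// In Cilium's VXLAN mode the VNI carries the sender's security identity.
func vxlanVNI(hdr []byte) (uint32, error) {
	if len(hdr) < 8 {
		return 0, fmt.Errorf("short VXLAN header: %d bytes", len(hdr))
	}
	// The VNI occupies bytes 4-6; shift off the trailing reserved byte.
	return binary.BigEndian.Uint32(hdr[4:8]) >> 8, nil
}

func main() {
	// Hypothetical on-wire header with VNI 0x001cc7 = 7367, i.e. the
	// testClient identity from the source-node flow above.
	hdr := []byte{0x08, 0x00, 0x00, 0x00, 0x00, 0x1c, 0xc7, 0x00}
	vni, err := vxlanVNI(hdr)
	if err != nil {
		panic(err)
	}
	fmt.Printf("source security identity carried in VNI: %d\n", vni) // 7367
}
```

If a pcap on the tunnel interface showed the correct VNI leaving k8s1 while the receiving datapath still resolved the source as reserved:world, that would narrow the bug down to the decapsulation path on k8s2.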
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
This issue has not seen any activity since it was marked stale.
CI failure
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19/419/testReport/junit/Suite-k8s-1/20/K8sDatapathConfig_Host_firewall_With_VXLAN/
00898f73_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
Seems to be the sibling of #15575