content/posts/2025-01-28-cli_use_cases.md
10 additions & 4 deletions
@@ -166,10 +166,16 @@ INFO[0000] flows table created
At this stage, the collector waits for incoming data. If nothing shows up yet, it means that no traffic is captured. Try opening the route of your application or updating the capture filters.

-Once some traffic is captured, the output looks like:
+- If you are using a standard cluster, cycle to the **packet drops** view. In this capture, we see that the traffic is dropped by OVS:
+
+You will need to investigate to find the root cause, but it is probably a configuration issue such as a network policy.
+
+- If you are using the `TechPreview` feature, cycle to the **network events** view. In this capture, we see that the traffic is blocked by a network policy:
-Cycle to the **network events** view. In this capture, we see that the traffic is blocked by a network policy since it reports the `NetpolNamespace` event.

Edit your network policies and give it another try.

Behind the scenes in our scenario, we used to have a deny-all policy on the pod label:
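To make "open the route of your application" concrete, here is a minimal sketch; the route name `my-app` is a hypothetical placeholder and the namespace `connectivity-scenario` is borrowed from the policy shown further down, so adjust both to your environment:

```bash
# Resolve the public host of the application's OpenShift route (route name is assumed).
HOST=$(oc get route my-app -n connectivity-scenario -o jsonpath='{.spec.host}')

# Send a request so the collector has some traffic to capture.
curl -sv "https://${HOST}" -o /dev/null
```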
@@ -178,7 +184,7 @@ kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-nodejs
-  namespace: sample-app
+  namespace: connectivity-scenario
spec:
  podSelector:
    matchLabels:
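For reference, a complete deny-all policy of the shape shown in this hunk could look like the sketch below; the pod label `app: nodejs` and the `policyTypes` stanza are assumptions inferred from the "deny all on the pod label" description, not copied from the post:

```bash
# Sketch of the deny-all policy described in the scenario (pod label is assumed).
cat <<'EOF' | oc apply -f -
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-nodejs
  namespace: connectivity-scenario
spec:
  podSelector:
    matchLabels:
      app: nodejs   # assumed label; the post only mentions "the pod label"
  policyTypes:
    - Ingress       # no ingress rules listed, so all ingress to the selected pods is denied
EOF
```

Selecting the pods and declaring `Ingress` in `policyTypes` without any `ingress` rules is what produces the deny-all behaviour observed in the capture.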
@@ -191,7 +197,7 @@ spec:
Once you have updated your policies, give your route another try until the issue is fixed:
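One hedged way to retry after editing the policies, reusing the same hypothetical route name as above, is to remove the offending policy and test the route again:

```bash
# One option, shown for illustration only: remove the deny-all policy entirely.
oc delete networkpolicy deny-nodejs -n connectivity-scenario

# Re-test the route (name assumed) and print the HTTP status code.
HOST=$(oc get route my-app -n connectivity-scenario -o jsonpath='{.spec.host}')
curl -s -o /dev/null -w '%{http_code}\n' "https://${HOST}"
```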