
bpf: Report dst identity in drop notifications #5052

Merged

Conversation

@joestringer (Member) commented Jul 31, 2018

Previously, the destination identity for drop messages was omitted even
if it had been looked up during the egress handling of the packet.
Retain the destination identity and report it in the drop monitor
notifications.

To achieve this, the complexity of the programs must be reduced:

  • IPv6 ipcache on egress reduces the maximum number of unique prefix lengths to 4 (Linux < 4.11 only)
  • Introduce a relax_verifier() call into the IPv6 conntrack lookup drop case

Fixes: #5007

Upgrade Impact

Users running Cilium on Linux versions older than 4.11 with CIDR policies containing many unique prefix lengths may find that, upon upgrade, their policy can no longer be installed, because Cilium 1.2 lowers the limit on the number of unique prefix lengths. With this PR, the limit is four unique prefix lengths for IPv6.
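To make the constraint concrete: on kernels before 4.11 there is no BPF LPM trie map, so longest-prefix matching is emulated with one lookup per unique prefix length, which is why the count of distinct lengths (not the count of CIDRs) is what gets capped. A sketch of counting them follows; the function name is illustrative, not Cilium's actual API.

```go
package main

import (
	"fmt"
	"net"
)

// uniquePrefixLengths counts the distinct prefix lengths across a set of
// CIDR rules. Two /24 rules share one prefix length, so they cost only one
// emulated LPM lookup on pre-4.11 kernels. (Illustrative helper, not
// Cilium's real implementation.)
func uniquePrefixLengths(cidrs []string) (int, error) {
	seen := make(map[int]struct{})
	for _, c := range cidrs {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			return 0, err
		}
		ones, _ := ipnet.Mask.Size()
		seen[ones] = struct{}{}
	}
	return len(seen), nil
}

func main() {
	// One /32 and two /24s: only two unique prefix lengths.
	n, _ := uniquePrefixLengths([]string{"10.0.0.1/32", "10.0.0.0/24", "192.168.0.0/24"})
	fmt.Println(n) // prints 2
}
```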


@joestringer joestringer requested a review from a team July 31, 2018 09:27
@joestringer joestringer added sig/datapath Impacts bpf/ or low-level forwarding details, including map management and monitor messages. needs-backport/1.2 labels Jul 31, 2018
@joestringer joestringer added this to the 1.2-bugfix milestone Jul 31, 2018
@joestringer joestringer changed the title 1.2-bugfix bpf: Report dst identity in drop notifications Jul 31, 2018
@joestringer (Member, Author)

test-me-please

@joestringer (Member, Author) commented Jul 31, 2018

Likely a legitimate failure (a pod stuck in the not-ready state is often an indicator of a BPF complexity explosion):

Stacktrace
/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:376
Endpoints not ready after timeout
Expected
    <bool>: false
to be true
/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/src/github.com/cilium/cilium/test/runtime/connectivity.go:163
Standard Output
Standard Error
STEP: server: map["IPv6":"f00d::a0f:0:0:bb4e" "EndpointID":"aca773f0f2abefffcdebf6b0b2e2b5ae4e0c78bf02f10c74189cae688127a2cf" "IPv6Gateway":"f00d::a0f:0:0:1" "Name":"server" "IPv4":"10.15.108.57" "NetworkID":"cd27c0cae3f3ce4848a5e362a2386ca0493a2a724aa4b642df182369d82a56a6"]
STEP: client: map["IPv6Gateway":"f00d::a0f:0:0:1" "Name":"client" "NetworkID":"cd27c0cae3f3ce4848a5e362a2386ca0493a2a724aa4b642df182369d82a56a6" "IPv6":"f00d::a0f:0:0:6259" "EndpointID":"721cf859f3ecab6e6872fc49396ce5566bfa17b1c707071a64a2ce8cb9dac861" "IPv4":"10.15.28.60"]
=== Test Finished at 2018-07-31T10:36:42Z====
===================== TEST FAILED =====================
cmd: sudo cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])   IPv6                 IPv4           STATUS   
           ENFORCEMENT        ENFORCEMENT                                                                                    
25177      Disabled           Disabled          34295      container:id.client           f00d::a0f:0:0:6259   10.15.28.60    not-ready   
41932      Disabled           Disabled          4          reserved:health               f00d::a0f:0:0:a3cc   10.15.24.91    ready       
47950      Disabled           Disabled          45446      container:id.server           f00d::a0f:0:0:bb4e   10.15.108.57   ready       

===================== Existing AfterFailed =====================
STEP: removing container client
STEP: removing container server

[[ATTACHMENT|0e2d8e1e_RuntimeConnectivityTest_Basic_Connectivity_test_Test_NAT46_connectivity_between_containers.zip]]


Failure from logs:

msg="Command execution failed" cmd="/var/lib/cilium/bpf/join_ep.sh /var/lib/cilium/bpf /var/run/cilium/state /var/run/cilium/state/25177_next lxc721cf true 25177" containerID=fabf8b0c97 endpointID=25177 error="exit status 1" ipv4=10.15.28.60 ipv6="f00d::a0f:0:0:6259" k8sPodName=/ policyRevision=3
msg="Join EP id=/var/run/cilium/state/25177_next ifname=lxc721cf" subsys=endpoint
msg="kernel version:  Linux runtime 4.9.17-040917-generic #201703220831 SMP Wed Mar 22 12:33:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux" subsys=endpoint
msg="clang version:  clang version 3.8.1 (tags/RELEASE_381/final) Target: x86_64-unknown-linux-gnu Thread model: posix InstalledDir: /usr/local/bin" subsys=endpoint
msg="'probe' is not a recognized processor for this target (ignoring processor)" subsys=endpoint
msg="'probe' is not a recognized processor for this target (ignoring processor)" subsys=endpoint
msg="'probe' is not a recognized processor for this target (ignoring processor)" subsys=endpoint
msg="'probe' is not a recognized processor for this target (ignoring processor)" subsys=endpoint
subsys=endpoint
msg="Prog section '2/10' rejected: Argument list too long (7)!" subsys=endpoint
msg=" - Type:         3" subsys=endpoint
msg=" - Instructions: 3179 (0 over limit)" subsys=endpoint
msg=" - License:      GPL" subsys=endpoint
subsys=endpoint
msg="Verifier analysis:" subsys=endpoint
subsys=endpoint
msg="Skipped 2669314 bytes, use 'verb' option for the full verbose log." subsys=endpoint
msg="[...]" subsys=endpoint
msg="d=0),min_value=0,max_value=0 R5=ctx R6=inv R7=ctx R8=inv,min_value=0 R9=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R10=fp" subsys=endpoint
msg="433: (71) r1 = *(u8 *)(r9 +18)" subsys=endpoint
msg="434: (71) r2 = *(u8 *)(r9 +19)" subsys=endpoint
msg="435: (67) r2 <<= 8" subsys=endpoint
msg="436: (4f) r2 |= r1" subsys=endpoint
msg="437: (55) if r2 != 0x0 goto pc+26" subsys=endpoint
msg=" R0=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R1=inv56 R2=inv,min_value=0,max_value=0 R5=ctx R6=inv R7=ctx R8=inv,min_value=0 R9=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R10=fp" subsys=endpoint
msg="438: (bf) r7 = r5" subsys=endpoint
msg="439: (b7) r1 = 0" subsys=endpoint
msg="440: (6b) *(u16 *)(r10 -128) = r1" subsys=endpoint
msg="441: (bf) r2 = r10" subsys=endpoint
msg="442: (07) r2 += -144" subsys=endpoint
msg="443: (18) r1 = 0x83c6b900" subsys=endpoint
msg="445: (85) call 1" subsys=endpoint
msg="446: (bf) r9 = r0" subsys=endpoint
msg="447: (b7) r1 = 0" subsys=endpoint
msg="448: (7b) *(u64 *)(r10 -248) = r1" subsys=endpoint
msg="449: (b7) r1 = 0" subsys=endpoint
msg="450: (7b) *(u64 *)(r10 -240) = r1" subsys=endpoint
msg="451: (b7) r1 = 0" subsys=endpoint
msg="452: (7b) *(u64 *)(r10 -232) = r1" subsys=endpoint
msg="453: (bf) r5 = r7" subsys=endpoint
msg="454: (15) if r9 == 0x0 goto pc+338" subsys=endpoint
msg=" R0=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R1=imm0,min_value=0,max_value=0 R5=ctx R6=inv R7=ctx R8=inv,min_value=0 R9=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R10=fp" subsys=endpoint
msg="455: (71) r1 = *(u8 *)(r9 +18)" subsys=endpoint
msg="456: (71) r2 = *(u8 *)(r9 +19)" subsys=endpoint
msg="457: (67) r2 <<= 8" subsys=endpoint
msg="458: (4f) r2 |= r1" subsys=endpoint
msg="459: (b7) r1 = 0" subsys=endpoint
msg="460: (7b) *(u64 *)(r10 -240) = r1" subsys=endpoint
msg="461: (b7) r1 = 0" subsys=endpoint
msg="462: (7b) *(u64 *)(r10 -232) = r1" subsys=endpoint
msg="463: (15) if r2 == 0x0 goto pc+329" subsys=endpoint
msg=" R0=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R1=imm0,min_value=0,max_value=0 R2=inv R5=ctx R6=inv R7=ctx R8=inv,min_value=0 R9=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R10=fp" subsys=endpoint
msg="464: (71) r7 = *(u8 *)(r10 -172)" subsys=endpoint
msg="465: (71) r8 = *(u8 *)(r10 -171)" subsys=endpoint
msg="466: (b7) r6 = 0" subsys=endpoint
msg="467: (6b) *(u16 *)(r10 -112) = r6" subsys=endpoint
msg="468: (b7) r1 = 4" subsys=endpoint
msg="469: (73) *(u8 *)(r10 -171) = r1" subsys=endpoint
msg="470: (15) if r7 == 0x6 goto pc+27" subsys=endpoint
msg=" R0=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R1=imm4,min_value=4,max_value=4 R2=inv R5=ctx R6=imm0,min_value=0,max_value=0 R7=inv56 R8=inv56 R9=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R10=fp" subsys=endpoint
msg="471: (15) if r7 == 0x11 goto pc+72" subsys=endpoint
msg=" R0=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R1=imm4,min_value=4,max_value=4 R2=inv R5=ctx R6=imm0,min_value=0,max_value=0 R7=inv56 R8=inv56 R9=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R10=fp" subsys=endpoint
msg="472: (7b) *(u64 *)(r10 -224) = r5" subsys=endpoint
msg="473: (55) if r7 != 0x3a goto pc+298" subsys=endpoint
msg=" R0=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R1=imm4,min_value=4,max_value=4 R2=inv R5=ctx R6=imm0,min_value=0,max_value=0 R7=inv56,min_value=58,max_value=58 R8=inv56 R9=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R10=fp fp-224=ctx" subsys=endpoint
msg="474: (bf) r3 = r10" subsys=endpoint
msg="475: (07) r3 += -72" subsys=endpoint
msg="476: (b7) r6 = 1" subsys=endpoint
msg="477: (79) r1 = *(u64 *)(r10 -224)" subsys=endpoint
msg="478: (79) r2 = *(u64 *)(r10 -216)" subsys=endpoint
msg="479: (b7) r4 = 1" subsys=endpoint
msg="480: (85) call 26" subsys=endpoint
msg="481: safe" subsys=endpoint
subsys=endpoint
msg="from 473 to 772: safe" subsys=endpoint
subsys=endpoint
msg="from 471 to 544: safe" subsys=endpoint
subsys=endpoint
msg="from 470 to 498: safe" subsys=endpoint
subsys=endpoint
msg="from 463 to 793: R0=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R1=imm0,min_value=0,max_value=0 R2=inv,min_value=0,max_value=0 R5=ctx R6=inv R7=ctx R8=inv,min_value=0 R9=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R10=fp" subsys=endpoint
msg="793: (71) r7 = *(u8 *)(r10 -172)" subsys=endpoint
msg="794: (61) r9 = *(u32 *)(r10 -196)" subsys=endpoint
msg="795: (61) r3 = *(u32 *)(r10 -200)" subsys=endpoint
msg="796: (61) r2 = *(u32 *)(r10 -204)" subsys=endpoint
msg="797: (61) r1 = *(u32 *)(r10 -208)" subsys=endpoint
msg="798: (7b) *(u64 *)(r10 -280) = r1" subsys=endpoint
msg="799: (b7) r6 = 0" subsys=endpoint
msg="800: (6b) *(u16 *)(r10 -112) = r6" subsys=endpoint
msg="801: (b7) r1 = 1" subsys=endpoint
msg="802: (73) *(u8 *)(r10 -171) = r1" subsys=endpoint
msg="803: (15) if r7 == 0x6 goto pc+34" subsys=endpoint
msg=" R0=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R1=imm1,min_value=1,max_value=1 R2=inv R3=inv R5=ctx R6=imm0,min_value=0,max_value=0 R7=inv56 R8=inv,min_value=0 R9=inv R10=fp" subsys=endpoint
msg="804: (7b) *(u64 *)(r10 -288) = r2" subsys=endpoint
msg="805: (15) if r7 == 0x11 goto pc+275" subsys=endpoint
msg=" R0=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R1=imm1,min_value=1,max_value=1 R2=inv R3=inv R5=ctx R6=imm0,min_value=0,max_value=0 R7=inv56 R8=inv,min_value=0 R9=inv R10=fp" subsys=endpoint
msg="806: (7b) *(u64 *)(r10 -296) = r3" subsys=endpoint
msg="807: (18) r8 = 0xffffff77" subsys=endpoint
msg="809: (55) if r7 != 0x3a goto pc+495" subsys=endpoint
msg=" R0=map_value(ks=20,vs=24,id=0),min_value=0,max_value=0 R1=imm1,min_value=1,max_value=1 R2=inv R3=inv R5=ctx R6=imm0,min_value=0,max_value=0 R7=inv56,min_value=58,max_value=58 R8=inv R9=inv R10=fp" subsys=endpoint
msg="810: (bf) r3 = r10" subsys=endpoint
msg="811: (07) r3 += -72" subsys=endpoint
msg="812: (bf) r1 = r5" subsys=endpoint
msg="BPF program is too large. Proccessed 65537 insn" subsys=endpoint

@joestringer (Member, Author)

Found that even on master, it's possible to configure a set of #defines that cause the IPv6 program to exceed complexity limits on Linux 4.9.17 (compiled with clang 3.8.1).

Interestingly, in this particular case it seems that the entire policy lookup is compiled out and yet the complexity limit is still exceeded.

@eloycoto (Member)

@joestringer @tgraf

What if we change the following line:
msg="Command execution failed" cmd="/var/lib/cilium/bpf/join_ep.sh /var/lib/cilium/bpf /var/run/cilium/state /var/run/cilium/state/25177_next lxc721cf true 25177" containerID=fabf8b0c97 endpointID=25177 error="exit status 1" ipv4=10.15.28.60 ipv6="f00d::a0f:0:0:6259" k8sPodName=/ policyRevision=3

And add some kind of assertion that checks for this after the test, like we do for deadlocks, NACKs, and similar failures?

Ideally we would fail the test on that; at a minimum we could add a warning.

Regards

@joestringer (Member, Author) commented Jul 31, 2018

@eloycoto yeah, I think that your suggestion would be helpful to identify when the error is due to bpf complexity issues. In practice, the test always fails, but we don't always know whether this was the reason. If we detected this in the after-tests checks, it would become more obvious.

My other approach is to try to maximise the complexity generated using the headers in the tree (bpf/*.h), then rely on the ginkgo BPF verifier test to try to check complexity. Some things may still slip through though, so it would still be useful to explicitly detect this in the after-test checks.
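One way the after-test check proposed above could detect this class of failure is to scan the endpoint logs for the verifier's failure markers. This is only a sketch: the marker strings are drawn from the log excerpts in this PR, not an exhaustive or official list.

```go
package main

import (
	"fmt"
	"strings"
)

// looksLikeComplexityFailure reports whether log output contains the
// tell-tale BPF verifier complexity markers seen in this PR's CI logs.
// (Sketch for an after-test check; the marker list is an assumption.)
func looksLikeComplexityFailure(log string) bool {
	markers := []string{
		"BPF program is too large",
		"Prog section",           // e.g. "Prog section '2/10' rejected"
		"Argument list too long", // E2BIG surfaced by the loader
	}
	hits := 0
	for _, m := range markers {
		if strings.Contains(log, m) {
			hits++
		}
	}
	// Require at least two markers to reduce false positives from
	// unrelated log lines.
	return hits >= 2
}

func main() {
	log := "Prog section '2/10' rejected: Argument list too long (7)!\n" +
		"BPF program is too large. Proccessed 65537 insn"
	fmt.Println(looksLikeComplexityFailure(log)) // prints true
}
```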

@joestringer (Member, Author)

The issue at hand seems to require the following datapath diff to reproduce:

diff --git a/bpf/lxc_config.h b/bpf/lxc_config.h
index b7d657087feb..e3135e02fb99 100644
--- a/bpf/lxc_config.h
+++ b/bpf/lxc_config.h
@@ -32,6 +32,9 @@
 #endif
 #define POLICY_MAP cilium_policy_foo
 #define NODE_MAC { .addr = { 0xde, 0xad, 0xbe, 0xef, 0xc0, 0xde } }
+#ifndef SKIP_DEBUG
+#define DEBUG
+#endif
 #define DROP_NOTIFY
 #define TRACE_NOTIFY
 #define CT_MAP6 cilium_ct6_111
@@ -45,8 +45,8 @@
 #define LB_L4
 #define CONNTRACK
 #define CONNTRACK_ACCOUNTING
-#define POLICY_INGRESS
-#define POLICY_EGRESS
+#undef POLICY_INGRESS
+#undef POLICY_EGRESS
 #define ENABLE_IPv4
 #define HAVE_L4_POLICY
 
diff --git a/bpf/node_config.h b/bpf/node_config.h
index 9af74694f9e9..3d5763652141 100644
--- a/bpf/node_config.h
+++ b/bpf/node_config.h
@@ -63,6 +63,6 @@
 #define IPCACHE_MAP_SIZE 512000
 #define POLICY_PROG_MAP_SIZE ENDPOINTS_MAP_SIZE
 #ifndef SKIP_DEBUG
-#define LB_DEBUG
+#undef LB_DEBUG
 #endif
-#define MONITOR_AGGREGATION 5
+#define MONITOR_AGGREGATION 0

@joestringer (Member, Author)

Next step: Investigate using relax_verifier() to introduce a pruning checkpoint for the verifier to hopefully reduce complexity on older kernels with the above config.

@joestringer joestringer force-pushed the submit/bpf-monitor-identity-on-drop branch from f76f490 to 93322cf Compare August 8, 2018 13:53
@joestringer (Member, Author)

test-me-please

@joestringer joestringer added upgrade-impact This PR has potential upgrade or downgrade impact. pending-review labels Aug 8, 2018
@joestringer (Member, Author) commented Aug 8, 2018

Couldn't import a policy in the test "runtime.RuntimePolicies Checks CIDR L3 Policy", presumably because it uses more unique prefix lengths than we now allow with this PR.

Strictly speaking, the limit is only lowered for IPv6, so if the limit check distinguished IPv4 from IPv6, this test would pass.

@joestringer joestringer force-pushed the submit/bpf-monitor-identity-on-drop branch from 93322cf to 45ac598 Compare August 8, 2018 15:59
@joestringer joestringer requested a review from a team August 8, 2018 15:59
@joestringer joestringer requested a review from a team as a code owner August 8, 2018 15:59
if ipv6 {
	return net.IPv6len*8 + 1
} else {


if block ends with a return statement, so drop this else and outdent its block
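With the suggestion applied, the helper might read as follows. This is a sketch: the function name and the IPv4 branch are assumptions, since only the IPv6 branch is visible in the review snippet.

```go
package main

import (
	"fmt"
	"net"
)

// maxPrefixLengths returns how many distinct prefix lengths exist for the
// given protocol: every mask width from /0 up to the full address length.
// (Name and IPv4 branch assumed; only the IPv6 branch appears above.)
func maxPrefixLengths(ipv6 bool) int {
	if ipv6 {
		return net.IPv6len*8 + 1 // 129: /0 through /128
	}
	return net.IPv4len*8 + 1 // 33: /0 through /32
}

func main() {
	fmt.Println(maxPrefixLengths(true), maxPrefixLengths(false)) // prints 129 33
}
```

Dropping the else lets the early return stand on its own, which is the idiomatic Go shape golint is asking for.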

@joestringer (Member, Author)

test-me-please

@joestringer joestringer force-pushed the submit/bpf-monitor-identity-on-drop branch from 45ac598 to 3e48d61 Compare August 8, 2018 18:10
@joestringer (Member, Author)

test-me-please

@tgraf (Member) commented Aug 8, 2018

Hit #5137

Due to features merged for the 1.2 feature window, the complexity of the
datapath has increased. To prevent issues at runtime when attempting to
compile and load / verify the datapath code, reduce the maximum number
of prefix lengths that we support.

As always, this limitation will only affect users running Cilium on Linux
versions earlier than 4.11.

Signed-off-by: Joe Stringer <joe@covalent.io>
This significantly reduces the complexity of the IPV6_FROM_LXC tail call
when performing load/verification under particular sets of config options,
such as with egress policy disabled.

Before, Linux 4.15 with policy, monitor aggregation disabled:

  $ make -j2 -C bpf && sudo ./test/bpf/check-complexity.sh
  ...
  Prog section '2/10' (tail_call IPV6_FROM_LXC) loaded (28)!
  processed 57631 insns, stack depth 320
  ...

After:

  $ make -j2 -C bpf && sudo ./test/bpf/check-complexity.sh
  ...
  Prog section '2/10' (tail_call IPV6_FROM_LXC) loaded (28)!
  processed 23365 insns, stack depth 312
  ...

Signed-off-by: Joe Stringer <joe@covalent.io>
Due to the complexity of the datapaths, IPv6 and IPv4 have different
unique prefix length limits from each other. To maximise the number of
supported unique prefix lengths, split this logic by L3 protocol and
enforce limits using the known good values for each protocol.

Signed-off-by: Joe Stringer <joe@covalent.io>
Previously, the destination identity for drop messages was omitted even
if it had been looked up during the egress handling of the packet.
Retain the destination identity and report it in the drop monitor
notifications.

Fixes: cilium#5007
Signed-off-by: Joe Stringer <joe@covalent.io>
@joestringer joestringer force-pushed the submit/bpf-monitor-identity-on-drop branch from 3e48d61 to bcca4e4 Compare August 9, 2018 08:21
@joestringer (Member, Author)

test-me-please

@joestringer (Member, Author)

Hit #5065:

https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Validated/5522/testReport/junit/k8s-1/7/K8sDemosTest_Tests_Star_Wars_Demo/

$ grep "level=warning" kubelet*
kubelet-k8s1-1.7.log:Aug 09 09:49:36 k8s1 kubelet[3411]: level=warning msg="Unable to create endpoint" containerID=d230c7c3c3ab3d4fb7da69e2546552d2e7be775f8f6ac1aecc211111e9769f91 endpointID=683 error="[PUT /endpoint/{id}][500] putEndpointIdFailed  endpoint 683 did not synchronously regenerate after timeout" eventUUID=65e49dcd-9bb9-11e8-8077-0800275579e8 subsys=cilium-cni
kubelet-k8s1-1.7.log:Aug 09 09:49:56 k8s1 kubelet[3411]: level=warning msg="Unable to create endpoint" containerID=f8182bf253b466ff9248005f3777013a3d787c6f3b52aae268166a50eead8554 endpointID=47614 error="[PUT /endpoint/{id}][500] putEndpointIdFailed  endpoint 47614 did not synchronously regenerate after timeout" eventUUID=66634fc5-9bb9-11e8-a10e-0800275579e8 subsys=cilium-cni
kubelet-k8s1-1.7.log:Aug 09 09:50:01 k8s1 kubelet[3411]: level=warning msg="Unable to create endpoint" containerID=38fbfaf1fb1bc0074c892d7313e764d69f8f8832aa3ab9324b4894076e81ba31 endpointID=24965 error="[PUT /endpoint/{id}][500] putEndpointIdFailed  endpoint 24965 did not synchronously regenerate after timeout" eventUUID=6632b206-9bb9-11e8-91d2-0800275579e8 subsys=cilium-cni
kubelet-k8s1-1.7.log:Aug 09 09:50:35 k8s1 kubelet[3411]: level=warning msg="Unable to create endpoint" containerID=094615eea28cdd58553637b9e1c3860006023769735bdbfab436a6123c481ea5 endpointID=64098 error="[PUT /endpoint/{id}][500] putEndpointIdFailed  endpoint 64098 did not synchronously regenerate after timeout" eventUUID=88290ff7-9bb9-11e8-aebd-0800275579e8 subsys=cilium-cni
kubelet-k8s1-1.7.log:Aug 09 09:51:20 k8s1 kubelet[3411]: level=warning msg="Unable to create endpoint" containerID=49c52b2317741984c8f289ebe1f73cb88b6565fee83e9a3c386b2639f7c2c32d endpointID=9873 error="[PUT /endpoint/{id}][500] putEndpointIdFailed  endpoint 9873 did not synchronously regenerate after timeout" eventUUID=96b8da64-9bb9-11e8-bee4-0800275579e8 subsys=cilium-cni
kubelet-k8s1-1.7.log:Aug 09 09:52:09 k8s1 kubelet[3411]: level=warning msg="Unable to create endpoint" containerID=ab7b1e5b8e15c5e441050ae197769da4ab02648e38fa22de897c5dc80d4a54a4 endpointID=27749 error="context deadline exceeded" eventUUID=abf1f840-9bb9-11e8-b304-0800275579e8 subsys=cilium-cni
kubelet-k8s1-1.7.log:Aug 09 09:53:25 k8s1 kubelet[3411]: level=warning msg="Unable to create endpoint" containerID=cc0f2b4a9826742f345174356164592da139e10356033af8471fcda0c90f37ce endpointID=34923 error="[PUT /endpoint/{id}][500] putEndpointIdFailed  endpoint 34923 did not synchronously regenerate after timeout" eventUUID=e36a5043-9bb9-11e8-940b-0800275579e8 subsys=cilium-cni

57949187_K8sDemosTest_Tests_Star_Wars_Demo.zip

@joestringer (Member, Author)

Hit also #5154, which should be harmless. Re-attempting CI run.

@joestringer (Member, Author)

test-me-please

@tgraf tgraf merged commit f4408f2 into cilium:master Aug 9, 2018
@joestringer joestringer deleted the submit/bpf-monitor-identity-on-drop branch August 10, 2018 09:08
@tgraf tgraf mentioned this pull request Aug 11, 2018
@joestringer joestringer added the release-note/minor This PR changes functionality that users may find relevant to operating Cilium. label Aug 13, 2018
Successfully merging this pull request may close these issues.

Monitor omits destination identity on egress
5 participants