
Enable CiliumEndpointSlice feature #17658

Merged: 1 commit into cilium:master on Nov 11, 2021
Conversation

@krishgobinath (Contributor) commented Oct 20, 2021

Enable the CiliumEndpointSlice feature; see the design in the CFP.

  1. A CiliumEndpointSlice (CES) object packs a group of slim CiliumEndpoints (CEPs) and is broadcast to all cilium-agents running in the cluster.
  2. If the CiliumEndpointSlice feature is enabled, cilium-agents no longer watch for CiliumEndpoint updates; instead they watch for CiliumEndpointSlices. The CES watcher calls the endpointUpdated/endpointDeleted functions for every CEP present in a CES.
  3. Only the cilium-operator watches for CEPs; it creates/updates/deletes CiliumEndpointSlice objects based on CiliumEndpoint updates.
  4. By default, CiliumEndpoints are grouped by security identity ID: pods with the same security identity ID are placed together in a single CiliumEndpointSlice.
  5. By default, a maximum of 100 CiliumEndpoints can be grouped in a single CiliumEndpointSlice (see the sketch after this list).
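
For illustration, here is a minimal Go sketch of the grouping rule in points 4 and 5. The type and function names are hypothetical stand-ins, not the PR's actual operator code:

```go
// Illustrative sketch only (hypothetical types, not the PR's code):
// bucket CEPs by security identity ID and cap each slice at 100 CEPs.
package main

import "fmt"

const maxCEPsPerCES = 100

// CoreCEP stands in for the slim CiliumEndpoint carried inside a CES.
type CoreCEP struct {
	Name       string
	IdentityID int64
}

// groupByIdentity returns, for each identity ID, one or more groups of at most
// maxCEPsPerCES endpoints; each group would become one CiliumEndpointSlice.
func groupByIdentity(ceps []CoreCEP) map[int64][][]CoreCEP {
	byID := map[int64][]CoreCEP{}
	for _, cep := range ceps {
		byID[cep.IdentityID] = append(byID[cep.IdentityID], cep)
	}
	out := map[int64][][]CoreCEP{}
	for id, eps := range byID {
		for start := 0; start < len(eps); start += maxCEPsPerCES {
			end := start + maxCEPsPerCES
			if end > len(eps) {
				end = len(eps)
			}
			out[id] = append(out[id], eps[start:end])
		}
	}
	return out
}

func main() {
	ceps := []CoreCEP{{"pod-a", 42}, {"pod-b", 42}, {"pod-c", 7}}
	fmt.Println(groupByIdentity(ceps))
}
```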

This entire feature was split across multiple PRs; each PR was reviewed separately and merged into the cep-scalability branch.

List of PRs reviewed and merged into the cep-scalability branch:

  1. Create CiliumEndpointBatch CRD #16864
  2. CiliumEndpointBatch implementation #16945
  3. CiliumEndpointBatch support in cilium-agent #17207
  4. CiliumEndointBatch group CEPs by Identity #17345
  5. Add CiliumEndpointBatch metrics in Operator #17410
  6. Fix bugs in CiliumEndpointBatch metrics #17498
  7. Identity based batching race condition Issues #17520
  8. Refactor CEBtoCEPs and CEPtoCEB maps #17543
  9. Make CiliumEndpointBatches as Namespace scoped #17554
  10. Process source node CiliumEndpoints in CEB watch events #17571
  11. Change name from CiliumEndpointBatch to CiliumEndpointSlice #17638

List of pending work in the CiliumEndpointSlice feature

A few of them are tracked here:

  1. Document the racing issue when the identity of a pod changes at runtime.
  2. Known issue with egress gateway, tracked here.
  3. Move the CiliumEndpointSlice API to v2beta1, tracked here.
  4. Enable CES feature testing on K8s version 1.21 (#17698).
  5. Watch CES objects in the operator; if they are modified by bad actors, reprogram them to the original values (tracked here).

Signed-off-by: Gobinath Krishnamoorthy gobinathk@google.com

@maintainer-s-little-helper bot added the dont-merge/needs-release-note label (The author needs to describe the release impact of these changes.) on Oct 20, 2021
@Weil0ng (Contributor) commented Oct 20, 2021

test-me-please

@Weil0ng (Contributor) commented Oct 21, 2021

test-me-please

@krishgobinath (Contributor, Author) commented Oct 21, 2021

The ConformanceEKS (ci-eks) failure looks flaky. I looked at the cilium-config from the sysdump; the CES feature is not enabled at all.

https://github.com/cilium/cilium/actions/runs/1366449190

gke-stable (test-gke) Failures

  1. Suite-k8s-1.19.K8sDatapathConfig DirectRouting Check connectivity with direct routing and endpointRoutes
05:58:05 STEP: Creating namespace 202110210558k8sdatapathconfigdirectroutingcheckconnectivitywith
05:58:05 STEP: Deploying demo_ds.yaml in namespace 202110210558k8sdatapathconfigdirectroutingcheckconnectivitywith
05:58:09 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
05:58:15 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
05:58:15 STEP: WaitforNPods(namespace="202110210558k8sdatapathconfigdirectroutingcheckconnectivitywith", filter="")
06:02:15 STEP: WaitforNPods(namespace="202110210558k8sdatapathconfigdirectroutingcheckconnectivitywith", filter="") => timed out waiting for pods with filter  to be ready: 

Pods aren't ready.
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/6705/

@krishgobinath (Contributor, Author):

k8s-1.20-kernel-4.19 (test-1.20-4.19) Failure:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19/1590/
Cilium-agent crash; fixed this issue as part of a commit.

The root cause of the issue: one of the deleted pods' IPv6 addresses was reallocated for cilium-health before that pod's ipcache entry was deleted.

2021-10-21T05:58:37.482536733Z level=debug msg="Allocated random IP" ip="fd02::1b8" owner=health subsys=ipam
2021-10-21T05:58:37.482539137Z level=debug msg="IPv6 health endpoint address: fd02::1b8" subsys=daemon

The sequence of events:

  1. Cilium-agent bootup.
  2. Resync CESs from the apiserver; update the ipcache for all CEPs (including stale CEPs).
  3. cilium-health reallocates one of the stale CEP IP addresses; this puts an empty K8sMeta on this IPv6 address.
  4. Stale CEPs are deleted through a CES update; this causes a CEP ipcache delete, which crashed the agent (a minimal illustration follows).
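
To make the ordering concrete, here is a minimal, hypothetical Go sketch (not the actual fix in the commit) of guarding an ipcache delete so that a stale-CEP removal cannot drop an address that cilium-health has since reallocated:

```go
// Minimal sketch of the ordering hazard described above, not the actual fix.
// All types and functions here are hypothetical stand-ins for ipcache logic.
package main

import "fmt"

type metadata struct{ owner string }

var ipcache = map[string]metadata{}

// upsert records (or overwrites) the owner of an IP.
func upsert(ip, owner string) { ipcache[ip] = metadata{owner: owner} }

// deleteIfOwnedBy removes the entry only if it still belongs to the expected
// owner, so a stale-CEP delete cannot remove an address that cilium-health
// has since reallocated.
func deleteIfOwnedBy(ip, owner string) {
	if meta, ok := ipcache[ip]; ok && meta.owner == owner {
		delete(ipcache, ip)
		return
	}
	fmt.Printf("skip delete of %s: now owned by %q\n", ip, ipcache[ip].owner)
}

func main() {
	upsert("fd02::1b8", "stale-cep")          // step 2: resync adds the stale CEP
	upsert("fd02::1b8", "health")             // step 3: health endpoint reuses the IP
	deleteIfOwnedBy("fd02::1b8", "stale-cep") // step 4: stale delete is ignored
}
```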

@krishgobinath (Contributor, Author):

k8s-1.16-kernel-netnext Failures

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/1733/

Four tests failed with the net-next kernel:

Test Result (4 failures / +4)
Suite-k8s-1.16.K8sEgressGatewayTest tunnel disabled with endpointRoutes enabled Checks egress policy and basic connectivity both work
Suite-k8s-1.16.K8sEgressGatewayTest tunnel disabled with endpointRoutes disabled Checks egress policy and basic connectivity both work
Suite-k8s-1.16.K8sEgressGatewayTest tunnel vxlan with endpointRoutes enabled Checks egress policy and basic connectivity both work
Suite-k8s-1.16.K8sEgressGatewayTest tunnel vxlan with endpointRoutes disabled Checks egress policy and basic connectivity both work

@Weil0ng (Contributor) commented Oct 21, 2021

test-1.20-4.19

@Weil0ng (Contributor) commented Oct 21, 2021

Found the reason for the netnext failures (#17658 (comment)):

egressPolicyManager depends on the CEP watcher to populate the egress policy map; we need to add the hook in the CES watcher.

The egress policy updater relies on endpoint.Identity.labels to match an endpoint to the egress policy selector (https://github.com/cilium/cilium/blob/master/pkg/egresspolicy/manager.go#L242), but by design a CES won't carry pod labels in its CoreCEP struct. So the egress policy won't match in this case and the egress map is always empty.

We discussed this issue with @MasterZ40 offline; the fix is to use Identity.id as a middle ground. CoreCEP carries the numerical ID of an endpoint's security identity, and the agent is already watching all ciliumidentities (https://github.com/cilium/cilium/blob/master/pkg/k8s/identitybackend/identity.go#L272), so the egress policy manager could just grab the numerical security ID from the endpoint, look up the full identity in the cache, and then retrieve the labels (see the sketch below).
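
A rough sketch of that lookup path, with hypothetical types standing in for CoreCEP and the agent's identity cache (an illustration of the idea, not the code tracked in #17669):

```go
// Hedged sketch of the proposed fix, not the merged implementation.
// The cache and types below are hypothetical stand-ins.
package main

import "fmt"

type CoreCEP struct {
	Name       string
	IdentityID int64 // numerical security identity carried by the CES
}

type Identity struct {
	ID     int64
	Labels map[string]string
}

// identityCache stands in for the agent's watch-driven cache of CiliumIdentities.
var identityCache = map[int64]Identity{
	1234: {ID: 1234, Labels: map[string]string{"k8s:app": "client"}},
}

// labelsFor resolves an endpoint's labels via its numeric identity ID,
// which is what the egress policy manager needs for selector matching.
func labelsFor(cep CoreCEP) (map[string]string, bool) {
	id, ok := identityCache[cep.IdentityID]
	if !ok {
		return nil, false
	}
	return id.Labels, true
}

func main() {
	labels, ok := labelsFor(CoreCEP{Name: "client-pod", IdentityID: 1234})
	fmt.Println(ok, labels)
}
```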

This failure is tracked in #17669.

@aanm Can we treat this as a known issue and unblock merging? All other tests are expected to pass.

@Weil0ng (Contributor) commented Oct 22, 2021

test-1.19-5.4

@Weil0ng (Contributor) commented Oct 22, 2021

test-1.21-4.9

@Weil0ng (Contributor) commented Oct 22, 2021

test-gke

@aanm (Member) commented Oct 22, 2021

Quoting @Weil0ng above: "Can we treat this as a known issue and unblock merging? All other tests are expected to pass."

@Weil0ng, as long as it's documented.

@aanm (Member) left a review comment:

One small nit I have is that there are still lots of files named ciliumendpointbatch and not ciliumendpointslice.

@Weil0ng (Contributor) commented Oct 23, 2021

Should this be marked ready for review? Can you provide context for reviewers in the PR description/commit message?

@krishgobinath krishgobinath marked this pull request as ready for review October 25, 2021 17:17
@krishgobinath krishgobinath requested a review from a team October 25, 2021 17:17
@krishgobinath krishgobinath requested a review from a team as a code owner October 25, 2021 17:17
@joestringer (Member):

We're aware of an issue affecting the Jenkins-based infrastructure, so no action is necessary from you on that side. We can look out for the results of the GHA-based infrastructure (Conformance*) to check whether the latest failure has been resolved.

@pchaigno (Member) commented Nov 9, 2021

Provisioning issues have been fixed:
/test

@krishgobinath (Contributor, Author):

  • runtime (test-runtime) Failure
    Unable to provision VM, infrastructure issue.

@joestringer (Member):

/test-runtime

@joestringer (Member) commented Nov 9, 2021

The test-1.16-netnext run looks like it needs some additional attention. Unless there is a very recent regression on master and this PR has been rebased to include it, it seems likely that the failures are somehow related to this PR, as one of the recent PR runs for this job succeeded about 5 hours ago (note: this is the PR listing, so it can include failures related to other PRs).

@Weil0ng (Contributor) commented Nov 9, 2021

test-1.16-netnext run looks like it needs some additional attention. Unless there is a very recent regression on master and this PR has been rebased to include it, it seems likely that the failures are somehow related to this PR as one of the recent PR runs for this job has succeeded about 5 hours ago (note, this is the PR listing so can include failures related to other PRs).

This is very odd...we know that the egress gateway tests WILL fail w/ CES (see #17669), but per my understanding, the CI here does not enable CES...

Edit: actually on a closer look, these are failing for a different reason...the known issue is that the traffic won't be SNATed to egress IP correctly, but this is packet loss...

@joestringer (Member):

Edit: actually on a closer look, these are failing for a different reason...the known issue is that the traffic won't be SNATed to egress IP correctly, but this is packet loss...

Sometimes, failure to NAT or reverse-NAT correctly can exhibit as packet loss because the packets end up at the wrong destination or replies arrive back with the wrong addresses, hence the stack doesn't hand the response back to the application socket.

@Weil0ng (Contributor) commented Nov 9, 2021

Sometimes, failure to NAT or reverse-NAT correctly can exhibit as packet loss because the packets end up at the wrong destination or replies arrive back with the wrong addresses, hence the stack doesn't hand the response back to the application socket.

Makes sense, but this is pinging from a pod to a node within the cluster... I don't see how this PR would affect this path; plus, CES is not enabled at all... maybe the test is flaky?

@joestringer (Member):

Given that all 4/4 egress gateway tests failed with a consistent error and a lack of similar past failures, the most likely explanation is that something in this PR is triggering the failure.

@krishgobinath (Contributor, Author):

I did a quick comparison of test/k8sT/Egress.go between master and the krishgobinath:ceb-ci2 branch; I see a few differences in the code and am not sure why the previous rebase didn't pick up those changes. All of the failing tests are from this file.
That explains why we see this failure in this branch and not in other PRs based on the master branch.

@krishgobinath (Contributor, Author):

Just for the record, we are currently seeing failures only in the net-next based tests, and the failing tests are related to egress gateways.

Validated these tests on a dev machine; all passed.
Test Command:
K8S_VERSION=1.16 NETNEXT=1 KUBEPROXY=0 K8S_NODES=3 NO_CILIUM_ON_NODE="k8s3" ginkgo -v --focus="k8s.*K8sEgressGatewayTest" --tags=integration_tests -v
Test Results
Ran 8 of 416 Specs in 1460.863 seconds
SUCCESS! -- 8 Passed | 0 Failed | 0 Pending | 408 Skipped
PASS

Enable CiliumEndpointSlice feature

1) A CiliumEndpointSlice (CES) object packs a group of slim CiliumEndpoints (CEPs) and
is broadcast to all cilium-agents running in the cluster.

2) If the CiliumEndpointSlice feature is enabled, cilium-agents no longer watch for CiliumEndpoint
updates; instead they watch for CiliumEndpointSlices. The CES watcher calls the
endpointUpdated/endpointDeleted functions for every CEP present in a CES.

3) Only the cilium-operator watches for CEPs; it creates/updates/deletes CiliumEndpointSlice objects based on CiliumEndpoint updates.

4) By default, CiliumEndpoints are grouped by security identity ID.
Pods with the same security identity ID are placed together in a single CiliumEndpointSlice.

5) By default, a maximum of 100 CiliumEndpoints can be grouped in a single CiliumEndpointSlice.

This entire feature was split across multiple PRs; each PR was reviewed separately and merged into the cep-scalability branch.

Signed-off-by: Gobinath Krishnamoorthy <gobinathk@google.com>
@joestringer (Member):

/test-1.16-netnext

@krishgobinath (Contributor, Author):

Thank you @joestringer for the re-run of the netnext CI test; I see it passed now.
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/1913/

@joestringer (Member):

Everything should be green in CI now; all other tests were already green, and the net-next run was also green this time. I'll run once more just to check that there weren't any other consistent failures, and then I think this should be good to merge.

@joestringer (Member):

/test

@joestringer (Member):

The ci-aks job failed during Cilium install due to warnings; it seems like the cilium-agent backends couldn't be reached to fetch the agent status. 🤔

@joestringer (Member):

/ci-aks

@krishgobinath (Contributor, Author):

The gke-stable (test-gke) failure is related to a cluster access issue; all of a sudden we lost the connection to the cluster.

[2021-11-10T19:47:23.923Z] error when deleting "cilium-16b645389d12cc54.yaml": Delete "https://34.127.123.192/apis/apps/v1/namespaces/kube-system/deployments/cilium-operator": dial tcp 34.127.123.192:443: connect: connection refused

@Weil0ng (Contributor) commented Nov 10, 2021

test-gke

Job 'Cilium-PR-K8s-GKE' failed and has not been observed before, so may be related to your PR:


Test Name

K8sServicesTest Checks service across nodes Checks ClusterIP Connectivity

Failure Output

FAIL: Expected

If it is a flake, comment /mlh new-flake Cilium-PR-K8s-GKE so I can create a new GitHub issue to track it.

@krishgobinath (Contributor, Author) commented Nov 11, 2021

Again the gke-stable test failed; this time one of the test pods is not ready. Its readiness probe failed in BeforeAll, and hence all tests in that spec failed.

gke-stable passed in earlier runs; not sure whether it requires a rebase onto master?

@joestringer @Weil0ng any thoughts here?

FAIL: Expected
    <*errors.errorString | 0xc000696730>: {
        s: "timed out waiting for pods with filter -l zgroup=test-k8s2 to be ready: 4m0s timeout expired",
    }
to be nil
Events:
	   Type     Reason     Age                  From               Message
	   ----     ------     ----                 ----               -------
	   Normal   Scheduled  4m18s                default-scheduler  Successfully assigned default/test-k8s2-79ff876c9d-r4pnc to gke-cilium-ci-10-cilium-ci-10-f656816e-cm7x
	   Normal   Pulling    4m16s                kubelet            Pulling image "docker.io/cilium/echoserver:1.10.1"
	   Normal   Pulled     4m11s                kubelet            Successfully pulled image "docker.io/cilium/echoserver:1.10.1" in 4.818380138s
	   Normal   Created    4m11s                kubelet            Created container web
	   Normal   Started    4m11s                kubelet            Started container web
	   Normal   Pulled     4m11s                kubelet            Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal   Created    4m11s                kubelet            Created container udp
	   Normal   Started    4m11s                kubelet            Started container udp
	   Warning  Unhealthy  78s (x18 over 4m8s)  kubelet            Readiness probe failed: Get "http://10.48.1.1:80/": dial tcp 10.48.1.1:80: connect: connection refused

@Weil0ng (Contributor) commented Nov 11, 2021

The GKE failure looks like infra instability to me... 1 of 2 test pods is ready, the other one fails its health check...

@Weil0ng (Contributor) commented Nov 11, 2021

Created #17857; retriggering here.

@Weil0ng (Contributor) commented Nov 11, 2021

test-gke

@krishgobinath (Contributor, Author):

Just to cross-check that the CES feature is exercised in the K8s 1.22 based CI test, I
downloaded all test results from https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9/123/
and checked the operator logs from one of the tests:

2021-11-10T19:47:18.879150338Z level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=cilium-operator-generic
2021-11-10T19:47:18.879158518Z level=info msg="  --debug='true'" subsys=cilium-operator-generic
2021-11-10T19:47:18.879160937Z level=info msg="  --disable-cnp-status-updates='false'" subsys=cilium-operator-generic
2021-11-10T19:47:18.879163170Z level=info msg="  --disable-endpoint-crd='false'" subsys=cilium-operator-generic
2021-11-10T19:47:18.879165440Z level=info msg="  --enable-cilium-endpoint-slice='true'" subsys=cilium-operator-generic
2021-11-10T19:47:36.856128249Z level=info msg="Create and run CES controller, start CEP watcher" subsys=cilium-operator-generic
2021-11-10T19:47:36.856152842Z level=info msg="CES controller workqueue configuration" subsys=ces-controller workQueueBurstLimit=100 workQueueQPSLimit=10 workQueueSyncBackOff=1s
2021-11-10T19:47:36.856418056Z level=info msg="Leading the operator HA deployment" subsys=cilium-operator-generic
2021-11-10T19:47:36.957634616Z level=debug msg="Generated CES" CESName=ces-x9bnhszw9-f9bq2 subsys=ces-controller
2021-11-10T19:47:36.957655328Z level=debug msg="Generated CES" CESName=ces-sdf6xvcsj-rncmr subsys=ces-controller
2021-11-10T19:47:36.957658090Z level=debug msg="Generated CES" CESName=ces-dxfkbclgh-lxptw subsys=ces-controller
2021-11-10T19:47:36.957660331Z level=debug msg="Generated CES" CESName=ces-tzm8jcdpl-k4wkh subsys=ces-controller
2021-11-10T19:47:36.957662467Z level=debug msg="Generated CES" CESName=ces-z8byv5mvm-lf2s6 subsys=ces-controller
2021-11-10T19:47:36.957671717Z level=debug msg="Generated CES" CESName=ces-vnvxfl97j-945w6 subsys=ces-controller
2021-11-10T19:47:36.957673995Z level=debug msg="Successfully synced all CESs locally" subsys=ces-controller
2021-11-10T19:47:36.964604917Z level=debug msg="Queueing CEP in the CES" CEPName=grafana-5747bcc8f9-wrtx9 CESName=ces-tzm8jcdpl-k4wkh subsys=ces-controller totalCEPCount=1
2021-11-10T19:47:36.964641783Z level=debug msg="Queueing CEP in the CES" CEPName=prometheus-655fb888d7-qkk7r CESName=ces-z8byv5mvm-lf2s6 subsys=ces-controller totalCEPCount=1
2021-11-10T19:47:36.964646478Z level=debug msg="Queueing CEP in the CES" CEPName=coredns-755cd654d4-fqhvv CESName=ces-vnvxfl97j-945w6 subsys=ces-controller totalCEPCount=1
2021-11-10T19:47:36.964649790Z level=debug msg="Queueing CEP in the CES" CEPName=testclient-s8lr5 CESName=ces-x9bnhszw9-f9bq2 subsys=ces-controller totalCEPCount=2
2021-11-10T19:47:36.964663938Z level=debug msg="Queueing CEP in the CES" CEPName=testclient-wq526 CESName=ces-x9bnhszw9-f9bq2 subsys=ces-controller totalCEPCount=2
2021-11-10T19:47:36.964668303Z level=debug msg="Queueing CEP in the CES" CEPName=testds-jhmhc CESName=ces-sdf6xvcsj-rncmr subsys=ces-controller totalCEPCount=2
2021-11-10T19:47:36.964671710Z level=debug msg="Queueing CEP in the CES" CEPName=testds-rf5fw CESName=ces-sdf6xvcsj-rncmr subsys=ces-controller totalCEPCount=2
2021-11-10T19:47:36.964674975Z level=debug msg="Queueing CEP in the CES" CEPName=test-k8s2-7f96d84c65-f29rm CESName=ces-dxfkbclgh-lxptw subsys=ces-controller totalCEPCount=1

@krishgobinath (Contributor, Author):

Similarly, to confirm the CES feature on the K8s 1.21 based CI test, I
downloaded all test results from https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/131/
and checked the operator logs from one of the tests:

2021-11-10T20:10:05.229746284Z level=info msg="  --enable-ipv4-egress-gateway='false'" subsys=cilium-operator-generic
2021-11-10T20:10:05.229748355Z level=info msg="  --enable-ipv6='true'" subsys=cilium-operator-generic
2021-11-10T20:10:05.229750298Z level=info msg="  --enable-k8s-api-discovery='false'" subsys=cilium-operator-generic
2021-11-10T20:10:05.229753525Z level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=cilium-operator-generic
2021-11-10T20:10:05.229755595Z level=info msg="  --enable-k8s-event-handover='false'" subsys=cilium-operator-generic
2021-11-10T20:10:05.229761491Z level=info msg="  --enable-local-redirect-policy='false'" subsys=cilium-operator-generic

2021-11-10T20:10:19.145462407Z level=info msg="Create and run CES controller, start CEP watcher" subsys=cilium-operator-generic
2021-11-10T20:10:19.145509133Z level=info msg="CES controller workqueue configuration" subsys=ces-controller workQueueBurstLimit=100 workQueueQPSLimit=10 workQueueSyncBackOff=1s
2021-11-10T20:10:19.246888059Z level=debug msg="Generated CES" CESName=ces-n6rdftf2p-pzpgh subsys=ces-controller
2021-11-10T20:10:19.246916373Z level=debug msg="Generated CES" CESName=ces-nsfc9s4xj-jtm67 subsys=ces-controller
2021-11-10T20:10:19.246920315Z level=debug msg="Generated CES" CESName=ces-ftgsbmvh7-xcwy9 subsys=ces-controller
2021-11-10T20:10:19.246923278Z level=debug msg="Generated CES" CESName=ces-hgdjbp2yw-74jqj subsys=ces-controller
2021-11-10T20:10:19.246926333Z level=debug msg="Generated CES" CESName=ces-nyffns54y-p746p subsys=ces-controller
2021-11-10T20:10:19.246945696Z level=debug msg="Generated CES" CESName=ces-mqjtwc9jl-ldfs4 subsys=ces-controller
2021-11-10T20:10:19.246949421Z level=debug msg="Successfully synced all CESs locally" subsys=ces-controller
2021-11-10T20:10:19.263572873Z level=debug msg="Queueing CEP in the CES" CEPName=prometheus-655fb888d7-wmpnd CESName=ces-mqjtwc9jl-ldfs4 subsys=ces-controller totalCEPCount=1
2021-11-10T20:10:19.263681188Z level=debug msg="Queueing CEP in the CES" CEPName=grafana-5747bcc8f9-v8qxd CESName=ces-n6rdftf2p-pzpgh subsys=ces-controller totalCEPCount=1
2021-11-10T20:10:19.263692274Z level=debug msg="Queueing CEP in the CES" CEPName=testclient-mxgsm CESName=ces-nsfc9s4xj-jtm67 subsys=ces-controller totalCEPCount=2
2021-11-10T20:10:19.263695142Z level=debug msg="Queueing CEP in the CES" CEPName=testds-lzndp CESName=ces-hgdjbp2yw-74jqj subsys=ces-controller totalCEPCount=2
2021-11-10T20:10:19.263726161Z level=debug msg="Queueing CEP in the CES" CEPName=testds-26c2j CESName=ces-hgdjbp2yw-74jqj subsys=ces-controller totalCEPCount=2
2021-11-10T20:10:19.263736187Z level=debug msg="Queueing CEP in the CES" CEPName=coredns-755cd654d4-xrhsm CESName=ces-nyffns54y-p746p subsys=ces-controller totalCEPCount=1
2021-11-10T20:10:19.263741148Z level=debug msg="Queueing CEP in the CES" CEPName=test-k8s2-7f96d84c65-h2r47 CESName=ces-ftgsbmvh7-xcwy9 subsys=ces-controller totalCEPCount=1
2021-11-10T20:10:19.263783761Z level=debug msg="Queueing CEP in the CES" CEPName=testclient-qqhnk CESName=ces-nsfc9s4xj-jtm67 subsys=ces-controller totalCEPCount=2

@aanm aanm merged commit 89ca3ed into cilium:master Nov 11, 2021
Labels: release-note/major (This PR introduces major new functionality to Cilium.)