CI: Smoke test: Timed out waiting for the condition on pods #12279

Closed

sayboras opened this issue Jun 25, 2020 · 20 comments

Labels: area/CI (Continuous Integration testing issue or flake), ci/flake (This is a known failure that occurs in the tree. Please investigate me!)
@sayboras (Member) commented Jun 25, 2020

CI failure

A failure occurred in the newly introduced GitHub Actions conformance test (added in #11888 and #12232). It's kind of rare based on my observations so far, and a simple re-run fixes it (the good thing is that the re-run doesn't take long). https://github.com/cilium/cilium/runs/802697197?check_suite_focus=true

However, the developer experience is not pleasant. It would be helpful if we could dump pod logs in the GitHub Action for quick analysis.


@sayboras sayboras added the area/CI Continuous Integration testing issue or flake label Jun 25, 2020
@pchaigno pchaigno changed the title CI: github action conformance test flanky CI: Smoke test: Timed out waiting for the condition on pods Jun 26, 2020
@pchaigno (Member) commented:

We could probably have a final step in the workflow to dump a bunch of logs to stdout in case of error.
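A final step could run commands like the following, guarded with `if: failure()` so it only fires when an earlier step failed. This is a minimal sketch, assuming the kube-system namespace and DaemonSet name used by the quick-install manifest; the actual step landed in the commits referenced below:

```bash
# Commands a failure-only workflow step could run to aid post-mortems:
# describe the cilium DaemonSet, dump its container logs, and list
# recent events across all namespaces.
kubectl -n kube-system describe daemonset cilium
kubectl -n kube-system logs daemonset/cilium --all-containers --since=30m
kubectl get events --all-namespaces --sort-by=.lastTimestamp
```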

sayboras added a commit to sayboras/cilium that referenced this issue Jun 27, 2020
Change IPAM to kubernetes instead of cluster-pool

Closes cilium#12279

Signed-off-by: Tam Mach <sayboras@yahoo.com>
@sayboras (Member, Author) commented:

> We could probably have a final step in the workflow to dump a bunch of logs to stdout in case of error.

Good point. Let me add a step to dump logs and events for the related services, so that we have more details to tackle this issue.

Currently, my hunch is that there is a race condition between local-path-storage and cilium, which causes the BPF mount to fail. I am not 100% confident though :)
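For what it's worth, that hunch can be probed directly, since kind nodes are just Docker containers. A rough sketch, assuming the default cluster name ("chart-testing") that helm/kind-action uses in this workflow:

```bash
# Check whether a bpf filesystem is mounted at /sys/fs/bpf inside each
# kind node by exec'ing into the node containers.
for node in $(kind get nodes --name chart-testing); do
  if docker exec "$node" sh -c 'mount -t bpf | grep -q /sys/fs/bpf'; then
    echo "$node: bpf fs mounted"
  else
    echo "$node: bpf fs NOT mounted"
  fi
done
```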

@sayboras (Member, Author) commented:

One more observation I have is that the conformance test (with helm) has never failed so far (maybe it might fail later :D). There are a few differences between the conformance test and the quick-install test, as below (see the helm sketch after the list):

  • ipam is set to kubernetes instead of cluster-pool
  • nodeinit is enabled
  • externalIPs is enabled
  • hostService is enabled
  • nodePort and hostPort are enabled
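In helm terms, the differences above correspond roughly to the flags below. This is an illustrative sketch against the chart of this era; the exact value names are not guaranteed:

```bash
# Approximate helm values behind the listed differences between the
# conformance (helm) install and the quick-install defaults.
helm template cilium --namespace kube-system \
  --set ipam.mode=kubernetes \
  --set nodeinit.enabled=true \
  --set externalIPs.enabled=true \
  --set hostServices.enabled=true \
  --set nodePort.enabled=true \
  --set hostPort.enabled=true
```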

Will narrow this down further once I have some free time. Any input or pointers are much appreciated 👍

sayboras added a commit to sayboras/cilium that referenced this issue Jun 29, 2020
…esting

Dump related logs and events for cilium, hubble and connectivity checks

Relates cilium#12279

Signed-off-by: Tam Mach <sayboras@yahoo.com>
aanm pushed a commit that referenced this issue Jun 29, 2020
…esting

Dump related logs and events for cilium, hubble and connectivity checks

Relates #12279

Signed-off-by: Tam Mach <sayboras@yahoo.com>
@sayboras (Member, Author) commented Jun 29, 2020

Failed in https://github.com/cilium/cilium/pull/12320/checks?check_run_id=818304498; I am dumping the logs below to make sure they are not lost.

Raw logs:
2020-06-29T12:45:09.7240127Z ##[section]Starting: Request a runner to run this job
2020-06-29T12:45:10.4830342Z Can't find any online and idle self-hosted runner in current repository that matches the required labels: 'ubuntu-latest'
2020-06-29T12:45:10.4830408Z Can't find any online and idle self-hosted runner in current repository's account/organization that matches the required labels: 'ubuntu-latest'
2020-06-29T12:45:10.4830439Z Found online and idle hosted runner in current repository's account/organization that matches the required labels: 'ubuntu-latest'
2020-06-29T12:45:10.6946034Z ##[section]Finishing: Request a runner to run this job
2020-06-29T12:45:16.5528195Z Current runner version: '2.263.0'
2020-06-29T12:45:16.5550493Z ##[group]Operating System
2020-06-29T12:45:16.5551043Z Ubuntu
2020-06-29T12:45:16.5551234Z 18.04.4
2020-06-29T12:45:16.5551420Z LTS
2020-06-29T12:45:16.5551628Z ##[endgroup]
2020-06-29T12:45:16.5551814Z ##[group]Virtual Environment
2020-06-29T12:45:16.5551970Z Environment: ubuntu-18.04
2020-06-29T12:45:16.5552145Z Version: 20200621.1
2020-06-29T12:45:16.5552393Z Included Software: https://github.com/actions/virtual-environments/blob/ubuntu18/20200621.1/images/linux/Ubuntu1804-README.md
2020-06-29T12:45:16.5552640Z ##[endgroup]
2020-06-29T12:45:16.5553488Z Prepare workflow directory
2020-06-29T12:45:16.5748702Z Prepare all required actions
2020-06-29T12:45:16.5758609Z Download action repository 'actions/checkout@v2'
2020-06-29T12:45:18.6021607Z Download action repository 'helm/kind-action@v1.0.0-rc.1'
2020-06-29T12:45:18.9471846Z ##[group]Run actions/checkout@v2
2020-06-29T12:45:18.9472204Z with:
2020-06-29T12:45:18.9472399Z   repository: cilium/cilium
2020-06-29T12:45:18.9472763Z   token: ***
2020-06-29T12:45:18.9472913Z   ssh-strict: true
2020-06-29T12:45:18.9473006Z   persist-credentials: true
2020-06-29T12:45:18.9473162Z   clean: true
2020-06-29T12:45:18.9473298Z   fetch-depth: 1
2020-06-29T12:45:18.9473438Z   lfs: false
2020-06-29T12:45:18.9473572Z   submodules: false
2020-06-29T12:45:18.9473740Z env:
2020-06-29T12:45:18.9473881Z   KIND_VERSION: v0.8.1
2020-06-29T12:45:18.9473976Z   KIND_CONFIG: .github/kind-config.yaml
2020-06-29T12:45:18.9474134Z   CONFORMANCE_TEMPLATE: examples/kubernetes/connectivity-check/connectivity-check.yaml
2020-06-29T12:45:18.9474294Z ##[endgroup]
2020-06-29T12:45:19.5875546Z Syncing repository: cilium/cilium
2020-06-29T12:45:19.5876008Z ##[group]Getting Git version info
2020-06-29T12:45:19.5876877Z Working directory is '/home/runner/work/cilium/cilium'
2020-06-29T12:45:19.5877219Z [command]/usr/bin/git version
2020-06-29T12:45:19.5877432Z git version 2.27.0
2020-06-29T12:45:19.5878064Z ##[endgroup]
2020-06-29T12:45:19.5878503Z Deleting the contents of '/home/runner/work/cilium/cilium'
2020-06-29T12:45:19.5879745Z ##[group]Initializing the repository
2020-06-29T12:45:19.5879952Z [command]/usr/bin/git init /home/runner/work/cilium/cilium
2020-06-29T12:45:19.5880124Z Initialized empty Git repository in /home/runner/work/cilium/cilium/.git/
2020-06-29T12:45:19.5880332Z [command]/usr/bin/git remote add origin https://github.com/cilium/cilium
2020-06-29T12:45:19.5880556Z ##[endgroup]
2020-06-29T12:45:19.5880726Z ##[group]Disabling automatic garbage collection
2020-06-29T12:45:19.5881173Z [command]/usr/bin/git config --local gc.auto 0
2020-06-29T12:45:19.5881366Z ##[endgroup]
2020-06-29T12:45:19.5883737Z ##[group]Setting up auth
2020-06-29T12:45:19.5884611Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2020-06-29T12:45:19.5885215Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :
2020-06-29T12:45:19.5885794Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2020-06-29T12:45:19.5886477Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :
2020-06-29T12:45:19.5887411Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic ***
2020-06-29T12:45:19.5887770Z ##[endgroup]
2020-06-29T12:45:19.5888013Z ##[group]Fetching the repository
2020-06-29T12:45:19.5888936Z [command]/usr/bin/git -c protocol.version=2 fetch --no-tags --prune --progress --no-recurse-submodules --depth=1 origin +ea24b4d304d18e2c5131e836c434f9a35a9628a4:refs/remotes/pull/12320/merge
2020-06-29T12:45:19.8052581Z remote: Enumerating objects: 10502, done.        
2020-06-29T12:45:19.9096452Z remote: Counting objects: 100% (10502/10502), done.        
2020-06-29T12:45:21.5079189Z remote: Compressing objects: 100% (8335/8335), done.        
2020-06-29T12:45:22.8165399Z remote: Total 10502 (delta 2765), reused 5326 (delta 1609), pack-reused 0        
2020-06-29T12:45:22.8174248Z Receiving objects: 100% (10502/10502), 27.73 MiB | 21.27 MiB/s, done.
2020-06-29T12:45:22.9177542Z Resolving deltas: 100% (2765/2765), done.
2020-06-29T12:45:23.3203689Z From https://github.com/cilium/cilium
2020-06-29T12:45:23.3204516Z  * [new ref]         ea24b4d304d18e2c5131e836c434f9a35a9628a4 -> pull/12320/merge
2020-06-29T12:45:23.3225606Z ##[endgroup]
2020-06-29T12:45:23.3225820Z ##[group]Determining the checkout info
2020-06-29T12:45:23.3228723Z ##[endgroup]
2020-06-29T12:45:23.3228875Z ##[group]Checking out the ref
2020-06-29T12:45:23.3234539Z [command]/usr/bin/git checkout --progress --force refs/remotes/pull/12320/merge
2020-06-29T12:45:24.0177811Z Note: switching to 'refs/remotes/pull/12320/merge'.
2020-06-29T12:45:24.0177935Z 
2020-06-29T12:45:24.0178399Z You are in 'detached HEAD' state. You can look around, make experimental
2020-06-29T12:45:24.0178535Z changes and commit them, and you can discard any commits you make in this
2020-06-29T12:45:24.0178662Z state without impacting any branches by switching back to a branch.
2020-06-29T12:45:24.0178725Z 
2020-06-29T12:45:24.0178860Z If you want to create a new branch to retain commits you create, you may
2020-06-29T12:45:24.0179150Z do so (now or later) by using -c with the switch command. Example:
2020-06-29T12:45:24.0179433Z 
2020-06-29T12:45:24.0179696Z   git switch -c <new-branch-name>
2020-06-29T12:45:24.0179767Z 
2020-06-29T12:45:24.0179868Z Or undo this operation with:
2020-06-29T12:45:24.0179921Z 
2020-06-29T12:45:24.0180375Z   git switch -
2020-06-29T12:45:24.0180433Z 
2020-06-29T12:45:24.0180728Z Turn off this advice by setting config variable advice.detachedHead to false
2020-06-29T12:45:24.0181077Z 
2020-06-29T12:45:24.0181202Z HEAD is now at ea24b4d Merge f756b452a58b25cc2c10198969ba20d762b18975 into 0490aede09655b7f7534b503b5be5ae66639f1e0
2020-06-29T12:45:24.0195543Z ##[endgroup]
2020-06-29T12:45:24.0199805Z [command]/usr/bin/git log -1
2020-06-29T12:45:24.0241656Z commit ea24b4d304d18e2c5131e836c434f9a35a9628a4
2020-06-29T12:45:24.0272912Z Author: Robin Hahling <robin.hahling@gw-computing.net>
2020-06-29T12:45:24.0273410Z Date:   Mon Jun 29 12:45:03 2020 +0000
2020-06-29T12:45:24.0273962Z 
2020-06-29T12:45:24.0274283Z     Merge f756b452a58b25cc2c10198969ba20d762b18975 into 0490aede09655b7f7534b503b5be5ae66639f1e0
2020-06-29T12:45:24.0379547Z ##[group]Run make -C install/kubernetes quick-install
2020-06-29T12:45:24.0379764Z make -C install/kubernetes quick-install
2020-06-29T12:45:24.0379876Z git diff --exit-code
2020-06-29T12:45:24.0430040Z shell: /bin/bash -e {0}
2020-06-29T12:45:24.0430159Z env:
2020-06-29T12:45:24.0430450Z   KIND_VERSION: v0.8.1
2020-06-29T12:45:24.0430732Z   KIND_CONFIG: .github/kind-config.yaml
2020-06-29T12:45:24.0430864Z   CONFORMANCE_TEMPLATE: examples/kubernetes/connectivity-check/connectivity-check.yaml
2020-06-29T12:45:24.0431176Z ##[endgroup]
2020-06-29T12:45:24.0531593Z make: Entering directory '/home/runner/work/cilium/cilium/install/kubernetes'
2020-06-29T12:45:24.1917189Z helm template cilium --namespace=kube-system  > "/home/runner/work/cilium/cilium/install/kubernetes/quick-install.yaml"
2020-06-29T12:45:27.2558210Z make: Leaving directory '/home/runner/work/cilium/cilium/install/kubernetes'
2020-06-29T12:45:27.2918834Z ##[group]Run helm/kind-action@v1.0.0-rc.1
2020-06-29T12:45:27.2919001Z with:
2020-06-29T12:45:27.2919086Z   version: v0.8.1
2020-06-29T12:45:27.2919195Z   config: .github/kind-config.yaml
2020-06-29T12:45:27.2919296Z env:
2020-06-29T12:45:27.2919398Z   KIND_VERSION: v0.8.1
2020-06-29T12:45:27.2919498Z   KIND_CONFIG: .github/kind-config.yaml
2020-06-29T12:45:27.2919615Z   CONFORMANCE_TEMPLATE: examples/kubernetes/connectivity-check/connectivity-check.yaml
2020-06-29T12:45:27.2919727Z ##[endgroup]
2020-06-29T12:45:27.3309253Z Installing kind...
2020-06-29T12:45:27.6278465Z Installing kubectl...
2020-06-29T12:45:27.9245796Z Creating kind cluster...
2020-06-29T12:45:28.3509056Z Creating cluster "chart-testing" ...
2020-06-29T12:45:28.3510169Z  • Ensuring node image (kindest/node:v1.18.2) 🖼  ...
2020-06-29T12:45:53.7342654Z  ✓ Ensuring node image (kindest/node:v1.18.2) 🖼
2020-06-29T12:45:53.7343290Z  • Preparing nodes 📦 📦   ...
2020-06-29T12:46:27.6018567Z  ✓ Preparing nodes 📦 📦 
2020-06-29T12:46:27.7130868Z  • Writing configuration 📜  ...
2020-06-29T12:46:28.8581021Z  ✓ Writing configuration 📜
2020-06-29T12:46:28.8581333Z  • Starting control-plane 🕹️  ...
2020-06-29T12:46:51.2111253Z  ✓ Starting control-plane 🕹️
2020-06-29T12:46:51.2112130Z  • Installing StorageClass 💾  ...
2020-06-29T12:46:51.8456274Z  ✓ Installing StorageClass 💾
2020-06-29T12:46:52.0130520Z  • Joining worker nodes 🚜  ...
2020-06-29T12:47:24.7922651Z  ✓ Joining worker nodes 🚜
2020-06-29T12:47:24.7923155Z  • Waiting ≤ 1m0s for control-plane = Ready ⏳  ...
2020-06-29T12:48:24.8909762Z  ✗ Waiting ≤ 1m0s for control-plane = Ready ⏳
2020-06-29T12:48:24.8911090Z  • WARNING: Timed out waiting for Ready ⚠️
2020-06-29T12:48:25.2319471Z Set kubectl context to "kind-chart-testing"
2020-06-29T12:48:25.2320312Z You can now use your cluster with:
2020-06-29T12:48:25.2320438Z 
2020-06-29T12:48:25.2320782Z kubectl cluster-info --context kind-chart-testing
2020-06-29T12:48:25.2320969Z 
2020-06-29T12:48:25.2321663Z Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/
2020-06-29T12:48:25.2410005Z ##[group]Run kubectl apply -f install/kubernetes/quick-install.yaml
2020-06-29T12:48:25.2410224Z kubectl apply -f install/kubernetes/quick-install.yaml
2020-06-29T12:48:25.2410354Z kubectl wait -n kube-system --for=condition=Ready --all pod --timeout=5m
2020-06-29T12:48:25.2410482Z # To make sure that cilium CRD is available (default timeout is 5m)
2020-06-29T12:48:25.2410642Z # https://github.com/cilium/cilium/blob/master/operator/crd.go#L34
2020-06-29T12:48:25.2410968Z kubectl wait --for condition=Established crd/ciliumnetworkpolicies.cilium.io --timeout=5m
2020-06-29T12:48:25.2462307Z shell: /bin/bash -e {0}
2020-06-29T12:48:25.2462425Z env:
2020-06-29T12:48:25.2462546Z   KIND_VERSION: v0.8.1
2020-06-29T12:48:25.2462642Z   KIND_CONFIG: .github/kind-config.yaml
2020-06-29T12:48:25.2462772Z   CONFORMANCE_TEMPLATE: examples/kubernetes/connectivity-check/connectivity-check.yaml
2020-06-29T12:48:25.2463415Z ##[endgroup]
2020-06-29T12:48:25.5959386Z serviceaccount/cilium created
2020-06-29T12:48:25.6053633Z serviceaccount/cilium-operator created
2020-06-29T12:48:25.6187020Z configmap/cilium-config created
2020-06-29T12:48:25.6230115Z clusterrole.rbac.authorization.k8s.io/cilium created
2020-06-29T12:48:25.6278206Z clusterrole.rbac.authorization.k8s.io/cilium-operator created
2020-06-29T12:48:25.6321661Z clusterrolebinding.rbac.authorization.k8s.io/cilium created
2020-06-29T12:48:25.6362757Z clusterrolebinding.rbac.authorization.k8s.io/cilium-operator created
2020-06-29T12:48:25.6460882Z daemonset.apps/cilium created
2020-06-29T12:48:25.6645792Z deployment.apps/cilium-operator created
2020-06-29T12:49:00.5308260Z pod/cilium-operator-6d9b895c9c-dpm58 condition met
2020-06-29T12:49:30.2743226Z pod/cilium-p4rdx condition met
2020-06-29T12:49:30.2830937Z pod/cilium-v6mhm condition met
2020-06-29T12:49:30.2869297Z pod/coredns-66bff467f8-5pmv4 condition met
2020-06-29T12:49:30.2903562Z pod/coredns-66bff467f8-zlwn9 condition met
2020-06-29T12:49:30.2944363Z pod/etcd-chart-testing-control-plane condition met
2020-06-29T12:49:30.3000215Z pod/kube-apiserver-chart-testing-control-plane condition met
2020-06-29T12:49:30.3044123Z pod/kube-controller-manager-chart-testing-control-plane condition met
2020-06-29T12:49:30.3069028Z pod/kube-proxy-rb2xx condition met
2020-06-29T12:49:30.3094256Z pod/kube-proxy-zqkw2 condition met
2020-06-29T12:49:30.3115007Z pod/kube-scheduler-chart-testing-control-plane condition met
2020-06-29T12:49:30.4410493Z customresourcedefinition.apiextensions.k8s.io/ciliumnetworkpolicies.cilium.io condition met
2020-06-29T12:49:30.4487023Z ##[group]Run kubectl apply -f examples/kubernetes/connectivity-check/connectivity-check.yaml
2020-06-29T12:49:30.4487309Z kubectl apply -f examples/kubernetes/connectivity-check/connectivity-check.yaml
2020-06-29T12:49:30.4487459Z kubectl wait --for=condition=Available --all deployment --timeout=5m
2020-06-29T12:49:30.4539292Z shell: /bin/bash -e {0}
2020-06-29T12:49:30.4539416Z env:
2020-06-29T12:49:30.4539526Z   KIND_VERSION: v0.8.1
2020-06-29T12:49:30.4539632Z   KIND_CONFIG: .github/kind-config.yaml
2020-06-29T12:49:30.4539755Z   CONFORMANCE_TEMPLATE: examples/kubernetes/connectivity-check/connectivity-check.yaml
2020-06-29T12:49:30.4539875Z ##[endgroup]
2020-06-29T12:49:30.7005961Z service/echo-a created
2020-06-29T12:49:30.7189290Z deployment.apps/echo-a created
2020-06-29T12:49:30.7720926Z service/echo-b created
2020-06-29T12:49:30.8095961Z service/echo-b-headless created
2020-06-29T12:49:30.8305347Z deployment.apps/echo-b created
2020-06-29T12:49:30.8538039Z deployment.apps/echo-b-host created
2020-06-29T12:49:30.9199928Z service/echo-b-host-headless created
2020-06-29T12:49:30.9498144Z deployment.apps/host-to-b-multi-node-clusterip created
2020-06-29T12:49:30.9746369Z deployment.apps/host-to-b-multi-node-headless created
2020-06-29T12:49:31.0090966Z deployment.apps/pod-to-a-allowed-cnp created
2020-06-29T12:49:31.6371013Z ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
2020-06-29T12:49:31.6666768Z deployment.apps/pod-to-a-l3-denied-cnp created
2020-06-29T12:49:31.7651315Z ciliumnetworkpolicy.cilium.io/pod-to-a-l3-denied-cnp created
2020-06-29T12:49:31.8745534Z deployment.apps/pod-to-a created
2020-06-29T12:49:31.9233461Z deployment.apps/pod-to-b-intra-node-nodeport created
2020-06-29T12:49:31.9679446Z deployment.apps/pod-to-b-intra-node created
2020-06-29T12:49:32.0481675Z deployment.apps/pod-to-b-multi-node-clusterip created
2020-06-29T12:49:32.1497632Z deployment.apps/pod-to-b-multi-node-headless created
2020-06-29T12:49:32.2084610Z deployment.apps/pod-to-b-multi-node-nodeport created
2020-06-29T12:49:32.2588593Z deployment.apps/pod-to-a-external-1111 created
2020-06-29T12:49:32.2761799Z deployment.apps/pod-to-external-fqdn-allow-google-cnp created
2020-06-29T12:49:32.2940508Z ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created
2020-06-29T12:49:55.0739919Z deployment.apps/echo-a condition met
2020-06-29T12:49:55.0872101Z deployment.apps/echo-b condition met
2020-06-29T12:49:55.0962686Z deployment.apps/echo-b-host condition met
2020-06-29T12:49:55.1031114Z deployment.apps/host-to-b-multi-node-clusterip condition met
2020-06-29T12:49:55.1073516Z deployment.apps/host-to-b-multi-node-headless condition met
2020-06-29T12:49:55.1133843Z deployment.apps/pod-to-a condition met
2020-06-29T12:54:55.1285674Z deployment.apps/pod-to-a-external-1111 condition met
2020-06-29T12:54:55.1306337Z deployment.apps/pod-to-a-l3-denied-cnp condition met
2020-06-29T12:54:55.1404837Z deployment.apps/pod-to-b-intra-node condition met
2020-06-29T12:59:55.1470001Z deployment.apps/pod-to-b-multi-node-clusterip condition met
2020-06-29T12:59:55.1494564Z deployment.apps/pod-to-b-multi-node-headless condition met
2020-06-29T13:09:55.1609281Z timed out waiting for the condition on deployments/pod-to-a-allowed-cnp
2020-06-29T13:09:55.1609732Z timed out waiting for the condition on deployments/pod-to-b-intra-node-nodeport
2020-06-29T13:09:55.1610078Z timed out waiting for the condition on deployments/pod-to-b-multi-node-nodeport
2020-06-29T13:09:55.1610433Z timed out waiting for the condition on deployments/pod-to-external-fqdn-allow-google-cnp
2020-06-29T13:09:55.1622464Z ##[error]Process completed with exit code 1.
2020-06-29T13:09:55.1654041Z ##[group]Run kubectl -n kube-system describe daemonsets.apps cilium
2020-06-29T13:09:55.1654268Z kubectl -n kube-system describe daemonsets.apps cilium
2020-06-29T13:09:55.1654372Z kubectl -n kube-system logs daemonset/cilium --all-containers --since=$LOG_TIME
2020-06-29T13:09:55.1709999Z shell: /bin/bash -e {0}
2020-06-29T13:09:55.1710107Z env:
2020-06-29T13:09:55.1710192Z   KIND_VERSION: v0.8.1
2020-06-29T13:09:55.1710437Z   KIND_CONFIG: .github/kind-config.yaml
2020-06-29T13:09:55.1710549Z   CONFORMANCE_TEMPLATE: examples/kubernetes/connectivity-check/connectivity-check.yaml
2020-06-29T13:09:55.1710660Z   LOG_TIME: 30m
2020-06-29T13:09:55.1710753Z ##[endgroup]
2020-06-29T13:09:55.3610284Z Name:           cilium
2020-06-29T13:09:55.3611935Z Selector:       k8s-app=cilium
2020-06-29T13:09:55.3613782Z Node-Selector:  <none>
2020-06-29T13:09:55.3615095Z Labels:         k8s-app=cilium
2020-06-29T13:09:55.3616014Z Annotations:    deprecated.daemonset.template.generation: 1
2020-06-29T13:09:55.3616881Z                 kubectl.kubernetes.io/last-applied-configuration:
2020-06-29T13:09:55.3617813Z                   {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"k8s-app":"cilium"},"name":"cilium","namespace":"kube-sy...
2020-06-29T13:09:55.3618122Z Desired Number of Nodes Scheduled: 2
2020-06-29T13:09:55.3618757Z Current Number of Nodes Scheduled: 2
2020-06-29T13:09:55.3619609Z Number of Nodes Scheduled with Up-to-date Pods: 2
2020-06-29T13:09:55.3619830Z Number of Nodes Scheduled with Available Pods: 2
2020-06-29T13:09:55.3620589Z Number of Nodes Misscheduled: 0
2020-06-29T13:09:55.3621269Z Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
2020-06-29T13:09:55.3622274Z Pod Template:
2020-06-29T13:09:55.3624128Z   Labels:           k8s-app=cilium
2020-06-29T13:09:55.3625022Z   Annotations:      scheduler.alpha.kubernetes.io/critical-pod: 
2020-06-29T13:09:55.3625609Z   Service Account:  cilium
2020-06-29T13:09:55.3625808Z   Init Containers:
2020-06-29T13:09:55.3626101Z    clean-cilium-state:
2020-06-29T13:09:55.3626444Z     Image:      docker.io/cilium/cilium:latest
2020-06-29T13:09:55.3626597Z     Port:       <none>
2020-06-29T13:09:55.3626733Z     Host Port:  <none>
2020-06-29T13:09:55.3626853Z     Command:
2020-06-29T13:09:55.3627425Z       /init-container.sh
2020-06-29T13:09:55.3627736Z     Requests:
2020-06-29T13:09:55.3628101Z       cpu:     100m
2020-06-29T13:09:55.3628377Z       memory:  100Mi
2020-06-29T13:09:55.3628558Z     Environment:
2020-06-29T13:09:55.3629213Z       CILIUM_ALL_STATE:       <set to the key 'clean-cilium-state' of config map 'cilium-config'>      Optional: true
2020-06-29T13:09:55.3630656Z       CILIUM_BPF_STATE:       <set to the key 'clean-cilium-bpf-state' of config map 'cilium-config'>  Optional: true
2020-06-29T13:09:55.3641694Z       CILIUM_WAIT_BPF_MOUNT:  <set to the key 'wait-bpf-mount' of config map 'cilium-config'>          Optional: true
2020-06-29T13:09:55.3641918Z     Mounts:
2020-06-29T13:09:55.3642770Z       /sys/fs/bpf from bpf-maps (rw)
2020-06-29T13:09:55.3643242Z       /var/run/cilium from cilium-run (rw)
2020-06-29T13:09:55.3643326Z   Containers:
2020-06-29T13:09:55.3643552Z    cilium-agent:
2020-06-29T13:09:55.3643740Z     Image:      docker.io/cilium/cilium:latest
2020-06-29T13:09:55.3643865Z     Port:       <none>
2020-06-29T13:09:55.3643985Z     Host Port:  <none>
2020-06-29T13:09:55.3644115Z     Command:
2020-06-29T13:09:55.3644354Z       cilium-agent
2020-06-29T13:09:55.3644433Z     Args:
2020-06-29T13:09:55.3644673Z       --config-dir=/tmp/cilium/config-map
2020-06-29T13:09:55.3645070Z     Liveness:   http-get http://127.0.0.1:9876/healthz delay=120s timeout=5s period=30s #success=1 #failure=10
2020-06-29T13:09:55.3645460Z     Readiness:  http-get http://127.0.0.1:9876/healthz delay=5s timeout=5s period=30s #success=1 #failure=3
2020-06-29T13:09:55.3645650Z     Environment:
2020-06-29T13:09:55.3645780Z       K8S_NODE_NAME:                      (v1:spec.nodeName)
2020-06-29T13:09:55.3646097Z       CILIUM_K8S_NAMESPACE:               (v1:metadata.namespace)
2020-06-29T13:09:55.3646490Z       CILIUM_FLANNEL_MASTER_DEVICE:      <set to the key 'flannel-master-device' of config map 'cilium-config'>      Optional: true
2020-06-29T13:09:55.3646828Z       CILIUM_FLANNEL_UNINSTALL_ON_EXIT:  <set to the key 'flannel-uninstall-on-exit' of config map 'cilium-config'>  Optional: true
2020-06-29T13:09:55.3647110Z       CILIUM_CLUSTERMESH_CONFIG:         /var/lib/cilium/clustermesh/
2020-06-29T13:09:55.3647395Z       CILIUM_CNI_CHAINING_MODE:          <set to the key 'cni-chaining-mode' of config map 'cilium-config'>  Optional: true
2020-06-29T13:09:55.3647731Z       CILIUM_CUSTOM_CNI_CONF:            <set to the key 'custom-cni-conf' of config map 'cilium-config'>    Optional: true
2020-06-29T13:09:55.3647881Z     Mounts:
2020-06-29T13:09:55.3648172Z       /host/etc/cni/net.d from etc-cni-netd (rw)
2020-06-29T13:09:55.3648428Z       /host/opt/cni/bin from cni-path (rw)
2020-06-29T13:09:55.3648690Z       /lib/modules from lib-modules (ro)
2020-06-29T13:09:55.3648944Z       /run/xtables.lock from xtables-lock (rw)
2020-06-29T13:09:55.3649190Z       /sys/fs/bpf from bpf-maps (rw)
2020-06-29T13:09:55.3649407Z       /tmp/cilium/config-map from cilium-config-path (ro)
2020-06-29T13:09:55.3649682Z       /var/lib/cilium/clustermesh from clustermesh-secrets (ro)
2020-06-29T13:09:55.3649939Z       /var/run/cilium from cilium-run (rw)
2020-06-29T13:09:55.3650066Z   Volumes:
2020-06-29T13:09:55.3650350Z    cilium-run:
2020-06-29T13:09:55.3650499Z     Type:          HostPath (bare host directory volume)
2020-06-29T13:09:55.3650638Z     Path:          /var/run/cilium
2020-06-29T13:09:55.3650766Z     HostPathType:  DirectoryOrCreate
2020-06-29T13:09:55.3651072Z    bpf-maps:
2020-06-29T13:09:55.3651215Z     Type:          HostPath (bare host directory volume)
2020-06-29T13:09:55.3651344Z     Path:          /sys/fs/bpf
2020-06-29T13:09:55.3651470Z     HostPathType:  DirectoryOrCreate
2020-06-29T13:09:55.3651709Z    cni-path:
2020-06-29T13:09:55.3651879Z     Type:          HostPath (bare host directory volume)
2020-06-29T13:09:55.3652009Z     Path:          /opt/cni/bin
2020-06-29T13:09:55.3652093Z     HostPathType:  DirectoryOrCreate
2020-06-29T13:09:55.3652332Z    etc-cni-netd:
2020-06-29T13:09:55.3652468Z     Type:          HostPath (bare host directory volume)
2020-06-29T13:09:55.3652598Z     Path:          /etc/cni/net.d
2020-06-29T13:09:55.3652724Z     HostPathType:  DirectoryOrCreate
2020-06-29T13:09:55.3652955Z    lib-modules:
2020-06-29T13:09:55.3653086Z     Type:          HostPath (bare host directory volume)
2020-06-29T13:09:55.3653173Z     Path:          /lib/modules
2020-06-29T13:09:55.3653335Z     HostPathType:  
2020-06-29T13:09:55.3653576Z    xtables-lock:
2020-06-29T13:09:55.3653717Z     Type:          HostPath (bare host directory volume)
2020-06-29T13:09:55.3653847Z     Path:          /run/xtables.lock
2020-06-29T13:09:55.3653973Z     HostPathType:  FileOrCreate
2020-06-29T13:09:55.3654215Z    clustermesh-secrets:
2020-06-29T13:09:55.3654303Z     Type:        Secret (a volume populated by a Secret)
2020-06-29T13:09:55.3654553Z     SecretName:  cilium-clustermesh
2020-06-29T13:09:55.3654694Z     Optional:    true
2020-06-29T13:09:55.3654948Z    cilium-config-path:
2020-06-29T13:09:55.3655086Z     Type:               ConfigMap (a volume populated by a ConfigMap)
2020-06-29T13:09:55.3655342Z     Name:               cilium-config
2020-06-29T13:09:55.3655470Z     Optional:           false
2020-06-29T13:09:55.3655720Z   Priority Class Name:  system-node-critical
2020-06-29T13:09:55.3655804Z Events:
2020-06-29T13:09:55.3655932Z   Type    Reason            Age   From                  Message
2020-06-29T13:09:55.3656331Z   ----    ------            ----  ----                  -------
2020-06-29T13:09:55.3656667Z   Normal  SuccessfulCreate  21m   daemonset-controller  Created pod: cilium-v6mhm
2020-06-29T13:09:55.3656993Z   Normal  SuccessfulCreate  21m   daemonset-controller  Created pod: cilium-p4rdx
2020-06-29T13:09:55.4495296Z Found 2 pods, using pod/cilium-v6mhm
2020-06-29T13:09:55.4632840Z level=info msg="Skipped reading configuration file" reason="Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=config
2020-06-29T13:09:55.4633449Z level=info msg="Memory available for map entries (0.003% of 7263739904B): 18159349B" subsys=config
2020-06-29T13:09:55.4634394Z level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 131072" subsys=config
2020-06-29T13:09:55.4635255Z level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 65536" subsys=config
2020-06-29T13:09:55.4636125Z level=info msg="option bpf-nat-global-max set by dynamic sizing to 131072" subsys=config
2020-06-29T13:09:55.4637032Z level=info msg="option bpf-neigh-global-max set by dynamic sizing to 131072" subsys=config
2020-06-29T13:09:55.4637846Z level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 65536" subsys=config
2020-06-29T13:09:55.4638520Z level=info msg="  --agent-health-port='9876'" subsys=daemon
2020-06-29T13:09:55.4639105Z level=info msg="  --agent-labels=''" subsys=daemon
2020-06-29T13:09:55.4641039Z level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
2020-06-29T13:09:55.4641465Z level=info msg="  --allow-localhost='auto'" subsys=daemon
2020-06-29T13:09:55.4641795Z level=info msg="  --annotate-k8s-node='true'" subsys=daemon
2020-06-29T13:09:55.4642478Z level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
2020-06-29T13:09:55.4642840Z level=info msg="  --auto-direct-node-routes='false'" subsys=daemon
2020-06-29T13:09:55.4643161Z level=info msg="  --blacklist-conflicting-routes='true'" subsys=daemon
2020-06-29T13:09:55.4643490Z level=info msg="  --bpf-compile-debug='false'" subsys=daemon
2020-06-29T13:09:55.4644070Z level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
2020-06-29T13:09:55.4644385Z level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
2020-06-29T13:09:55.4644706Z level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
2020-06-29T13:09:55.4644973Z level=info msg="  --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
2020-06-29T13:09:55.4645293Z level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
2020-06-29T13:09:55.4645682Z level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
2020-06-29T13:09:55.4645999Z level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
2020-06-29T13:09:55.4646493Z level=info msg="  --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
2020-06-29T13:09:55.4646792Z level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
2020-06-29T13:09:55.4647099Z level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
2020-06-29T13:09:55.4647402Z level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
2020-06-29T13:09:55.4647711Z level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
2020-06-29T13:09:55.4647959Z level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
2020-06-29T13:09:55.4648247Z level=info msg="  --bpf-root=''" subsys=daemon
2020-06-29T13:09:55.4648580Z level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
2020-06-29T13:09:55.4648904Z level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
2020-06-29T13:09:55.4649218Z level=info msg="  --cgroup-root=''" subsys=daemon
2020-06-29T13:09:55.4649495Z level=info msg="  --cluster-id='0'" subsys=daemon
2020-06-29T13:09:55.4649782Z level=info msg="  --cluster-name='default'" subsys=daemon
2020-06-29T13:09:55.4650096Z level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
2020-06-29T13:09:55.4650337Z level=info msg="  --cmdref=''" subsys=daemon
2020-06-29T13:09:55.4650613Z level=info msg="  --config=''" subsys=daemon
2020-06-29T13:09:55.4650918Z level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
2020-06-29T13:09:55.4651248Z level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
2020-06-29T13:09:55.4651556Z level=info msg="  --datapath-mode='veth'" subsys=daemon
2020-06-29T13:09:55.4651932Z level=info msg="  --debug='false'" subsys=daemon
2020-06-29T13:09:55.4652254Z level=info msg="  --debug-verbose=''" subsys=daemon
2020-06-29T13:09:55.4652539Z level=info msg="  --device=''" subsys=daemon
2020-06-29T13:09:55.4652768Z level=info msg="  --devices=''" subsys=daemon
2020-06-29T13:09:55.4653062Z level=info msg="  --direct-routing-device=''" subsys=daemon
2020-06-29T13:09:55.4653362Z level=info msg="  --disable-cnp-status-updates='true'" subsys=daemon
2020-06-29T13:09:55.4653658Z level=info msg="  --disable-conntrack='false'" subsys=daemon
2020-06-29T13:09:55.4654006Z level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
2020-06-29T13:09:55.4654302Z level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
2020-06-29T13:09:55.4654616Z level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
2020-06-29T13:09:55.4654904Z level=info msg="  --disable-ipv4='false'" subsys=daemon
2020-06-29T13:09:55.4655201Z level=info msg="  --disable-k8s-services='false'" subsys=daemon
2020-06-29T13:09:55.4655451Z level=info msg="  --egress-masquerade-interfaces=''" subsys=daemon
2020-06-29T13:09:55.4655762Z level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
2020-06-29T13:09:55.4656062Z level=info msg="  --enable-bpf-clock-probe='true'" subsys=daemon
2020-06-29T13:09:55.4656353Z level=info msg="  --enable-bpf-masquerade='true'" subsys=daemon
2020-06-29T13:09:55.4656702Z level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
2020-06-29T13:09:55.4657002Z level=info msg="  --enable-endpoint-routes='false'" subsys=daemon
2020-06-29T13:09:55.4657295Z level=info msg="  --enable-external-ips='true'" subsys=daemon
2020-06-29T13:09:55.4657586Z level=info msg="  --enable-health-checking='true'" subsys=daemon
2020-06-29T13:09:55.4663881Z level=info msg="  --enable-host-firewall='false'" subsys=daemon
2020-06-29T13:09:55.4664791Z level=info msg="  --enable-host-port='true'" subsys=daemon
2020-06-29T13:09:55.4665887Z level=info msg="  --enable-host-reachable-services='false'" subsys=daemon
2020-06-29T13:09:55.4667002Z level=info msg="  --enable-hubble='false'" subsys=daemon
2020-06-29T13:09:55.4668944Z level=info msg="  --enable-identity-mark='true'" subsys=daemon
2020-06-29T13:09:55.4670201Z level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
2020-06-29T13:09:55.4671724Z level=info msg="  --enable-ipsec='false'" subsys=daemon
2020-06-29T13:09:55.4674144Z level=info msg="  --enable-ipv4='true'" subsys=daemon
2020-06-29T13:09:55.4675721Z level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
2020-06-29T13:09:55.4677225Z level=info msg="  --enable-ipv6='false'" subsys=daemon
2020-06-29T13:09:55.4678135Z level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
2020-06-29T13:09:55.4681714Z level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
2020-06-29T13:09:55.4682989Z level=info msg="  --enable-k8s-event-handover='false'" subsys=daemon
2020-06-29T13:09:55.4684455Z level=info msg="  --enable-l7-proxy='true'" subsys=daemon
2020-06-29T13:09:55.4685562Z level=info msg="  --enable-local-node-route='true'" subsys=daemon
2020-06-29T13:09:55.4686546Z level=info msg="  --enable-node-port='false'" subsys=daemon
2020-06-29T13:09:55.4687864Z level=info msg="  --enable-policy='default'" subsys=daemon
2020-06-29T13:09:55.4689317Z level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
2020-06-29T13:09:55.4690168Z level=info msg="  --enable-selective-regeneration='true'" subsys=daemon
2020-06-29T13:09:55.4691746Z level=info msg="  --enable-session-affinity='true'" subsys=daemon
2020-06-29T13:09:55.4694357Z level=info msg="  --enable-tracing='false'" subsys=daemon
2020-06-29T13:09:55.4696099Z level=info msg="  --enable-well-known-identities='false'" subsys=daemon
2020-06-29T13:09:55.4697307Z level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
2020-06-29T13:09:55.4698062Z level=info msg="  --encrypt-interface=''" subsys=daemon
2020-06-29T13:09:55.4698634Z level=info msg="  --encrypt-node='false'" subsys=daemon
2020-06-29T13:09:55.4699260Z level=info msg="  --endpoint-interface-name-prefix='lxc+'" subsys=daemon
2020-06-29T13:09:55.4699596Z level=info msg="  --endpoint-queue-size='25'" subsys=daemon
2020-06-29T13:09:55.4699923Z level=info msg="  --endpoint-status=''" subsys=daemon
2020-06-29T13:09:55.4700318Z level=info msg="  --envoy-log=''" subsys=daemon
2020-06-29T13:09:55.4700603Z level=info msg="  --exclude-local-address=''" subsys=daemon
2020-06-29T13:09:55.4701091Z level=info msg="  --fixed-identity-mapping='map[]'" subsys=daemon
2020-06-29T13:09:55.4701582Z level=info msg="  --flannel-master-device=''" subsys=daemon
2020-06-29T13:09:55.4702122Z level=info msg="  --flannel-uninstall-on-exit='false'" subsys=daemon
2020-06-29T13:09:55.4702833Z level=info msg="  --force-local-policy-eval-at-source='true'" subsys=daemon
2020-06-29T13:09:55.4703121Z level=info msg="  --host-reachable-services-protos=''" subsys=daemon
2020-06-29T13:09:55.4703483Z level=info msg="  --http-403-msg=''" subsys=daemon
2020-06-29T13:09:55.4703820Z level=info msg="  --http-idle-timeout='0'" subsys=daemon
2020-06-29T13:09:55.4704212Z level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
2020-06-29T13:09:55.4704551Z level=info msg="  --http-request-timeout='3600'" subsys=daemon
2020-06-29T13:09:55.4705063Z level=info msg="  --http-retry-count='3'" subsys=daemon
2020-06-29T13:09:55.4705538Z level=info msg="  --http-retry-timeout='0'" subsys=daemon
2020-06-29T13:09:55.4706104Z level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
2020-06-29T13:09:55.4706417Z level=info msg="  --hubble-flow-buffer-size='4095'" subsys=daemon
2020-06-29T13:09:55.4706671Z level=info msg="  --hubble-listen-address=''" subsys=daemon
2020-06-29T13:09:55.4707135Z level=info msg="  --hubble-metrics=''" subsys=daemon
2020-06-29T13:09:55.4707641Z level=info msg="  --hubble-metrics-server=''" subsys=daemon
2020-06-29T13:09:55.4708005Z level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
2020-06-29T13:09:55.4708326Z level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
2020-06-29T13:09:55.4708633Z level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
2020-06-29T13:09:55.4709300Z level=info msg="  --install-iptables-rules='true'" subsys=daemon
2020-06-29T13:09:55.4709615Z level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
2020-06-29T13:09:55.4709949Z level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
2020-06-29T13:09:55.4710465Z level=info msg="  --ipam='cluster-pool'" subsys=daemon
2020-06-29T13:09:55.4710801Z level=info msg="  --ipsec-key-file=''" subsys=daemon
2020-06-29T13:09:55.4711118Z level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
2020-06-29T13:09:55.4711485Z level=info msg="  --ipv4-cluster-cidr-mask-size='8'" subsys=daemon
2020-06-29T13:09:55.4711942Z level=info msg="  --ipv4-node='auto'" subsys=daemon
2020-06-29T13:09:55.4712256Z level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
2020-06-29T13:09:55.4712744Z level=info msg="  --ipv4-range='auto'" subsys=daemon
2020-06-29T13:09:55.4713093Z level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
2020-06-29T13:09:55.4713768Z level=info msg="  --ipv4-service-range='auto'" subsys=daemon
2020-06-29T13:09:55.4715415Z level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
2020-06-29T13:09:55.4715772Z level=info msg="  --ipv6-node='auto'" subsys=daemon
2020-06-29T13:09:55.4716092Z level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
2020-06-29T13:09:55.4716466Z level=info msg="  --ipv6-range='auto'" subsys=daemon
2020-06-29T13:09:55.4716791Z level=info msg="  --ipv6-service-range='auto'" subsys=daemon
2020-06-29T13:09:55.4717121Z level=info msg="  --ipvlan-master-device='undefined'" subsys=daemon
2020-06-29T13:09:55.4717459Z level=info msg="  --k8s-api-server=''" subsys=daemon
2020-06-29T13:09:55.4717966Z level=info msg="  --k8s-force-json-patch='false'" subsys=daemon
2020-06-29T13:09:55.4718539Z level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
2020-06-29T13:09:55.4719141Z level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
2020-06-29T13:09:55.4719475Z level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
2020-06-29T13:09:55.4719806Z level=info msg="  --k8s-require-ipv4-pod-cidr='true'" subsys=daemon
2020-06-29T13:09:55.4720185Z level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
2020-06-29T13:09:55.4720662Z level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
2020-06-29T13:09:55.4721132Z level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
2020-06-29T13:09:55.4721502Z level=info msg="  --k8s-watcher-queue-size='1024'" subsys=daemon
2020-06-29T13:09:55.4721846Z level=info msg="  --keep-bpf-templates='false'" subsys=daemon
2020-06-29T13:09:55.4722578Z level=info msg="  --keep-config='false'" subsys=daemon
2020-06-29T13:09:55.4722902Z level=info msg="  --kube-proxy-replacement='probe'" subsys=daemon
2020-06-29T13:09:55.4723147Z level=info msg="  --kvstore=''" subsys=daemon
2020-06-29T13:09:55.4723457Z level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
2020-06-29T13:09:55.4723972Z level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
2020-06-29T13:09:55.4724327Z level=info msg="  --kvstore-opt='map[]'" subsys=daemon
2020-06-29T13:09:55.4724623Z level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
2020-06-29T13:09:55.4724921Z level=info msg="  --label-prefix-file=''" subsys=daemon
2020-06-29T13:09:55.4725200Z level=info msg="  --labels=''" subsys=daemon
2020-06-29T13:09:55.4725495Z level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
2020-06-29T13:09:55.4725860Z level=info msg="  --log-driver=''" subsys=daemon
2020-06-29T13:09:55.4726148Z level=info msg="  --log-opt='map[]'" subsys=daemon
2020-06-29T13:09:55.4726436Z level=info msg="  --log-system-load='false'" subsys=daemon
2020-06-29T13:09:55.4726768Z level=info msg="  --masquerade='true'" subsys=daemon
2020-06-29T13:09:55.4727065Z level=info msg="  --max-controller-interval='0'" subsys=daemon
2020-06-29T13:09:55.4727356Z level=info msg="  --metrics=''" subsys=daemon
2020-06-29T13:09:55.4727653Z level=info msg="  --monitor-aggregation='medium'" subsys=daemon
2020-06-29T13:09:55.4727955Z level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
2020-06-29T13:09:55.4728208Z level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
2020-06-29T13:09:55.4728506Z level=info msg="  --monitor-queue-size='0'" subsys=daemon
2020-06-29T13:09:55.4728781Z level=info msg="  --mtu='0'" subsys=daemon
2020-06-29T13:09:55.4729390Z level=info msg="  --nat46-range='0:0:0:0:0:FFFF::/96'" subsys=daemon
2020-06-29T13:09:55.4729759Z level=info msg="  --native-routing-cidr=''" subsys=daemon
2020-06-29T13:09:55.4730057Z level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
2020-06-29T13:09:55.4730380Z level=info msg="  --node-port-bind-protection='true'" subsys=daemon
2020-06-29T13:09:55.4730680Z level=info msg="  --node-port-mode='snat'" subsys=daemon
2020-06-29T13:09:55.4730970Z level=info msg="  --node-port-range=''" subsys=daemon
2020-06-29T13:09:55.4731215Z level=info msg="  --policy-audit-mode='false'" subsys=daemon
2020-06-29T13:09:55.4731508Z level=info msg="  --policy-queue-size='100'" subsys=daemon
2020-06-29T13:09:55.4731806Z level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
2020-06-29T13:09:55.4732091Z level=info msg="  --pprof='false'" subsys=daemon
2020-06-29T13:09:55.4732416Z level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
2020-06-29T13:09:55.4732721Z level=info msg="  --prefilter-device='undefined'" subsys=daemon
2020-06-29T13:09:55.4733025Z level=info msg="  --prefilter-mode='native'" subsys=daemon
2020-06-29T13:09:55.4733325Z level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
2020-06-29T13:09:55.4733569Z level=info msg="  --prometheus-serve-addr=''" subsys=daemon
2020-06-29T13:09:55.4733959Z level=info msg="  --proxy-connect-timeout='1'" subsys=daemon
2020-06-29T13:09:55.4734276Z level=info msg="  --read-cni-conf=''" subsys=daemon
2020-06-29T13:09:55.4734561Z level=info msg="  --restore='true'" subsys=daemon
2020-06-29T13:09:55.4734874Z level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
2020-06-29T13:09:55.4735272Z level=info msg="  --single-cluster-route='false'" subsys=daemon
2020-06-29T13:09:55.4735579Z level=info msg="  --skip-crd-creation='false'" subsys=daemon
2020-06-29T13:09:55.4735887Z level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
2020-06-29T13:09:55.4736186Z level=info msg="  --sockops-enable='false'" subsys=daemon
2020-06-29T13:09:55.4736441Z level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
2020-06-29T13:09:55.4736753Z level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
2020-06-29T13:09:55.4737071Z level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
2020-06-29T13:09:55.4737379Z level=info msg="  --tofqdns-enable-poller='false'" subsys=daemon
2020-06-29T13:09:55.4737682Z level=info msg="  --tofqdns-enable-poller-events='true'" subsys=daemon
2020-06-29T13:09:55.4738028Z level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
2020-06-29T13:09:55.4738362Z level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
2020-06-29T13:09:55.4738661Z level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
2020-06-29T13:09:55.4738949Z level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
2020-06-29T13:09:55.4739192Z level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
2020-06-29T13:09:55.4739500Z level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
2020-06-29T13:09:55.4739896Z level=info msg="  --trace-payloadlen='128'" subsys=daemon
2020-06-29T13:09:55.4740179Z level=info msg="  --tunnel='vxlan'" subsys=daemon
2020-06-29T13:09:55.4740476Z level=info msg="  --version='false'" subsys=daemon
2020-06-29T13:09:55.4740800Z level=info msg="  --write-cni-conf-when-ready=''" subsys=daemon
2020-06-29T13:09:55.4740954Z level=info msg="     _ _ _" subsys=daemon
2020-06-29T13:09:55.4741095Z level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
2020-06-29T13:09:55.4741192Z level=info msg="|  _| | | | | |     |" subsys=daemon
2020-06-29T13:09:55.4741340Z level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
2020-06-29T13:09:55.4741716Z level=info msg="Cilium 1.8.90 0490aede0 2020-06-25T22:22:09+10:00 go version go1.14.4 linux/amd64" subsys=daemon
2020-06-29T13:09:55.4742107Z level=info msg="cilium-envoy  version: a8f292139e923b205525feb2c8a4377005904776/1.13.2/Modified/RELEASE/BoringSSL" subsys=daemon
2020-06-29T13:09:55.4742473Z level=info msg="clang (10.0.0) and kernel (5.3.0) versions: OK!" subsys=linux-datapath
2020-06-29T13:09:55.4742782Z level=info msg="linking environment: OK!" subsys=linux-datapath
2020-06-29T13:09:55.4743164Z level=warning msg="BPF system config check: NOT OK." error="CONFIG_BPF kernel parameter is required" subsys=linux-datapath
2020-06-29T13:09:55.4743726Z level=warning msg="BPF filesystem is going to be mounted automatically in /run/cilium/bpffs. However, it probably means that Cilium is running inside container and BPFFS is not mounted on the host. for more information, see: https://cilium.link/err-bpf-mount" subsys=bpf
2020-06-29T13:09:55.4743938Z level=warning msg="================================= WARNING ==========================================" subsys=bpf
2020-06-29T13:09:55.4744112Z level=warning msg="BPF filesystem is not mounted. This will lead to network disruption when Cilium pods" subsys=bpf
2020-06-29T13:09:55.4744283Z level=warning msg="are restarted. Ensure that the BPF filesystem is mounted in the host." subsys=bpf
2020-06-29T13:09:55.4744673Z level=warning msg="https://docs.cilium.io/en/stable/kubernetes/requirements/#mounted-bpf-filesystem" subsys=bpf
2020-06-29T13:09:55.4745007Z level=warning msg="====================================================================================" subsys=bpf
2020-06-29T13:09:55.4745178Z level=info msg="Mounting BPF filesystem at /run/cilium/bpffs" subsys=bpf
2020-06-29T13:09:55.4745288Z level=info msg="Detected mounted BPF filesystem at /run/cilium/bpffs" subsys=bpf
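Note on the warnings above: cilium-agent could not find BPFFS mounted on the host and fell back to auto-mounting it at /run/cilium/bpffs inside its own mount namespace, which matches the suspected bpf-mount race. A minimal mitigation sketch, assuming shell access to the kind node (the container name `chart-testing-control-plane` comes from this run; the mount command is the one given in the Cilium docs):

    docker exec chart-testing-control-plane sh -c \
      'mountpoint -q /sys/fs/bpf || mount bpffs /sys/fs/bpf -t bpf'

Pre-mounting BPFFS on the host before Cilium starts lets agent restarts reuse the same pinned maps instead of disrupting the datapath.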
2020-06-29T13:09:55.4745657Z level=info msg="Valid label prefix configuration:" subsys=labels-filter
2020-06-29T13:09:55.4745971Z level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
2020-06-29T13:09:55.4746283Z level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
2020-06-29T13:09:55.4746581Z level=info msg=" - :app.kubernetes.io" subsys=labels-filter
2020-06-29T13:09:55.4746867Z level=info msg=" - !:io.kubernetes" subsys=labels-filter
2020-06-29T13:09:55.4747178Z level=info msg=" - !:kubernetes.io" subsys=labels-filter
2020-06-29T13:09:55.4747473Z level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
2020-06-29T13:09:55.4747756Z level=info msg=" - !:k8s.io" subsys=labels-filter
2020-06-29T13:09:55.4748007Z level=info msg=" - !:pod-template-generation" subsys=labels-filter
2020-06-29T13:09:55.4748339Z level=info msg=" - !:pod-template-hash" subsys=labels-filter
2020-06-29T13:09:55.4748702Z level=info msg=" - !:controller-revision-hash" subsys=labels-filter
2020-06-29T13:09:55.4749003Z level=info msg=" - !:annotation.*" subsys=labels-filter
2020-06-29T13:09:55.4749288Z level=info msg=" - !:etcd_node" subsys=labels-filter
2020-06-29T13:09:55.4749466Z level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.2.0.0/16
2020-06-29T13:09:55.4749623Z level=info msg="Initializing daemon" subsys=daemon
2020-06-29T13:09:55.4749809Z level=info msg="Establishing connection to apiserver" host="https://10.245.0.1:443" subsys=k8s
2020-06-29T13:09:55.4750073Z level=info msg="Connected to apiserver" subsys=k8s
2020-06-29T13:09:55.4750185Z level=info msg="Inheriting MTU from external network interface" device=eth0 ipAddr=172.18.0.2 mtu=1500 subsys=mtu
2020-06-29T13:09:55.4750724Z level=info msg="Trying to auto-enable \"enable-node-port\", \"enable-external-ips\", \"enable-host-reachable-services\", \"enable-host-port\", \"enable-session-affinity\" features" subsys=daemon
2020-06-29T13:09:55.4750959Z level=warning msg="Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host." subsys=daemon
2020-06-29T13:09:55.4751219Z level=info msg="Restored services from maps" failed=0 restored=0 subsys=service
2020-06-29T13:09:55.4751539Z level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=CiliumNetworkPolicy/v2 subsys=k8s
2020-06-29T13:09:55.4751914Z level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=CiliumClusterwideNetworkPolicy/v2 subsys=k8s
2020-06-29T13:09:55.4752280Z level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=v2.CiliumEndpoint subsys=k8s
2020-06-29T13:09:55.4764081Z level=info msg="Creating CRD (CustomResourceDefinition)..." name=v2.CiliumNode subsys=k8s
2020-06-29T13:09:55.4764627Z level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=v2.CiliumNode subsys=k8s
2020-06-29T13:09:55.4765007Z level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=v2.CiliumIdentity subsys=k8s
2020-06-29T13:09:55.4765205Z level=info msg="Successfully created CiliumNode resource" subsys=nodediscovery
2020-06-29T13:09:55.4765581Z level=info msg="Retrieved node information from cilium node" nodeName=chart-testing-control-plane subsys=k8s
2020-06-29T13:09:55.4765765Z level=warning msg="Waiting for k8s node information" error="required IPv4 pod CIDR not present in node resource" subsys=k8s
2020-06-29T13:09:55.4766197Z level=info msg="Retrieved node information from cilium node" nodeName=chart-testing-control-plane subsys=k8s
2020-06-29T13:09:55.4766511Z level=warning msg="Waiting for k8s node information" error="required IPv4 pod CIDR not present in node resource" subsys=k8s
2020-06-29T13:09:55.4766916Z level=info msg="Retrieved node information from cilium node" nodeName=chart-testing-control-plane subsys=k8s
2020-06-29T13:09:55.4767158Z level=warning msg="Waiting for k8s node information" error="required IPv4 pod CIDR not present in node resource" subsys=k8s
2020-06-29T13:09:55.4767536Z level=info msg="Retrieved node information from cilium node" nodeName=chart-testing-control-plane subsys=k8s
2020-06-29T13:09:55.4767663Z level=warning msg="Waiting for k8s node information" error="required IPv4 pod CIDR not present in node resource" subsys=k8s
2020-06-29T13:09:55.4768041Z level=info msg="Retrieved node information from cilium node" nodeName=chart-testing-control-plane subsys=k8s
2020-06-29T13:09:55.4768225Z level=warning msg="Waiting for k8s node information" error="required IPv4 pod CIDR not present in node resource" subsys=k8s
2020-06-29T13:09:55.4768588Z level=info msg="Retrieved node information from cilium node" nodeName=chart-testing-control-plane subsys=k8s
2020-06-29T13:09:55.4769075Z level=info msg="Received own node information from API server" ipAddr.ipv4=172.18.0.2 ipAddr.ipv6="<nil>" k8sNodeIP=172.18.0.2 nodeName=chart-testing-control-plane subsys=k8s v4Prefix=10.0.1.0/24 v6Prefix="<nil>"
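The retry loop above ("Waiting for k8s node information") is the agent waiting for cilium-operator to allocate a pod CIDR, since ipam is set to cluster-pool in this run. A quick way to inspect allocation status during such a hang, assuming the v2 CiliumNode CRD is installed (the column path is an assumption based on that schema):

    kubectl get ciliumnodes -o custom-columns=NAME:.metadata.name,PODCIDRS:.spec.ipam.podCIDRs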
2020-06-29T13:09:55.4769340Z level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
2020-06-29T13:09:55.4769725Z level=info msg="Using auto-derived devices for BPF node port" devices="[eth0]" directRoutingDevice=eth0 subsys=daemon
2020-06-29T13:09:55.4770086Z level=info msg="Enabling k8s event listener" subsys=k8s-watcher
2020-06-29T13:09:55.4770262Z level=info msg="Removing stale endpoint interfaces" subsys=daemon
2020-06-29T13:09:55.4770626Z level=info msg="Waiting until all pre-existing resources related to policy have been received" subsys=k8s-watcher
2020-06-29T13:09:55.4770891Z level=info msg="Initializing node addressing" subsys=daemon
2020-06-29T13:09:55.4771285Z level=info msg="Initializing cluster-pool IPAM" subsys=ipam v4Prefix=10.0.1.0/24 v6Prefix="<nil>"
2020-06-29T13:09:55.4771450Z level=info msg="Restoring endpoints..." subsys=daemon
2020-06-29T13:09:55.4771551Z level=info msg="No old endpoints found." subsys=daemon
2020-06-29T13:09:55.4771699Z level=info msg="Addressing information:" subsys=daemon
2020-06-29T13:09:55.4771999Z level=info msg="  Cluster-Name: default" subsys=daemon
2020-06-29T13:09:55.4772303Z level=info msg="  Cluster-ID: 0" subsys=daemon
2020-06-29T13:09:55.4772612Z level=info msg="  Local node-name: chart-testing-control-plane" subsys=daemon
2020-06-29T13:09:55.4772908Z level=info msg="  Node-IPv6: <nil>" subsys=daemon
2020-06-29T13:09:55.4773238Z level=info msg="  External-Node IPv4: 172.18.0.2" subsys=daemon
2020-06-29T13:09:55.4773549Z level=info msg="  Internal-Node IPv4: 10.0.1.185" subsys=daemon
2020-06-29T13:09:55.4773663Z level=info msg="  IPv4 allocation prefix: 10.0.1.0/24" subsys=daemon
2020-06-29T13:09:55.4773818Z level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
2020-06-29T13:09:55.4773969Z level=info msg="  Local IPv4 addresses:" subsys=daemon
2020-06-29T13:09:55.4774274Z level=info msg="  - 172.18.0.2" subsys=daemon
2020-06-29T13:09:55.4774453Z level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4=10.0.1.185 v4Prefix=10.0.1.0/24 v4healthIP.IPv4=10.0.1.89 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
2020-06-29T13:09:55.4774817Z level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
2020-06-29T13:09:55.4775278Z level=info msg="Adding local node to cluster" node="{chart-testing-control-plane default [{InternalIP 172.18.0.2} {CiliumInternalIP 10.0.1.185}] 10.0.1.0/24 <nil> 10.0.1.89 <nil> 0 local 0 map[]}" subsys=nodediscovery
2020-06-29T13:09:55.4775728Z level=info msg="Initializing identity allocator" subsys=identity-cache
2020-06-29T13:09:55.4776076Z level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
2020-06-29T13:09:55.4776510Z level=info msg="Setting up base BPF datapath (BPF v2 instruction set, ktime clock source)" subsys=datapath-loader
2020-06-29T13:09:55.4776886Z level=info msg="Setting sysctl net.core.bpf_jit_enable=1" subsys=datapath-loader
2020-06-29T13:09:55.4777888Z level=warning msg="Failed to sysctl -w" error="could not open the sysctl file /proc/sys/net/core/bpf_jit_enable: open /proc/sys/net/core/bpf_jit_enable: no such file or directory" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1
2020-06-29T13:09:55.4778352Z level=info msg="Setting sysctl net.ipv4.conf.all.rp_filter=0" subsys=datapath-loader
2020-06-29T13:09:55.4778642Z level=info msg="Setting sysctl kernel.unprivileged_bpf_disabled=1" subsys=datapath-loader
2020-06-29T13:09:55.4779003Z level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
2020-06-29T13:09:55.4779380Z level=info msg="Blacklisting local route as no-alloc" route=172.18.0.0/16 subsys=ipam
2020-06-29T13:09:55.4779988Z level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
2020-06-29T13:09:55.4780416Z level=info msg="Adding new proxy port rules for cilium-dns-egress:45789" proxy port name=cilium-dns-egress subsys=proxy
2020-06-29T13:09:55.4780588Z level=info msg="Starting IP identity watcher" subsys=ipcache
2020-06-29T13:09:55.4780895Z level=info msg="Validating configured node address ranges" subsys=daemon
2020-06-29T13:09:55.4781057Z level=info msg="Starting connection tracking garbage collector" subsys=daemon
2020-06-29T13:09:55.4781212Z level=info msg="Datapath signal listener running" subsys=signal
2020-06-29T13:09:55.4781593Z level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
2020-06-29T13:09:55.4781824Z level=info msg="Skipping kvstore configuration" subsys=daemon
2020-06-29T13:09:55.4781981Z level=info msg="Regenerating restored endpoints" numRestored=0 subsys=daemon
2020-06-29T13:09:55.4782136Z level=info msg="Creating host endpoint" subsys=daemon
2020-06-29T13:09:55.4782312Z level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3813 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4782538Z level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3813 identityLabels="reserved:host" ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4782757Z level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3813 identity=1 identityLabels="reserved:host" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
2020-06-29T13:09:55.4782942Z level=info msg="Launching Cilium health daemon" subsys=daemon
2020-06-29T13:09:55.4783101Z level=info msg="Finished regenerating restored endpoints" regenerated=0 subsys=daemon total=0
2020-06-29T13:09:55.4783284Z level=info msg="Launching Cilium health endpoint" subsys=daemon
2020-06-29T13:09:55.4783447Z level=info msg="Started healthz status API server on address localhost:9876" subsys=daemon
2020-06-29T13:09:55.4783660Z level=info msg="Initializing Cilium API" subsys=daemon
2020-06-29T13:09:55.4783827Z level=info msg="Daemon initialization completed" bootstrapTime=25.885378875s subsys=daemon
2020-06-29T13:09:55.4783938Z level=info msg="Hubble server is disabled" subsys=hubble
2020-06-29T13:09:55.4784091Z level=info msg="Serving cilium at unix:///var/run/cilium/cilium.sock" subsys=daemon
2020-06-29T13:09:55.4784928Z level=info msg="Create endpoint request" addressing="&{10.0.1.108 ef968009-ba06-11ea-a468-0242ac120002  }" containerID=e8e6c4d6a2e90a493b144d28d6c1e6002205b3e0e3611b13bcc6d02c9446035c datapathConfiguration="<nil>" interface=lxc490e842d5f89 k8sPodName=kube-system/coredns-66bff467f8-5pmv4 labels="[]" subsys=daemon sync-build=true
2020-06-29T13:09:55.4785168Z level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2564 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4786086Z level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2564 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns" ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4786731Z level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
2020-06-29T13:09:55.4787170Z level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
2020-06-29T13:09:55.4787800Z level=info msg="Invalid state transition skipped" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2564 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=476 subsys=endpoint
2020-06-29T13:09:55.4788038Z level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3145 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4791723Z level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3145 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4794017Z level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3145 identity=4 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
2020-06-29T13:09:55.4795145Z level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=coredns;k8s:io.kubernetes.pod.namespace=kube-system;k8s:k8s-app=kube-dns;" subsys=allocator
2020-06-29T13:09:55.4797095Z level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2564 identity=2057 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
2020-06-29T13:09:55.4797351Z level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2564 identity=2057 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4797781Z level=info msg="regenerating all endpoints" reason="Named ports added or updated, one or more identities created or deleted" subsys=endpoint-manager
2020-06-29T13:09:55.4798181Z level=info msg="regenerating all endpoints" reason= subsys=endpoint-manager
2020-06-29T13:09:55.4798626Z level=info msg="Compiled new BPF template" BPFCompilationTime=3.453061608s file-path=/var/run/cilium/state/templates/9ad2862c3e76bd06f4f5940f8fbf826455c9f6eb/bpf_host.o subsys=datapath-loader
2020-06-29T13:09:55.4799090Z level=info msg="Compiled new BPF template" BPFCompilationTime=3.212101313s file-path=/var/run/cilium/state/templates/6a073cb9f3ef965cfc38b7b3cb64d8f9f1c6cd40/bpf_lxc.o subsys=datapath-loader
2020-06-29T13:09:55.4799291Z level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3145 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4799483Z level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3813 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4799674Z level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=2 endpointID=2564 identity=2057 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4799977Z level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=2 desiredPolicyRevision=2 endpointID=2564 identity=2057 ipv4= ipv6= k8sPodName=/ subsys=daemon
2020-06-29T13:09:55.4800371Z level=info msg="Serving cilium health at unix:///var/run/cilium/health.sock" subsys=health-server
2020-06-29T13:09:55.4800778Z level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
2020-06-29T13:09:55.4802960Z level=info msg="Policy Add Request" ciliumNetworkPolicy="[&{EndpointSelector:{\"matchLabels\":{\"any:name\":\"pod-to-a-allowed-cnp\",\"k8s:io.kubernetes.pod.namespace\":\"default\"}} NodeSelector:{} Ingress:[] Egress:[{ToEndpoints:[{\"matchLabels\":{\"any:name\":\"echo-a\",\"k8s:io.kubernetes.pod.namespace\":\"default\"}}] ToRequires:[] ToPorts:[{Ports:[{Port:80 Protocol:TCP}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:<nil>}] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToFQDNs:[] ToGroups:[] aggregatedSelectors:[]} {ToEndpoints:[{\"matchLabels\":{\"k8s:io.kubernetes.pod.namespace\":\"kube-system\",\"k8s:k8s-app\":\"kube-dns\"}}] ToRequires:[] ToPorts:[{Ports:[{Port:53 Protocol:UDP}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:<nil>}] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToFQDNs:[] ToGroups:[] aggregatedSelectors:[]} {ToEndpoints:[{\"matchLabels\":{\"k8s:dns.operator.openshift.io/daemonset-dns\":\"default\",\"k8s:io.kubernetes.pod.namespace\":\"openshift-dns\"}}] ToRequires:[] ToPorts:[{Ports:[{Port:5353 Protocol:UDP}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:<nil>}] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToFQDNs:[] ToGroups:[] aggregatedSelectors:[]}] Labels:[k8s:io.cilium.k8s.policy.derived-from=CiliumNetworkPolicy k8s:io.cilium.k8s.policy.name=pod-to-a-allowed-cnp k8s:io.cilium.k8s.policy.namespace=default k8s:io.cilium.k8s.policy.uid=c6cdab39-4652-40db-ad08-da50d6d100a4] Description:}]" policyAddRequest=fa05bba4-ba06-11ea-a468-0242ac120002 subsys=daemon
2020-06-29T13:09:55.4803783Z level=info msg="Policy imported via API, recalculating..." policyAddRequest=fa05bba4-ba06-11ea-a468-0242ac120002 policyRevision=3 subsys=daemon
2020-06-29T13:09:55.4804209Z level=info msg="Imported CiliumNetworkPolicy" ciliumNetworkPolicyName=pod-to-a-allowed-cnp k8sApiVersion= k8sNamespace=default subsys=k8s-watcher
2020-06-29T13:09:55.4805673Z level=info msg="Policy Add Request" ciliumNetworkPolicy="[&{EndpointSelector:{\"matchLabels\":{\"any:name\":\"pod-to-a-l3-denied-cnp\",\"k8s:io.kubernetes.pod.namespace\":\"default\"}} NodeSelector:{} Ingress:[] Egress:[{ToEndpoints:[{\"matchLabels\":{\"k8s:io.kubernetes.pod.namespace\":\"kube-system\",\"k8s:k8s-app\":\"kube-dns\"}}] ToRequires:[] ToPorts:[{Ports:[{Port:53 Protocol:UDP}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:<nil>}] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToFQDNs:[] ToGroups:[] aggregatedSelectors:[]} {ToEndpoints:[{\"matchLabels\":{\"k8s:dns.operator.openshift.io/daemonset-dns\":\"default\",\"k8s:io.kubernetes.pod.namespace\":\"openshift-dns\"}}] ToRequires:[] ToPorts:[{Ports:[{Port:5353 Protocol:UDP}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:<nil>}] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToFQDNs:[] ToGroups:[] aggregatedSelectors:[]}] Labels:[k8s:io.cilium.k8s.policy.derived-from=CiliumNetworkPolicy k8s:io.cilium.k8s.policy.name=pod-to-a-l3-denied-cnp k8s:io.cilium.k8s.policy.namespace=default k8s:io.cilium.k8s.policy.uid=02e53dc6-7222-4a4d-bc8a-c0865413e358] Description:}]" policyAddRequest=fa21e183-ba06-11ea-a468-0242ac120002 subsys=daemon
2020-06-29T13:09:55.4806236Z level=info msg="Policy imported via API, recalculating..." policyAddRequest=fa21e183-ba06-11ea-a468-0242ac120002 policyRevision=4 subsys=daemon
2020-06-29T13:09:55.4806675Z level=info msg="Imported CiliumNetworkPolicy" ciliumNetworkPolicyName=pod-to-a-l3-denied-cnp k8sApiVersion= k8sNamespace=default subsys=k8s-watcher
2020-06-29T13:09:55.4808613Z level=info msg="Policy Add Request" ciliumNetworkPolicy="[&{EndpointSelector:{\"matchLabels\":{\"any:name\":\"pod-to-external-fqdn-allow-google-cnp\",\"k8s:io.kubernetes.pod.namespace\":\"default\"}} NodeSelector:{} Ingress:[] Egress:[{ToEndpoints:[{\"matchLabels\":{\"k8s:io.kubernetes.pod.namespace\":\"kube-system\",\"k8s:k8s-app\":\"kube-dns\"}}] ToRequires:[] ToPorts:[{Ports:[{Port:53 Protocol:ANY}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:0xc0011ebb20}] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToFQDNs:[] ToGroups:[] aggregatedSelectors:[]} {ToEndpoints:[{\"matchLabels\":{\"k8s:dns.operator.openshift.io/daemonset-dns\":\"default\",\"k8s:io.kubernetes.pod.namespace\":\"openshift-dns\"}}] ToRequires:[] ToPorts:[{Ports:[{Port:5353 Protocol:UDP}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:0xc0011ebc00}] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToFQDNs:[] ToGroups:[] aggregatedSelectors:[]} {ToEndpoints:[] ToRequires:[] ToPorts:[] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToFQDNs:[{MatchName: MatchPattern:*.google.com}] ToGroups:[] aggregatedSelectors:[{LabelSelector:0xc0004bbde0 requirements:0xc0004bbe40 cachedLabelSelectorString:&LabelSelector{MatchLabels:map[string]string{reserved.none: ,},MatchExpressions:[]LabelSelectorRequirement{},}}]}] Labels:[k8s:io.cilium.k8s.policy.derived-from=CiliumNetworkPolicy k8s:io.cilium.k8s.policy.name=pod-to-external-fqdn-allow-google-cnp k8s:io.cilium.k8s.policy.namespace=default k8s:io.cilium.k8s.policy.uid=a2b64742-c146-4c07-aa37-c9afcb8ae0f1] Description:}]" policyAddRequest=fa76e283-ba06-11ea-a468-0242ac120002 subsys=daemon
2020-06-29T13:09:55.4809371Z level=info msg="Policy imported via API, recalculating..." policyAddRequest=fa76e283-ba06-11ea-a468-0242ac120002 policyRevision=5 subsys=daemon
2020-06-29T13:09:55.4809783Z level=info msg="Imported CiliumNetworkPolicy" ciliumNetworkPolicyName=pod-to-external-fqdn-allow-google-cnp k8sApiVersion= k8sNamespace=default subsys=k8s-watcher
2020-06-29T13:09:55.4810264Z level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
2020-06-29T13:09:55.4811413Z level=info msg="Create endpoint request" addressing="&{10.0.1.29 fb0d51ad-ba06-11ea-a468-0242ac120002  }" containerID=8d255969ab7e226b320f543c2b75d3ab0abd390810c5103541a1c5ef07025182 datapathConfiguration="<nil>" interface=lxc4e24035ecaaa k8sPodName=default/pod-to-b-multi-node-clusterip-75f5c78f68-br7hc labels="[]" subsys=daemon sync-build=true
2020-06-29T13:09:55.4811679Z level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2221 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4812360Z level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2221 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:name=pod-to-b-multi-node-clusterip" ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4812897Z level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
2020-06-29T13:09:55.4813393Z level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:name=pod-to-b-multi-node-clusterip;" subsys=allocator
2020-06-29T13:09:55.4965191Z level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2221 identity=54396 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:name=pod-to-b-multi-node-clusterip" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
2020-06-29T13:09:55.4965456Z level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2221 identity=54396 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4966502Z level=info msg="Create endpoint request" addressing="&{10.0.1.187 fb291995-ba06-11ea-a468-0242ac120002  }" containerID=b42ff00e7577dd16eab8f523886336d09802ccabd462ac0903e0df3473844929 datapathConfiguration="<nil>" interface=lxc24fe6c286998 k8sPodName=default/pod-to-b-multi-node-headless-5df88f9bd4-6slv7 labels="[]" subsys=daemon sync-build=true
2020-06-29T13:09:55.4966952Z level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
2020-06-29T13:09:55.4967118Z level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=75 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4967730Z level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=75 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:name=pod-to-b-multi-node-headless" ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4968242Z level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
2020-06-29T13:09:55.4968727Z level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:name=pod-to-b-multi-node-headless;" subsys=allocator
2020-06-29T13:09:55.4969365Z level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=75 identity=20971 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:name=pod-to-b-multi-node-headless" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
2020-06-29T13:09:55.4969870Z level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=75 identity=20971 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4970496Z level=info msg="Create endpoint request" addressing="&{10.0.1.183 fb52a99c-ba06-11ea-a468-0242ac120002  }" containerID=30bcd70a35dbf9d445e8da4d9c4509893d514cdde1e6792fdb11691b232391eb datapathConfiguration="<nil>" interface=lxcd5fbda702c44 k8sPodName=default/pod-to-b-multi-node-nodeport-55b9769455-d5bbb labels="[]" subsys=daemon sync-build=true
2020-06-29T13:09:55.4970790Z level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3314 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4971383Z level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3314 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:name=pod-to-b-multi-node-nodeport" ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4971839Z level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
2020-06-29T13:09:55.4972013Z level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=5 endpointID=2221 identity=54396 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4972185Z level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=5 desiredPolicyRevision=5 endpointID=2221 identity=54396 ipv4= ipv6= k8sPodName=/ subsys=daemon
2020-06-29T13:09:55.4972351Z level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=5 endpointID=75 identity=20971 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4972516Z level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=5 desiredPolicyRevision=5 endpointID=75 identity=20971 ipv4= ipv6= k8sPodName=/ subsys=daemon
2020-06-29T13:09:55.4973062Z level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:name=pod-to-b-multi-node-nodeport;" subsys=allocator
2020-06-29T13:09:55.4973705Z level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3314 identity=2283 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:name=pod-to-b-multi-node-nodeport" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
2020-06-29T13:09:55.4974100Z level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3314 identity=2283 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4974282Z level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=5 endpointID=3314 identity=2283 ipv4= ipv6= k8sPodName=/ subsys=endpoint
2020-06-29T13:09:55.4974459Z level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=5 desiredPolicyRevision=5 endpointID=3314 identity=2283 ipv4= ipv6= k8sPodName=/ subsys=daemon
2020-06-29T13:09:55.4974819Z level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
2020-06-29T13:09:55.4975159Z level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
2020-06-29T13:09:55.4975492Z level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
2020-06-29T13:09:55.4975832Z level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
2020-06-29T13:09:55.4976225Z level=info msg="regenerating all endpoints" reason= subsys=endpoint-manager
2020-06-29T13:09:55.4976580Z level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.0209808349609375 newInterval=7m30s subsys=map-ct
2020-06-29T13:09:55.4977461Z level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.0460205078125 newInterval=11m15s subsys=map-ct
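The agent log above ends healthy, so the "timed out waiting for the condition on pods" error may well come from the workflow's wait step rather than from Cilium itself; that string is the message kubectl wait prints on timeout. A hedged sketch of a wait-then-dump step (the timeout value and selectors are assumptions, not copied from the actual workflow):

    kubectl wait --for=condition=Ready pods --all --timeout=300s || {
      kubectl get pods --all-namespaces -o wide
      kubectl -n kube-system logs -l k8s-app=cilium --since=30m
    }

On timeout this prints pod state and agent logs in the same step, instead of only the one-line error.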
2020-06-29T13:09:55.5014789Z ##[group]Run kubectl describe service echo-a
2020-06-29T13:09:55.5015215Z �[36;1mkubectl describe service echo-a�[0m
2020-06-29T13:09:55.5015341Z �[36;1mkubectl logs service/echo-a --all-containers --since=$LOG_TIME�[0m
2020-06-29T13:09:55.5015474Z �[36;1mkubectl describe service echo-b�[0m
2020-06-29T13:09:55.5015604Z �[36;1mkubectl logs service/echo-b --all-containers --since=$LOG_TIME�[0m
2020-06-29T13:09:55.5015734Z �[36;1mkubectl describe service echo-b-headless�[0m
2020-06-29T13:09:55.5015864Z �[36;1mkubectl logs service/echo-b-headless --all-containers --since=$LOG_TIME�[0m
2020-06-29T13:09:55.5015984Z �[36;1mkubectl describe service echo-b-host-headless�[0m
2020-06-29T13:09:55.5016118Z �[36;1mkubectl logs service/echo-b-host-headless --all-containers --since=$LOG_TIME�[0m
2020-06-29T13:09:55.5057274Z shell: /bin/bash -e {0}
2020-06-29T13:09:55.5057407Z env:
2020-06-29T13:09:55.5057533Z   KIND_VERSION: v0.8.1
2020-06-29T13:09:55.5057878Z   KIND_CONFIG: .github/kind-config.yaml
2020-06-29T13:09:55.5058019Z   CONFORMANCE_TEMPLATE: examples/kubernetes/connectivity-check/connectivity-check.yaml
2020-06-29T13:09:55.5058152Z   LOG_TIME: 30m
2020-06-29T13:09:55.5058259Z ##[endgroup]
2020-06-29T13:09:55.6677199Z Name:              echo-a
2020-06-29T13:09:55.6677706Z Namespace:         default
2020-06-29T13:09:55.6678358Z Labels:            <none>
2020-06-29T13:09:55.6678905Z Annotations:       kubectl.kubernetes.io/last-applied-configuration:
2020-06-29T13:09:55.6679289Z                      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"echo-a","namespace":"default"},"spec":{"ports":[{"port":80}],"sel...
2020-06-29T13:09:55.6679563Z Selector:          name=echo-a
2020-06-29T13:09:55.6679670Z Type:              ClusterIP
2020-06-29T13:09:55.6679774Z IP:                10.245.237.92
2020-06-29T13:09:55.6679880Z Port:              <unset>  80/TCP
2020-06-29T13:09:55.6679965Z TargetPort:        80/TCP
2020-06-29T13:09:55.6680070Z Endpoints:         10.0.0.90:80
2020-06-29T13:09:55.6680181Z Session Affinity:  None
2020-06-29T13:09:55.6680281Z Events:            <none>
2020-06-29T13:09:55.7621087Z 
2020-06-29T13:09:55.7621609Z   \{^_^}/ hi!
2020-06-29T13:09:55.7621757Z 
2020-06-29T13:09:55.7622188Z   Loading /default.json
2020-06-29T13:09:55.7622427Z   Done
2020-06-29T13:09:55.7622833Z 
2020-06-29T13:09:55.7623023Z   Resources
2020-06-29T13:09:55.7623692Z   http://0.0.0.0:80/private
2020-06-29T13:09:55.7623861Z   http://0.0.0.0:80/public
2020-06-29T13:09:55.7623924Z 
2020-06-29T13:09:55.7624019Z   Home
2020-06-29T13:09:55.7624127Z   http://0.0.0.0:80
2020-06-29T13:09:55.7624216Z 
2020-06-29T13:09:55.7624319Z   Type s + enter at any time to create a snapshot of the database
2020-06-29T13:09:55.7624427Z   Watching...
2020-06-29T13:09:55.7624473Z 
2020-06-29T13:09:55.8412100Z Name:                     echo-b
2020-06-29T13:09:55.8412640Z Namespace:                default
2020-06-29T13:09:55.8412887Z Labels:                   <none>
2020-06-29T13:09:55.8413467Z Annotations:              kubectl.kubernetes.io/last-applied-configuration:
2020-06-29T13:09:55.8414044Z                             {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"echo-b","namespace":"default"},"spec":{"ports":[{"nodePort":31313...
2020-06-29T13:09:55.8414518Z Selector:                 name=echo-b
2020-06-29T13:09:55.8414749Z Type:                     NodePort
2020-06-29T13:09:55.8414969Z IP:                       10.245.217.175
2020-06-29T13:09:55.8415197Z Port:                     <unset>  80/TCP
2020-06-29T13:09:55.8415385Z TargetPort:               80/TCP
2020-06-29T13:09:55.8416513Z NodePort:                 <unset>  31313/TCP
2020-06-29T13:09:55.8416624Z Endpoints:                10.0.0.211:80
2020-06-29T13:09:55.8416730Z Session Affinity:         None
2020-06-29T13:09:55.8416835Z External Traffic Policy:  Cluster
2020-06-29T13:09:55.8416922Z Events:                   <none>
2020-06-29T13:09:55.9267854Z 
2020-06-29T13:09:55.9268408Z   \{^_^}/ hi!
2020-06-29T13:09:55.9268916Z 
2020-06-29T13:09:55.9269209Z   Loading /default.json
2020-06-29T13:09:55.9269454Z   Done
2020-06-29T13:09:55.9269620Z 
2020-06-29T13:09:55.9269817Z   Resources
2020-06-29T13:09:55.9270087Z   http://0.0.0.0:80/private
2020-06-29T13:09:55.9270309Z   http://0.0.0.0:80/public
2020-06-29T13:09:55.9270489Z 
2020-06-29T13:09:55.9270688Z   Home
2020-06-29T13:09:55.9270884Z   http://0.0.0.0:80
2020-06-29T13:09:55.9271421Z 
2020-06-29T13:09:55.9271676Z   Type s + enter at any time to create a snapshot of the database
2020-06-29T13:09:55.9271900Z   Watching...
2020-06-29T13:09:55.9272087Z 
2020-06-29T13:09:56.0221960Z Name:              echo-b-headless
2020-06-29T13:09:56.0222305Z Namespace:         default
2020-06-29T13:09:56.0222423Z Labels:            <none>
2020-06-29T13:09:56.0222734Z Annotations:       kubectl.kubernetes.io/last-applied-configuration:
2020-06-29T13:09:56.0223607Z                      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"echo-b-headless","namespace":"default"},"spec":{"clusterIP":"None...
2020-06-29T13:09:56.0223897Z Selector:          name=echo-b
2020-06-29T13:09:56.0224019Z Type:              ClusterIP
2020-06-29T13:09:56.0224117Z IP:                None
2020-06-29T13:09:56.0224230Z Port:              <unset>  80/TCP
2020-06-29T13:09:56.0224343Z TargetPort:        80/TCP
2020-06-29T13:09:56.0224458Z Endpoints:         10.0.0.211:80
2020-06-29T13:09:56.0224567Z Session Affinity:  None
2020-06-29T13:09:56.0224678Z Events:            <none>
2020-06-29T13:09:56.1136724Z 
2020-06-29T13:09:56.1137162Z   \{^_^}/ hi!
2020-06-29T13:09:56.1138348Z 
2020-06-29T13:09:56.1139682Z   Loading /default.json
2020-06-29T13:09:56.1140530Z   Done
2020-06-29T13:09:56.1141336Z 
2020-06-29T13:09:56.1142318Z   Resources
2020-06-29T13:09:56.1143243Z   http://0.0.0.0:80/private
2020-06-29T13:09:56.1144668Z   http://0.0.0.0:80/public
2020-06-29T13:09:56.1145409Z 
2020-06-29T13:09:56.1146480Z   Home
2020-06-29T13:09:56.1150118Z   http://0.0.0.0:80
2020-06-29T13:09:56.1150579Z 
2020-06-29T13:09:56.1150843Z   Type s + enter at any time to create a snapshot of the database
2020-06-29T13:09:56.1150986Z   Watching...
2020-06-29T13:09:56.1155770Z 
2020-06-29T13:09:56.2249014Z Name:              echo-b-host-headless
2020-06-29T13:09:56.2249480Z Namespace:         default
2020-06-29T13:09:56.2249650Z Labels:            <none>
2020-06-29T13:09:56.2250449Z Annotations:       kubectl.kubernetes.io/last-applied-configuration:
2020-06-29T13:09:56.2250883Z                      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"echo-b-host-headless","namespace":"default"},"spec":{"clusterIP":...
2020-06-29T13:09:56.2251225Z Selector:          name=echo-b-host
2020-06-29T13:09:56.2251349Z Type:              ClusterIP
2020-06-29T13:09:56.2251448Z IP:                None
2020-06-29T13:09:56.2251562Z Session Affinity:  None
2020-06-29T13:09:56.2251674Z Events:            <none>
2020-06-29T13:09:56.3481300Z 
2020-06-29T13:09:56.3481508Z   \{^_^}/ hi!
2020-06-29T13:09:56.3481581Z 
2020-06-29T13:09:56.3481691Z   Loading /default.json
2020-06-29T13:09:56.3481802Z   Done
2020-06-29T13:09:56.3481869Z 
2020-06-29T13:09:56.3481971Z   Resources
2020-06-29T13:09:56.3482114Z   http://0.0.0.0:41000/private
2020-06-29T13:09:56.3482412Z   http://0.0.0.0:41000/public
2020-06-29T13:09:56.3482670Z 
2020-06-29T13:09:56.3482771Z   Home
2020-06-29T13:09:56.3482877Z   http://0.0.0.0:41000
2020-06-29T13:09:56.3482961Z 
2020-06-29T13:09:56.3483072Z   Type s + enter at any time to create a snapshot of the database
2020-06-29T13:09:56.3483190Z   Watching...
2020-06-29T13:09:56.3483241Z 
2020-06-29T13:09:56.3564551Z Post job cleanup.
2020-06-29T13:09:56.5130341Z [command]/usr/bin/git version
2020-06-29T13:09:56.5226180Z git version 2.27.0
2020-06-29T13:09:56.5266144Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2020-06-29T13:09:56.5313901Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :
2020-06-29T13:09:56.5734540Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2020-06-29T13:09:56.5772035Z http.https://github.com/.extraheader
2020-06-29T13:09:56.5783886Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader
2020-06-29T13:09:56.5835431Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :
2020-06-29T13:09:56.6311352Z Cleaning up orphan processes

@pchaigno pchaigno added this to To Do (1.8 - Rare Flakes) in CI Force Jun 29, 2020
@pchaigno pchaigno added the ci/flake This is a known failure that occurs in the tree. Please investigate me! label Jun 29, 2020
@sayboras sayboras self-assigned this Jul 27, 2020
@pchaigno
Member

pchaigno commented Sep 2, 2020

Happened again in #13045, but this time we have the kubectl describe logs. All pods that failed to come up have the same timeouts on readiness and liveness probes:

Name:         pod-to-external-fqdn-allow-google-cnp-74466b4c6f-pnk85
[...]
Events:
  Type     Reason     Age                   From                                  Message
  ----     ------     ----                  ----                                  -------
  Normal   Scheduled  18m                   default-scheduler                     Successfully assigned default/pod-to-external-fqdn-allow-google-cnp-74466b4c6f-pnk85 to chart-testing-control-plane
  Normal   Pulling    18m                   kubelet, chart-testing-control-plane  Pulling image "docker.io/byrnedo/alpine-curl:0.1.8"
  Normal   Pulled     18m                   kubelet, chart-testing-control-plane  Successfully pulled image "docker.io/byrnedo/alpine-curl:0.1.8"
  Normal   Created    18m                   kubelet, chart-testing-control-plane  Created container pod-to-external-fqdn-allow-google-cnp-container
  Normal   Started    18m                   kubelet, chart-testing-control-plane  Started container pod-to-external-fqdn-allow-google-cnp-container
  Warning  Unhealthy  16m (x10 over 18m)    kubelet, chart-testing-control-plane  Readiness probe errored: rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 1s exceeded: context deadline exceeded
  Warning  Unhealthy  3m19s (x90 over 18m)  kubelet, chart-testing-control-plane  Liveness probe errored: rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 1s exceeded: context deadline exceeded

logs_23367.zip

We may have to bump some timeouts to determine whether it's DNS failing or some other packet drop.
Maybe we should also collect packet drops with Hubble?
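
Something like this could do it, as a minimal sketch (assuming Hubble is enabled in the smoke-test config and the CLI is run from inside the agent pod):

# Sketch: list recent dropped flows via the Hubble CLI embedded in the
# Cilium agent (namespace and DaemonSet name assumed from a default install).
kubectl -n kube-system exec ds/cilium -- hubble observe --verdict DROPPED --last 100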

sayboras added a commit to sayboras/cilium that referenced this issue Sep 3, 2020
The default value for these two fields is only 1 second. This PR
updates the values to 7 seconds: 5 (curl connection timeout) + 2
(some buffer).

Relates: cilium#12279
Signed-off-by: Tam Mach <sayboras@yahoo.com>
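
For context, a rough sketch of what that change amounts to if applied ad hoc with kubectl; the deployment name is just one of the connectivity-check workloads, and the real fix lives in the manifests:

# Sketch: raise the exec-probe timeouts from the 1s default to 7s
# (5s curl connection timeout + 2s buffer); deployment name is illustrative.
kubectl patch deployment pod-to-a-allowed-cnp --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/timeoutSeconds", "value": 7},
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/timeoutSeconds", "value": 7}
]'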
@sayboras
Member Author

sayboras commented Sep 4, 2020

@pchaigno The above logs are very useful. I just sent a PR to increase the timeout, and I plan to dump Hubble details as well.

Just curious whether we can stream bugtool output to stdout. We recently got this feature merged in #12837.

@pchaigno
Member

pchaigno commented Sep 4, 2020

Just curious whether we can stream bugtool output to stdout. We recently got this feature merged in #12837.

Yep, that could be useful 👍 We'll need to separate it from the other output because bugtool prints quite a lot of text.

aanm pushed a commit that referenced this issue Sep 4, 2020
The default value for these two fields is only 1 second. This PR
updates the values to 7 seconds: 5 (curl connection timeout) + 2
(some buffer).

Relates: #12279
Signed-off-by: Tam Mach <sayboras@yahoo.com>
@sayboras
Member Author

sayboras commented Sep 5, 2020

Yep, that could be useful. We'll need to separate it from the other output because bugtool prints quite a lot of text.

That's a good point, so I decided to make it a downloadable artifact (e.g. bugtool.tar) in the GitHub action.
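
A rough sketch of the collection step; the label selector, bugtool flags, and paths are assumptions about the kind setup:

# Sketch: run bugtool in every Cilium agent pod and bundle the results
# into bugtool.tar, to be uploaded with actions/upload-artifact.
mkdir -p bugtool
for pod in $(kubectl -n kube-system get pods -l k8s-app=cilium \
    -o jsonpath='{.items[*].metadata.name}'); do
  kubectl -n kube-system exec "$pod" -- cilium-bugtool --tmp /tmp/bugtool
  kubectl -n kube-system cp "$pod:/tmp/bugtool" "bugtool/$pod"
done
tar -cf bugtool.tar bugtool/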

@pchaigno
Member

pchaigno commented Sep 9, 2020

Crossposting cilium-sysdump from #13090 here as well.
cilium-sysdump-failed.zip

@sayboras
Member Author

I can't find anything wrong here, so I suspect it is not happening anymore.

To confirm this hypothesis, I set up a scheduled job in my forked repo: 25 runs without any failure so far. https://github.com/sayboras/cilium/actions?query=workflow%3A%22Smoke+Test+with+IPv6%22+event%3Aschedule

@pchaigno
Member

Ok, let's close then. If someone hits this again, they/we can reopen.

CI Force automation moved this from To Do (1.8, 1.9 - Rare Flakes) to Fixed / Done Oct 28, 2020
@aditighag
Member

aditighag commented Jan 22, 2021

I hit this on PR #14679 yesterday, where the smoke-test-ipv6 failed. Attaching the sysdump.
We don't save kubectl describe pod output to the collected sysdump, but I suspect it might be the same issue as #12279 (comment)

How do we get history of this GH workflow?

cilium-sysdump-out.zip.zip

@sayboras
Member Author

I usually check this page for the history. Yup, I can see a few occurrences of this failure there 🤔
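
For anyone else digging, something like this pulls the same history locally; it assumes the GitHub CLI is installed and that the workflow name matches the one under .github/workflows:

# Sketch: list recent runs of the smoke-test workflow with the GitHub CLI.
gh run list --repo cilium/cilium --workflow "Smoke test" --limit 50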

@pchaigno pchaigno reopened this Jan 22, 2021
CI Force automation moved this from Fixed / Done to In Progress (Cilium) Jan 22, 2021
@pchaigno
Member

Looks like pod pod-to-a-allowed-cnp-7d7c8f9f9b-v4ntv was in CrashLoopBackOff, but I'm failing to see why. Maybe we should also include the output of kubectl describe pods -A in the artifact.
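
Something along these lines in the failure step would cover it (a sketch; the file names are arbitrary):

# Sketch: capture pod descriptions and recent events alongside the sysdump.
kubectl describe pods -A > pods-describe.txt
kubectl get events -A --sort-by=.lastTimestamp > events.txt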

@aditighag
Member

I usually check this page for the history. Yup, I can see a few occurrences of this failure there 🤔

Surprisingly, I didn't see the failure from my PR.

@joestringer
Member

I suspect we're underreporting this failure. Looking back through the workflow history, I see this has hit #17219 (link cc @joamaki ), #16926 (link cc @errordeveloper ), #17241 (link cc @brb ), #17210 (link cc @jrajahalme ), and possibly others, and that's only going back 8 days.

@joestringer
Member

joestringer commented Sep 1, 2021

Here's an example failure from the v1.10 backports PR #17256 (link):
cilium-sysdump-out.zip.zip

I haven't dug too far yet, but this is a classic sign that the connectivity check is simply failing because the pods never become ready. That is effectively how the test works: it deploys the pods and relies on readiness checks to validate the different paths between pods:

$ cat pods*txt
NAMESPACE            NAME                                                  READY   STATUS    RESTARTS   AGE   IP                      NODE                          NOMINATED NODE   READINESS GATES
default              echo-a-56f9849b6b-zdtsw                               1/1     Running   0          40m   fd00:10:244:0:1::ce01   chart-testing-worker          <none>           <none>
default              echo-b-5898f8c6c4-6blsv                               1/1     Running   0          40m   fd00:10:244:0:1::f245   chart-testing-worker          <none>           <none>
default              echo-b-host-7b949c7b9f-vr5hr                          1/1     Running   0          40m   fc00:f853:ccd:e793::3   chart-testing-worker          <none>           <none>
default              host-to-b-multi-node-clusterip-6c4db976b5-l56mh       0/1     Running   15         40m   fc00:f853:ccd:e793::2   chart-testing-control-plane   <none>           <none>
default              host-to-b-multi-node-headless-68d54bb4b7-mgtn4        0/1     Running   15         40m   fc00:f853:ccd:e793::2   chart-testing-control-plane   <none>           <none>
default              pod-to-a-9dc6d768c-zxqbj                              0/1     Running   15         40m   fd00:10:244:0:1::f96f   chart-testing-worker          <none>           <none>
default              pod-to-a-allowed-cnp-5ff69578c5-zmgdk                 0/1     Running   15         40m   fd00:10:244:0:1::dbc3   chart-testing-worker          <none>           <none>
default              pod-to-a-denied-cnp-64d7765ddf-t5p7l                  1/1     Running   0          40m   fd00:10:244:0:1::7da8   chart-testing-worker          <none>           <none>
default              pod-to-b-intra-node-nodeport-5c8cd69ff5-rdfpg         0/1     Running   15         40m   fd00:10:244:0:1::66b0   chart-testing-worker          <none>           <none>
default              pod-to-b-multi-node-clusterip-7b5854d46c-2ssg8        0/1     Running   15         40m   fd00:10:244::af0e       chart-testing-control-plane   <none>           <none>
default              pod-to-b-multi-node-headless-77b698d8f5-7rfb5         0/1     Running   15         40m   fd00:10:244::7a98       chart-testing-control-plane   <none>           <none>
default              pod-to-b-multi-node-nodeport-84fdc88d9f-z48sg         0/1     Running   15         40m   fd00:10:244::3fc1       chart-testing-control-plane   <none>           <none>
kube-system          cilium-node-init-p6l76                                1/1     Running   0          41m   fc00:f853:ccd:e793::3   chart-testing-worker          <none>           <none>
kube-system          cilium-node-init-spm8f                                1/1     Running   0          41m   fc00:f853:ccd:e793::2   chart-testing-control-plane   <none>           <none>
kube-system          cilium-nztnq                                          1/1     Running   0          41m   fc00:f853:ccd:e793::3   chart-testing-worker          <none>           <none>
kube-system          cilium-operator-58ffb454b6-bmjkw                      1/1     Running   0          41m   fc00:f853:ccd:e793::3   chart-testing-worker          <none>           <none>
kube-system          cilium-operator-58ffb454b6-ggtpw                      1/1     Running   0          41m   fc00:f853:ccd:e793::2   chart-testing-control-plane   <none>           <none>
kube-system          cilium-tdx65                                          1/1     Running   0          41m   fc00:f853:ccd:e793::2   chart-testing-control-plane   <none>           <none>
kube-system          coredns-f9fd979d6-hlb8c                               1/1     Running   0          44m   fd00:10:244:0:1::dded   chart-testing-worker          <none>           <none>
kube-system          coredns-f9fd979d6-sng47                               1/1     Running   0          44m   fd00:10:244::6520       chart-testing-control-plane   <none>           <none>
kube-system          etcd-chart-testing-control-plane                      1/1     Running   0          44m   fc00:f853:ccd:e793::2   chart-testing-control-plane   <none>           <none>
kube-system          kube-apiserver-chart-testing-control-plane            1/1     Running   0          44m   fc00:f853:ccd:e793::2   chart-testing-control-plane   <none>           <none>
kube-system          kube-controller-manager-chart-testing-control-plane   1/1     Running   0          44m   fc00:f853:ccd:e793::2   chart-testing-control-plane   <none>           <none>
kube-system          kube-proxy-8x8vd                                      1/1     Running   0          44m   fc00:f853:ccd:e793::3   chart-testing-worker          <none>           <none>
kube-system          kube-proxy-9knzl                                      1/1     Running   0          44m   fc00:f853:ccd:e793::2   chart-testing-control-plane   <none>           <none>
kube-system          kube-scheduler-chart-testing-control-plane            1/1     Running   0          44m   fc00:f853:ccd:e793::2   chart-testing-control-plane   <none>           <none>
local-path-storage   local-path-provisioner-78776bfc44-rg4t9               1/1     Running   0          44m   fd00:10:244:0:1::e8d5   chart-testing-worker          <none>           <none>

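For reference, the "Timed out waiting for the condition" error in the issue title is what kubectl wait prints when this readiness gate fails; the smoke test boils down to roughly the following (a sketch; the timeout value is an assumption):

# Sketch: deploy the connectivity check and gate on pod readiness,
# which is where "timed out waiting for the condition" comes from.
kubectl apply -f examples/kubernetes/connectivity-check/connectivity-check.yaml
kubectl wait pods --all --for=condition=Ready --timeout=300s
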
@sayboras
Member Author

No failures for the last 2 weeks except legitimate ones. As discussed offline, we can close this and re-open if it occurs again in the future.

CI Force automation moved this from In Progress (Cilium) to Fixed / Done Jan 19, 2022