From 8a03d54da11a1c15302f4f90de8bf1e539252784 Mon Sep 17 00:00:00 2001
From: Joe Stringer
Date: Wed, 29 Jul 2020 15:27:57 -0700
Subject: [PATCH] Extend connectivity-check for HTTP policy validation via CUE
 (#12599)

* connectivity-check: Add 'make clean' support

Factor out the list of targets for all YAMLs so it can be reused by a
new phony target, 'clean'.

Signed-off-by: Joe Stringer

* connectivity-check: Introduce cuelang framework

CUE (https://cuelang.org/) is a data constraint language defined as a
superset of JSON which aims to "simplify tasks involving defining and
using data". In the context of the connectivity check YAMLs, CUE is
useful because it allows us to "evaporate" the boilerplate necessary to
define Kubernetes YAMLs for Deployments, Services and
CiliumNetworkPolicies, and allows developers to concisely specify the
various permutations for connectivity checks.

Why should we use it?

* It's more concise: one template definition, multiple reuses. This is
  useful for introducing new connectivity checks, as upcoming commits
  will demonstrate, because the developer doesn't need to perform the
  tedious and error-prone process of copying and modifying the YAMLs to
  implement the various permutations of a check. Furthermore, this
  helps reviewers, who no longer have to read through swathes of YAMLs
  but can instead focus on the diffs in the templating and compare them
  against the existing data definitions.
* Consolidated constant declarations. When a core change needs to be
  made to something like the readinessProbe for probes that expect a
  success or failure, we can update one definition in the main CUE file
  and all YAMLs will subsequently be generated with this change in
  mind. During the process of preparing these changes, I noticed
  apparently unintentional inconsistencies between existing YAMLs,
  where some YAMLs had been improved with better timeout behaviour or
  error rendering, but other YAMLs were left out.
* The data is more structured. Upcoming commits will introduce simple
  CLI tools that allow matching on different classes of connectivity
  checks to generate the corresponding YAMLs. Previously we depended
  upon file naming schemes and Makefile globbing magic to implement
  this, which quickly reaches its limits when deciding which checks
  should be selected for a specific scenario.

What are the dangers?

* It's relatively immature. At the current version, v0.2.2, it is
  subject to language changes. Upcoming commits will pin the CLI tool
  usage to a Docker container derived from this version to ensure
  compatibility.
* One more language in the tree to understand, review and interact
  with. Mitigating circumstances: this language comes out of the Golang
  community and as such shares some commonalities; furthermore, it is
  beginning to be used in other Kubernetes projects, so there is some
  broader community alignment.
* Its power allows you to hide as much or as little complexity as you
  want. It's tricky to strike a fine balance between explicitly
  declaring (and duplicating) relevant fields in the local file vs.
  hiding convenient templating logic in common files. For example, see
  defaults.cue, which automatically derives connectivity check
  destinations from object name declarations matching regexes of
  "pod-to-X", and applies affinity/anti-affinity via matches on
  "intra-host" or "multi-host".
* All declarations are additive, i.e. there is no ordering based upon
  the layout of the code; instead, data dependencies are determined
  from the declarations, and all data is arranged into a lattice to
  determine the evaluation ordering[0]. This can be counter-intuitive
  to reason about for the uninitiated.

The general approach used in this commit was to `cue import` various
existing YAML files to generate JSON equivalents, then iteratively
combine and consolidate the existing definitions using the language
constructs provided by CUE.
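To illustrate the kind of name-based derivation described above, a
simplified sketch in the spirit of defaults.cue (the field names here
are hypothetical, not the actual in-tree definitions) could look like:

```cue
// Simplified, hypothetical sketch only; the real defaults.cue differs.
// A check named "pod-to-b-multi-host" would derive its probe
// destination ("echo-b") and its scheduling behaviour from the name.
_check: {
	name: string

	// Derive the destination server from "pod-to-X" in the name.
	if name =~ "^pod-to-a" {to: "echo-a"}
	if name =~ "^pod-to-b" {to: "echo-b"}

	// Schedule next to the server for intra-host checks, and away
	// from it for multi-host checks.
	if name =~ "intra-host" {affinity: "podAffinity"}
	if name =~ "multi-host" {affinity: "podAntiAffinity"}
}
```

Because CUE fields are unified rather than assigned, such defaults
compose with any explicit fields a specific check declares.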
CUE also provides mechanisms to generate schemas and autogenerate the
structures used here directly from API definitions (e.g. from the k8s
source or the Cilium tree), however this area was not explored in this
PR yet. While this doesn't take advantage of one major aspect of the
language, upcoming commits will demonstrate the way that these changes
were validated without the use of standardized schemas from the
underlying Kubernetes resource definitions. (TL;DR: `kubectl diff ...`
with kubectl validation on a live cluster.) This was sufficient to
extend the connectivity checks and does not preclude future exploration
of the use of schemas for these definitions.

This commit introduces usage of CUE into the tree in a relatively
minimal way, which was useful for my goals of extending the
connectivity checks. If we find that it is useful and powerful, we may
consider whether to extend its usage to other areas of the code (such
as test manifest generation).

[0] https://cuelang.org/docs/concepts/logic/#the-value-lattice

Signed-off-by: Joe Stringer

* connectivity-check: Add cue CLI tools

Add some basic tooling around connectivity-check YAML generation:

  $ cue cmd help
  List connectivity-check resources specified in this directory

  Usage:
    cue [-t component=] [-t name=] [-t topology=]

  Available Commands:
    dump  Generate connectivity-check YAMLs from the cuelang scripts
    ls    List connectivity-check resources specified in this directory

List available connectivity-check components:

  $ cue cmd ls
  KIND                 COMPONENT       TOPOLOGY    NAME
  Service              network-check   any         echo-a
  Service              services-check  any         echo-b
  Service              services-check  any         echo-b-headless
  Service              services-check  any         echo-b-host-headless
  Deployment           network-check   any         echo-a
  Deployment           services-check  any         echo-b
  Deployment           services-check  any         echo-b-host
  Deployment           network-check   any         pod-to-a
  Deployment           network-check   any         pod-to-external-1111
  Deployment           policy-check    any         pod-to-a-allowed-cnp
  Deployment           policy-check    any         pod-to-a-denied-cnp
  Deployment           policy-check    any         pod-to-external-fqdn-allow-google-cnp
  Deployment           services-check  multi-node  pod-to-b-multi-node-clusterip
  Deployment           services-check  multi-node  pod-to-b-multi-node-headless
  Deployment           services-check  intra-node  pod-to-b-intra-node-clusterip
  Deployment           services-check  intra-node  pod-to-b-intra-node-headless
  Deployment           services-check  multi-node  host-to-b-multi-node-clusterip
  Deployment           services-check  multi-node  host-to-b-multi-node-headless
  CiliumNetworkPolicy  policy-check    any         pod-to-a-allowed-cnp
  CiliumNetworkPolicy  policy-check    any         pod-to-a-denied-cnp
  CiliumNetworkPolicy  policy-check    any         pod-to-external-fqdn-allow-google-cnp

These can be filtered by component, topology or name. For example:

  $ cue cmd -t component=network ls
  KIND        COMPONENT      TOPOLOGY  NAME
  Service     network-check  any       echo-a
  Deployment  network-check  any       echo-a
  Deployment  network-check  any       pod-to-a
  Deployment  network-check  any       pod-to-external-1111

Finally, to gather the (filtered) YAMLs for the specified resources:

  $ cue cmd dump | head -n 20
  metadata:
    name: echo-a
    labels:
      name: echo-a
      topology: any
      component: network-check
  spec:
    ports:
    - port: 80
    selector:
      name: echo-a
    type: ClusterIP
  apiVersion: v1
  kind: Service
  ---
  ...

Or, with an upcoming commit, you can just use the Makefile, which now
depends on the cuelang/cue:v0.2.2 Docker image:

  $ make connectivity-check.yaml

Signed-off-by: Joe Stringer

* connectivity-check: Support generating YAMLs via cue

Replace the existing YAML generation, which concatenated individual
YAML declarations for each service, with generation of the YAMLs from
the CUE definitions. Three new targets will assist in validating the
migration from the existing definitions over to CUE:

* make generate_all
  * For each object declared in CUE, generate a file corresponding to
    that definition. For most of the existing YAMLs, this will
    overwrite the copy of the YAML in the tree. This allows manual
    inspection of individual YAMLs, though the 'inspect' approach is
    broadly more useful for evaluating the overall diff.
* make deploy
  * Deploy the hostport connectivity checks YAML into an existing
    cluster.
* make inspect
  * Generate the YAML file for all connectivity checks, then use
    kubectl to diff these newly generated definitions against the
    running cluster (assuming it was deployed via 'make deploy').

This commit is purely the Makefile changes, for easier review and
inspection. Upcoming commits will use these targets to demonstrate that
there is no meaningful change in the generated YAMLs compared with the
existing YAMLs in the tree. In particular, `make inspect` can be used
iteratively: initially deploy the current version of the YAMLs from the
tree, then make changes to the CUE files and inspect each time a change
is made. When the diff in the cluster represents the changes that the
developer intends to make, the developer can commit the changes to the
CUE files and regenerate the tree versions of the YAMLs.

Signed-off-by: Joe Stringer

* connectivity-check: Replace YAMLs with cue-generated YAMLs

Prior commits introduced CUE definitions that are equivalent to these
YAML files, so we can now:

* Remove the individual declarations which were previously the source
  of truth for the connectivity checks.
* Update the overall connectivity-check YAMLs to reflect the minor
  changes that the CUE definitions represent.

To validate this, heavy use was made of `make inspect`. As described in
the prior commit message where this target was introduced, it allows
diffing the latest CUE-based YAML definitions against a running copy of
the YAMLs in a cluster. There are a few meaningful changes in this
commit which are hard to assess directly from the git diff, but are
easier to see using `make inspect`:

* All containers are converted to use readinessProbe and not
  livenessProbe.
* All readiness probes now specify a --connect-timeout of 5s.
* Readiness probes access `/public` or `/private` per the underlying
  container HTTP server paths, rather than just accessing `/`.
* DNS allow policies are converted to consistently allow both TCP- and
  UDP-based DNS.
* Container names are derived from pod names.
* The new YAMLs declare additional labels for all resources, such as
  'component' and 'topology'.

Signed-off-by: Joe Stringer

* connectivity-check: Introduce proxy checks

These new checks configure various L7 proxy paths to validate
connectivity via L7 proxies, in the following dimensions:

- Apply policy on egress; ingress; or both (proxy-to-proxy)
- Intra-node / multi-node
- Allow / deny

Note that proxy-to-proxy always configures egress allow policy to
ensure that the traffic goes via the proxy, and in the drop case the
requests are only rejected at the destination. This is because applying
egress deny at the source would prevent proxy-to-proxy connectivity,
meaning the test would be equivalent to the egress-only reject policy
case. This way, we ensure that the path via the egress proxy to the
destination is tested in the reject case.

These are implemented partially through a new 'echo-c' pod which always
has ingress policy applied to allow GET requests to '/public'.
Depending on whether ingress policy is needed to check the particular
permutation, the new checks may connect to 'echo-a' or 'echo-c'. The
checks are implemented by adding pods for each permutation of policy
apply point and topology, then adding allow/deny containers within each
of those pods to test the allow and deny cases.
'connectivity-check-proxy.yaml' includes all of the above.

Finally, the omissions: this commit does not attempt to address
variations in datapath configuration. This includes IPv4 vs. IPv6;
tunnel vs. direct routing; endpoint config; kube-proxy vs.
kube-proxy-free; encryption. These are left up to the cluster operator
configuring Cilium in specific modes and subsequently deploying these
YAMLs.
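The permutation structure described above lends itself to CUE
comprehensions. As a rough, hypothetical sketch (the names and fields
below are illustrative, not the actual contents of proxy.cue):

```cue
// Illustrative only: expand one deployment per permutation of policy
// apply point and topology, yielding names such as
// "pod-to-proxy-ingress-intra-node".
_policyPoints: ["ingress", "egress", "proxy-to-proxy"]
_topologies: ["intra-node", "multi-node"]

deployment: {
	for p in _policyPoints for t in _topologies {
		"pod-to-proxy-\(p)-\(t)": {
			labels: {
				component: "proxy-check"
				topology:  t
			}
		}
	}
}
```

Adding a new dimension (such as allow vs. deny) then only requires
extending one list rather than hand-copying another set of YAMLs.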
Signed-off-by: Joe Stringer

* connectivity-check: Minor naming fixups

Make some of these resource names a bit more consistent.

Signed-off-by: Joe Stringer

* connectivity-check: Add quarantine label to metadata

This new label will be used during YAML generation to ensure that
resources which we are still working on fixes for are kept in a
separate category, apart from the regular connectivity checks. This
allows us to check them in and distribute them without causing CI to
instantly fail.

Signed-off-by: Joe Stringer

* connectivity-check: Add hostport + proxy checks

Introduce checks for egress proxy policy when accessing a hostport on a
remote node. These are added as part of component=hostport-check to
ensure they are not pulled in when running connectivity checks in
environments without hostport support. Additionally, these new tests
are quarantined for now as they are known to fail in some environments.

Signed-off-by: Joe Stringer

* connectivity-check: Expand readme for latest checks

Signed-off-by: Joe Stringer

* connectivity-check: Re-add liveness probes

It appears that some of these checks require liveness probes rather
than readiness probes to pass on the GitHub Actions smoke-test, so
ensure all containers are checked with both.

Signed-off-by: Joe Stringer

* smoke-test: Improve state gathering upon failure

Commit bb91571ea497 ("smoke-test: Print pod/deploy state on failure")
attempted to improve the information available during a failure from
the smoke-tests, but only added it to the quick-install test and not
the conformance test. Add the same output to the conformance test as
well so we can more easily debug failures there.

Signed-off-by: Joe Stringer

* smoke-test: Disable bpf masquerading

In the smoke test, we are relying on kube-proxy for service
connectivity, so it doesn't make sense to enable BPF masquerading. In
fact, this causes issues for connectivity from a node to a pod on a
remote node via ClusterIP (see related issue).
For the moment, disable BPF masquerading while we figure out the
longer-term solution to that issue.

Related: #12699

Signed-off-by: Joe Stringer

* docs: Update connectivity-check examples

Signed-off-by: Joe Stringer
---
 .github/workflows/smoke-test.yaml             |   21 +-
 .../k8s-install-connectivity-test.rst         |   33 +-
 Documentation/gettingstarted/kube-router.rst  |   20 +-
 Documentation/troubleshooting.rst             |   31 +-
 .../kubernetes/connectivity-check/Makefile    |   95 +-
 .../kubernetes/connectivity-check/README.md   |   32 +
 .../connectivity-check-hostport.yaml          | 1861 +++++++++++++----
 .../connectivity-check-proxy.yaml             | 1034 +++++++++
 .../connectivity-check-quarantine.yaml        |  839 ++++++++
 .../connectivity-check-single-node.yaml       |  732 ++++---
 .../connectivity-check.yaml                   |  954 +++++----
 .../connectivity-check/cue.mod/module.cue     |    1 +
 .../connectivity-check/defaults.cue           |   63 +
 .../connectivity-check/dump_tool.cue          |   15 +
 .../kubernetes/connectivity-check/echo-a.yaml |   32 -
 .../kubernetes/connectivity-check/echo-b.yaml |   97 -
 .../connectivity-check/echo-servers.cue       |   75 +
 .../host-to-b-multi-node-clusterip.yaml       |   34 -
 .../host-to-b-multi-node-headless.yaml        |   34 -
 .../kubernetes/connectivity-check/ls_tool.cue |   20 +
 .../connectivity-check/main_tool.cue          |   95 +
 .../kubernetes/connectivity-check/network.cue |   14 +
 .../connectivity-check/pod-to-a-allowed.yaml  |   58 -
 .../connectivity-check/pod-to-a-denied.yaml   |   53 -
 .../connectivity-check/pod-to-a.yaml          |   22 -
 .../pod-to-b-intra-node-hostport.yaml         |   38 -
 .../pod-to-b-intra-node-nodeport.yaml         |   38 -
 .../pod-to-b-intra-node.yaml                  |   32 -
 .../pod-to-b-multi-node-clusterip.yaml        |   32 -
 .../pod-to-b-multi-node-headless.yaml         |   32 -
 .../pod-to-b-multi-node-hostport.yaml         |   38 -
 .../pod-to-b-multi-node-nodeport.yaml         |   38 -
 .../pod-to-external-1111.yaml                 |   25 -
 .../pod-to-external-fqdn-allow-google.yaml    |   59 -
 .../kubernetes/connectivity-check/policy.cue  |   43 +
 .../kubernetes/connectivity-check/proxy.cue   |   96 +
 .../connectivity-check/resources.cue          |  289 +++
 .../connectivity-check/services.cue           |   38 +
 38 files changed, 5267 insertions(+), 1796 deletions(-)
 create mode 100644 examples/kubernetes/connectivity-check/connectivity-check-proxy.yaml
 create mode 100644 examples/kubernetes/connectivity-check/connectivity-check-quarantine.yaml
 create mode 100644 examples/kubernetes/connectivity-check/cue.mod/module.cue
 create mode 100644 examples/kubernetes/connectivity-check/defaults.cue
 create mode 100644 examples/kubernetes/connectivity-check/dump_tool.cue
 delete mode 100644 examples/kubernetes/connectivity-check/echo-a.yaml
 delete mode 100644 examples/kubernetes/connectivity-check/echo-b.yaml
 create mode 100644 examples/kubernetes/connectivity-check/echo-servers.cue
 delete mode 100644 examples/kubernetes/connectivity-check/host-to-b-multi-node-clusterip.yaml
 delete mode 100644 examples/kubernetes/connectivity-check/host-to-b-multi-node-headless.yaml
 create mode 100644 examples/kubernetes/connectivity-check/ls_tool.cue
 create mode 100644 examples/kubernetes/connectivity-check/main_tool.cue
 create mode 100644 examples/kubernetes/connectivity-check/network.cue
 delete mode 100644 examples/kubernetes/connectivity-check/pod-to-a-allowed.yaml
 delete mode 100644 examples/kubernetes/connectivity-check/pod-to-a-denied.yaml
 delete mode 100644 examples/kubernetes/connectivity-check/pod-to-a.yaml
 delete mode 100644 examples/kubernetes/connectivity-check/pod-to-b-intra-node-hostport.yaml
 delete mode 100644 examples/kubernetes/connectivity-check/pod-to-b-intra-node-nodeport.yaml
 delete mode 100644 examples/kubernetes/connectivity-check/pod-to-b-intra-node.yaml
 delete mode 100644 examples/kubernetes/connectivity-check/pod-to-b-multi-node-clusterip.yaml
 delete mode 100644 examples/kubernetes/connectivity-check/pod-to-b-multi-node-headless.yaml
 delete mode 100644 examples/kubernetes/connectivity-check/pod-to-b-multi-node-hostport.yaml
 delete mode 100644 examples/kubernetes/connectivity-check/pod-to-b-multi-node-nodeport.yaml
 delete mode 100644
examples/kubernetes/connectivity-check/pod-to-external-1111.yaml delete mode 100644 examples/kubernetes/connectivity-check/pod-to-external-fqdn-allow-google.yaml create mode 100644 examples/kubernetes/connectivity-check/policy.cue create mode 100644 examples/kubernetes/connectivity-check/proxy.cue create mode 100644 examples/kubernetes/connectivity-check/resources.cue create mode 100644 examples/kubernetes/connectivity-check/services.cue diff --git a/.github/workflows/smoke-test.yaml b/.github/workflows/smoke-test.yaml index fa5f45a0398c..92bf28d449c3 100644 --- a/.github/workflows/smoke-test.yaml +++ b/.github/workflows/smoke-test.yaml @@ -85,14 +85,7 @@ jobs: run: | kubectl get pods -o wide kubectl get deploy -o wide - kubectl describe service echo-a - kubectl logs service/echo-a --all-containers --since=$LOG_TIME - kubectl describe service echo-b - kubectl logs service/echo-b --all-containers --since=$LOG_TIME - kubectl describe service echo-b-headless - kubectl logs service/echo-b-headless --all-containers --since=$LOG_TIME - kubectl describe service echo-b-host-headless - kubectl logs service/echo-b-host-headless --all-containers --since=$LOG_TIME + for svc in $(make -C examples/kubernetes/connectivity-check/ list | grep Service | awk '{ print $4 }'); do kubectl describe service $svc; kubectl logs service/$svc --all-containers --since=$LOG_TIME; done - name: Dump hubble related logs and events if: ${{ failure() && matrix.target.name == 'experimental-install' }} @@ -139,6 +132,7 @@ jobs: --set global.externalIPs.enabled=true \ --set global.nodePort.enabled=true \ --set global.hostPort.enabled=true \ + --set config.bpfMasquerade=false \ --set config.ipam=kubernetes \ --set global.pullPolicy=Never @@ -165,11 +159,6 @@ jobs: env: LOG_TIME: 30m run: | - kubectl describe service echo-a - kubectl logs service/echo-a --all-containers --since=$LOG_TIME - kubectl describe service echo-b - kubectl logs service/echo-b --all-containers --since=$LOG_TIME - kubectl describe 
service echo-b-headless - kubectl logs service/echo-b-headless --all-containers --since=$LOG_TIME - kubectl describe service echo-b-host-headless - kubectl logs service/echo-b-host-headless --all-containers --since=$LOG_TIME + kubectl get pods -o wide + kubectl get deploy -o wide + for svc in $(make -C examples/kubernetes/connectivity-check/ list | grep Service | awk '{ print $4 }'); do kubectl describe service $svc; kubectl logs service/$svc --all-containers --since=$LOG_TIME; done diff --git a/Documentation/gettingstarted/k8s-install-connectivity-test.rst b/Documentation/gettingstarted/k8s-install-connectivity-test.rst index ae4cce156af1..43ed6310b2ab 100644 --- a/Documentation/gettingstarted/k8s-install-connectivity-test.rst +++ b/Documentation/gettingstarted/k8s-install-connectivity-test.rst @@ -13,23 +13,22 @@ service load-balancing and various network policy combinations. The pod name indicates the connectivity variant and the readiness and liveness gate indicates success or failure of the test: -.. code:: bash - - NAME READY STATUS RESTARTS AGE - echo-a-5995597649-f5d5g 1/1 Running 0 4m51s - echo-b-54c9bb5f5c-p6lxf 1/1 Running 0 4m50s - echo-b-host-67446447f7-chvsp 1/1 Running 0 4m50s - host-to-b-multi-node-clusterip-78f9869d75-l8cf8 1/1 Running 0 4m50s - host-to-b-multi-node-headless-798949bd5f-vvfff 1/1 Running 0 4m50s - pod-to-a-59b5fcb7f6-gq4hd 1/1 Running 0 4m50s - pod-to-a-allowed-cnp-55f885bf8b-5lxzz 1/1 Running 0 4m50s - pod-to-a-external-1111-7ff666fd8-v5kqb 1/1 Running 0 4m48s - pod-to-a-l3-denied-cnp-64c6c75c5d-xmqhw 1/1 Running 0 4m50s - pod-to-b-intra-node-845f955cdc-5nfrt 1/1 Running 0 4m49s - pod-to-b-multi-node-clusterip-666594b445-bsn4j 1/1 Running 0 4m49s - pod-to-b-multi-node-headless-746f84dff5-prk4w 1/1 Running 0 4m49s - pod-to-b-multi-node-nodeport-7cb9c6cb8b-ksm4h 1/1 Running 0 4m49s - pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-tg9dh 1/1 Running 0 4m48s +.. 
code:: shell-session + + $ kubectl get pods -n cilium-test + NAME READY STATUS RESTARTS AGE + echo-a-6788c799fd-42qxx 1/1 Running 0 69s + echo-b-59757679d4-pjtdl 1/1 Running 0 69s + echo-b-host-f86bd784d-wnh4v 1/1 Running 0 68s + host-to-b-multi-node-clusterip-585db65b4d-x74nz 1/1 Running 0 68s + host-to-b-multi-node-headless-77c64bc7d8-kgf8p 1/1 Running 0 67s + pod-to-a-allowed-cnp-87b5895c8-bfw4x 1/1 Running 0 68s + pod-to-a-b76ddb6b4-2v4kb 1/1 Running 0 68s + pod-to-a-denied-cnp-677d9f567b-kkjp4 1/1 Running 0 68s + pod-to-b-multi-node-clusterip-f7655dbc8-h5bwk 1/1 Running 0 68s + pod-to-b-multi-node-headless-5fd98b9648-5bjj8 1/1 Running 0 68s + pod-to-external-1111-7489c7c46d-jhtkr 1/1 Running 0 68s + pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-97p75 1/1 Running 0 68s .. note:: diff --git a/Documentation/gettingstarted/kube-router.rst b/Documentation/gettingstarted/kube-router.rst index 945a6f068821..1b951beb0929 100644 --- a/Documentation/gettingstarted/kube-router.rst +++ b/Documentation/gettingstarted/kube-router.rst @@ -137,22 +137,4 @@ installed: * ``10.2.2.0/24 dev tun-172011760 proto 17 src 172.0.50.227`` * ``10.2.3.0/24 dev tun-1720186231 proto 17 src 172.0.50.227`` -You can test connectivity by deploying the following connectivity checker pods: - -.. 
parsed-literal:: - - $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes/connectivity-check/connectivity-check.yaml - $ kubectl get pods - NAME READY STATUS RESTARTS AGE - echo-a-dd67f6b4b-s62jl 1/1 Running 0 2m15s - echo-b-55d8dbd74f-t8jwk 1/1 Running 0 2m15s - host-to-b-multi-node-clusterip-686f99995d-tn6kq 1/1 Running 0 2m15s - host-to-b-multi-node-headless-bdbc856d-9zv4x 1/1 Running 0 2m15s - pod-to-a-766584ffff-wh2s8 1/1 Running 0 2m15s - pod-to-a-allowed-cnp-5899c44899-f9tdv 1/1 Running 0 2m15s - pod-to-a-external-1111-55c488465-7sd55 1/1 Running 0 2m14s - pod-to-a-l3-denied-cnp-856998c977-j9dhs 1/1 Running 0 2m15s - pod-to-b-intra-node-7b6cbc6c56-hqz7r 1/1 Running 0 2m15s - pod-to-b-multi-node-clusterip-77c8446b6d-qc8ch 1/1 Running 0 2m15s - pod-to-b-multi-node-headless-854b65674d-9zlp8 1/1 Running 0 2m15s - pod-to-external-fqdn-allow-google-cnp-bb9597947-bc85q 1/1 Running 0 2m14s +.. include:: k8s-install-connectivity-test.rst diff --git a/Documentation/troubleshooting.rst b/Documentation/troubleshooting.rst index 04c135a7f764..8ab2044757b4 100644 --- a/Documentation/troubleshooting.rst +++ b/Documentation/troubleshooting.rst @@ -357,22 +357,23 @@ test: .. _test: \ |SCM_WEB|\/examples/kubernetes/connectivity-check/connectivity-check.yaml -.. code:: bash +.. 
code:: shell-session + + $ kubectl get pods -n cilium-test + NAME READY STATUS RESTARTS AGE + echo-a-6788c799fd-42qxx 1/1 Running 0 69s + echo-b-59757679d4-pjtdl 1/1 Running 0 69s + echo-b-host-f86bd784d-wnh4v 1/1 Running 0 68s + host-to-b-multi-node-clusterip-585db65b4d-x74nz 1/1 Running 0 68s + host-to-b-multi-node-headless-77c64bc7d8-kgf8p 1/1 Running 0 67s + pod-to-a-allowed-cnp-87b5895c8-bfw4x 1/1 Running 0 68s + pod-to-a-b76ddb6b4-2v4kb 1/1 Running 0 68s + pod-to-a-denied-cnp-677d9f567b-kkjp4 1/1 Running 0 68s + pod-to-b-multi-node-clusterip-f7655dbc8-h5bwk 1/1 Running 0 68s + pod-to-b-multi-node-headless-5fd98b9648-5bjj8 1/1 Running 0 68s + pod-to-external-1111-7489c7c46d-jhtkr 1/1 Running 0 68s + pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-97p75 1/1 Running 0 68s - $ kubectl get pods - NAME READY STATUS RESTARTS AGE - echo-a-9b85dd869-292s2 1/1 Running 0 8m37s - echo-b-c7d9f4686-gdwcs 1/1 Running 0 8m37s - host-to-b-multi-node-clusterip-6d496f7cf9-956jb 1/1 Running 0 8m37s - host-to-b-multi-node-headless-bd589bbcf-jwbh2 1/1 Running 0 8m37s - pod-to-a-7cc4b6c5b8-9jfjb 1/1 Running 0 8m36s - pod-to-a-allowed-cnp-6cc776bb4d-2cszk 1/1 Running 0 8m36s - pod-to-a-external-1111-5c75bd66db-sxfck 1/1 Running 0 8m35s - pod-to-a-l3-denied-cnp-7fdd9975dd-2pp96 1/1 Running 0 8m36s - pod-to-b-intra-node-9d9d4d6f9-qccfs 1/1 Running 0 8m35s - pod-to-b-multi-node-clusterip-5956c84b7c-hwzfg 1/1 Running 0 8m35s - pod-to-b-multi-node-headless-6698899447-xlhfw 1/1 Running 0 8m35s - pod-to-external-fqdn-allow-google-cnp-667649bbf6-v6rf8 1/1 Running 0 8m35s Information about test failures can be determined by describing a failed test pod diff --git a/examples/kubernetes/connectivity-check/Makefile b/examples/kubernetes/connectivity-check/Makefile index 159a3890ace0..7949234bb15c 100644 --- a/examples/kubernetes/connectivity-check/Makefile +++ b/examples/kubernetes/connectivity-check/Makefile @@ -1,35 +1,90 @@ # Copyright 2017-2020 Authors of Cilium # SPDX-License-Identifier: 
Apache-2.0 -# + +include ../../../Makefile.defs +include ../../../Makefile.quiet + DEFAULT_OUT := connectivity-check.yaml HOSTPORT_OUT := connectivity-check-hostport.yaml SINGLE_OUT := connectivity-check-single-node.yaml +PROXY_OUT := connectivity-check-proxy.yaml +QUARANTINE_OUT := connectivity-check-quarantine.yaml + +SERVERS_NAME := echo-a echo-b echo-b-host echo-b-host-headless echo-c echo-c-host echo-c-host-headless +SERVERS_OUT := servers.yaml -SRC := $(wildcard *.yaml) -HOSTPORT_SRC := $(filter-out $(DEFAULT_OUT) $(HOSTPORT_OUT) $(SINGLE_OUT), $(SRC)) -DEFAULT_SRC := $(filter-out $(wildcard *-hostport.yaml),$(HOSTPORT_SRC)) -SINGLE_SRC := $(filter-out $(wildcard *-multi-node*.yaml),$(DEFAULT_SRC)) +SRC := $(wildcard *.cue) +ALL_TARGETS := $(DEFAULT_OUT) $(HOSTPORT_OUT) $(SINGLE_OUT) $(PROXY_OUT) $(QUARANTINE_OUT) +DOCKER_RUN := $(CONTAINER_ENGINE) container run --rm \ + --workdir /src/examples/kubernetes/connectivity-check \ + --volume $(CURDIR)/../../..:/src \ + --user "$(shell id -u):$(shell id -g)" -all: $(DEFAULT_OUT) $(HOSTPORT_OUT) $(SINGLE_OUT) +CUE_IMAGE := "docker.io/cuelang/cue:v0.2.2@sha256:2ea932c771212db140c9996c6fff1d236f3f84ae82add914374cee553b6fc60c" +CUE := $(DOCKER_RUN) $(CUE_IMAGE) -$(DEFAULT_OUT): $(DEFAULT_SRC) +all: $(ALL_TARGETS) + +$(DEFAULT_OUT): $(SRC) @echo '# Automatically generated by Makefile. DO NOT EDIT' > $(DEFAULT_OUT) - for FILE in $(DEFAULT_SRC); do \ - cat $$FILE >> $(DEFAULT_OUT); \ - echo "---" >> $(DEFAULT_OUT); \ - done + $(QUIET)$(CUE) dump > $(DEFAULT_OUT) -$(HOSTPORT_OUT): $(HOSTPORT_SRC) +$(HOSTPORT_OUT): $(SRC) @echo '# Automatically generated by Makefile. DO NOT EDIT' > $(HOSTPORT_OUT) - for FILE in $(HOSTPORT_SRC); do \ - cat $$FILE >> $(HOSTPORT_OUT); \ - echo "---" >> $(HOSTPORT_OUT); \ - done + $(QUIET)$(CUE) -t component=all dump > $(HOSTPORT_OUT) -$(SINGLE_OUT): $(SINGLE_SRC) +$(SINGLE_OUT): $(SRC) @echo '# Automatically generated by Makefile. 
DO NOT EDIT' > $(SINGLE_OUT) - for FILE in $(SINGLE_SRC); do \ - cat $$FILE >> $(SINGLE_OUT); \ - echo "---" >> $(SINGLE_OUT); \ + $(QUIET)$(CUE) -t topology=single-node dump > $(SINGLE_OUT) + +$(PROXY_OUT): $(SRC) $(SERVERS_OUT) + @echo '# Automatically generated by Makefile. DO NOT EDIT' > $(PROXY_OUT) + $(QUIET)$(CUE) -t component=proxy dump > $@ + @cat $(SERVERS_OUT) >> $@ + +$(QUARANTINE_OUT): $(SRC) $(SERVERS_OUT) + @echo '# Automatically generated by Makefile. DO NOT EDIT' > $(QUARANTINE_OUT) + $(QUIET)$(CUE) -t component=all -t quarantine=true dump > $@ + @cat $(SERVERS_OUT) >> $@ + +clean: + @rm -f $(ALL_TARGETS) + @rm -f *.new + @rm -f *.diff + @rm -f *.yaml + +deploy: + $(QUIET)kubectl apply -f connectivity-check-hostport.yaml + +eval: + $(QUIET)$(CUE) eval -c ./... + +# To easier inspect individual yamls for specific checks, generate all YAMLs from *.cue +generate_all: $(SRC) + # TODO: Don't run docker for every command + @for deployment in $(shell $(CUE) -t component=all ls | tail -n+2 | awk '{ print $$4 }' | sort | uniq); do \ + $(CUE) -t component=all -t name=$$deployment dump > $$deployment.yaml; \ done + +inspect: + @echo "Comparing latest cue declarations against cluster deployment from "make deploy"..." + $(QUIET)$(CUE) -t component=all dump > connectivity-check-hostport.yaml.new + -@kubectl diff -f connectivity-check-hostport.yaml.new > connectivity-check-hostport.yaml.diff + @cat connectivity-check-hostport.yaml.diff + @echo + @echo "The full diff is available in connectivity-check-hostport.yaml.diff." 
+ +help: + $(QUIET)$(CUE) cmd help + +list: + $(QUIET)$(CUE) cmd ls + +$(SERVERS_OUT): $(SRC) + @echo > $(SERVERS_OUT) + @for name in $(SERVERS_NAME); do \ + $(CUE) cmd -t name=$$name dump >> $@; \ + done + +.PHONY: all clean deploy eval generate_all help inspect list diff --git a/examples/kubernetes/connectivity-check/README.md b/examples/kubernetes/connectivity-check/README.md index ab7a9bfa240f..4e9d207529a8 100644 --- a/examples/kubernetes/connectivity-check/README.md +++ b/examples/kubernetes/connectivity-check/README.md @@ -2,3 +2,35 @@ Set of deployments that will perform a series of connectivity checks via liveness and readiness checks. An unhealthy/unready pod indicates a problem. + +## Connectivity checks + +* [Standard connectivity checks](./connectivity-check.yaml) +* [Standard connectivity checks with hostport](./connectivity-check-hostport.yaml) + * Requires either eBPF hostport to be enabled or portmap CNI chaining. +* [Single-node connectivity checks](./connectivity-check-single-node.yaml) + * Standard connectivity checks minus the checks that require multiple nodes. +* [Proxy connectivity checks](./connectivity-check-proxy.yaml) + * Extra checks for various paths involving Layer 7 policy. + +## Developer documentation + +These checks are written in [CUE](https://cuelang.org/) to define various +checks in a concise manner. The definitions for the checks are split across +multiple files per the following logic: + +* `resources.cue`: The main definitions for templating all Kubernetes resources + including Deployment, Service, and CiliumNetworkPolicy. +* `echo-servers.cue`: Data definitions for all `echo-*` servers used for other + connectivity checks. +* `defaults.cue`: Default parameters used to define how specific checks connect + to particular echo servers, including selecting the probe destination, + selecting pod affinity, and default image for all checks. 
+* `network.cue`, `policy.cue`, `proxy.cue`, `services.cue`: Data definitions + for various connectivity checks at different layers and using different + features. L7 policy checks are defined in `proxy.cue` and not `policy.cue`. +* `*_tool.cue`: Various CLI tools for listing and generating the YAML + definitons used above. For more information, run `make help` in this + directory. + +For more information, see https://github.com/cilium/cilium/pull/12599 . diff --git a/examples/kubernetes/connectivity-check/connectivity-check-hostport.yaml b/examples/kubernetes/connectivity-check/connectivity-check-hostport.yaml index 0664d560d6d3..e48983051bea 100644 --- a/examples/kubernetes/connectivity-check/connectivity-check-hostport.yaml +++ b/examples/kubernetes/connectivity-check/connectivity-check-hostport.yaml @@ -1,96 +1,111 @@ -# Automatically generated by Makefile. DO NOT EDIT -apiVersion: v1 -kind: Service -metadata: - name: echo-a -spec: - type: ClusterIP - ports: - - port: 80 - selector: - name: echo-a --- -apiVersion: apps/v1 -kind: Deployment metadata: name: echo-a + labels: + name: echo-a + topology: any + component: network-check + quarantine: "false" spec: - selector: - matchLabels: - name: echo-a - replicas: 1 template: metadata: labels: name: echo-a spec: + hostNetwork: false containers: - - name: echo-container + - name: echo-a-container + ports: + - containerPort: 80 image: docker.io/cilium/json-mock:1.2 imagePullPolicy: IfNotPresent readinessProbe: exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost"] ---- -apiVersion: v1 -kind: Service -metadata: - name: echo-b -spec: - type: NodePort - ports: - - port: 80 - nodePort: 31313 - selector: - name: echo-b ---- -apiVersion: v1 -kind: Service -metadata: - name: echo-b-headless -spec: - type: ClusterIP - clusterIP: None - ports: - - port: 80 + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost + livenessProbe: + exec: + command: + - curl + - 
-sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost selector: - name: echo-b ---- + matchLabels: + name: echo-a + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: name: echo-b + labels: + name: echo-b + topology: any + component: services-check + quarantine: "false" spec: - selector: - matchLabels: - name: echo-b - replicas: 1 template: metadata: labels: name: echo-b spec: + hostNetwork: false containers: - - name: echo-container - image: docker.io/cilium/json-mock:1.2 - imagePullPolicy: IfNotPresent + - name: echo-b-container ports: - containerPort: 80 hostPort: 40000 + image: docker.io/cilium/json-mock:1.2 + imagePullPolicy: IfNotPresent readinessProbe: exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost"] ---- -# The echo-b-host pod runs in host networking on the same node as echo-b. + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost + selector: + matchLabels: + name: echo-b + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: name: echo-b-host + labels: + name: echo-b-host + topology: any + component: services-check + quarantine: "false" spec: - selector: - matchLabels: - name: echo-b-host - replicas: 1 template: metadata: labels: @@ -98,15 +113,35 @@ spec: spec: hostNetwork: true containers: - - name: echo-container - image: docker.io/cilium/json-mock:1.2 - imagePullPolicy: IfNotPresent + - name: echo-b-host-container env: - name: PORT value: "41000" + ports: [] + image: docker.io/cilium/json-mock:1.2 + imagePullPolicy: IfNotPresent readinessProbe: exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost:41000"] + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost:41000 + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - 
--connect-timeout + - "5" + - -o + - /dev/null + - localhost:41000 affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: @@ -116,326 +151,437 @@ spec: operator: In values: - echo-b - topologyKey: "kubernetes.io/hostname" ---- -# Connecting to "echo-b-host-headless" will provide service discovery to the -# node IP on which echo-b* is running -apiVersion: v1 -kind: Service -metadata: - name: echo-b-host-headless -spec: - type: ClusterIP - clusterIP: None + topologyKey: kubernetes.io/hostname selector: - name: echo-b-host ---- + matchLabels: + name: echo-b-host + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: host-to-b-multi-node-clusterip + name: echo-c + labels: + name: echo-c + topology: any + component: proxy-check + quarantine: "false" spec: - selector: - matchLabels: - name: host-to-b-multi-node-clusterip - replicas: 1 template: metadata: labels: - name: host-to-b-multi-node-clusterip + name: echo-c spec: - hostNetwork: true - dnsPolicy: ClusterFirstWithHostNet + hostNetwork: false containers: - - name: host-to-b-multi-node-container + - name: echo-c-container + ports: + - containerPort: 80 + hostPort: 40001 + image: docker.io/cilium/json-mock:1.2 imagePullPolicy: IfNotPresent - image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] + readinessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost livenessProbe: exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b"] - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: name - operator: In - values: - - echo-b - topologyKey: "kubernetes.io/hostname" ---- + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost + selector: + matchLabels: + name: echo-c + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: 
host-to-b-multi-node-headless + name: echo-c-host + labels: + name: echo-c-host + topology: any + component: proxy-check + quarantine: "false" spec: - selector: - matchLabels: - name: host-to-b-multi-node-headless - replicas: 1 template: metadata: labels: - name: host-to-b-multi-node-headless + name: echo-c-host spec: hostNetwork: true - dnsPolicy: ClusterFirstWithHostNet containers: - - name: host-to-b-multi-node-container + - name: echo-c-host-container + env: + - name: PORT + value: "41001" + ports: [] + image: docker.io/cilium/json-mock:1.2 imagePullPolicy: IfNotPresent - image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] + readinessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost:41001 livenessProbe: exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-headless"] + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost:41001 affinity: - podAntiAffinity: + podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: name operator: In values: - - echo-b - topologyKey: "kubernetes.io/hostname" ---- + - echo-c + topologyKey: kubernetes.io/hostname + selector: + matchLabels: + name: echo-c-host + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-a-allowed-cnp + name: pod-to-a + labels: + name: pod-to-a + topology: any + component: network-check + quarantine: "false" spec: - selector: - matchLabels: - name: pod-to-a-allowed-cnp - replicas: 1 template: metadata: labels: - name: pod-to-a-allowed-cnp + name: pod-to-a spec: + hostNetwork: false containers: - - name: pod-to-a-allowed-cnp-container + - name: pod-to-a-container + ports: [] image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: ["curl", "-sS", "--fail", "-o", 
"/dev/null", "echo-a"] + command: + - /bin/ash + - -c + - sleep 1000000000 readinessProbe: exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"] ---- -apiVersion: "cilium.io/v2" -kind: CiliumNetworkPolicy -metadata: - name: "pod-to-a-allowed-cnp" -spec: - endpointSelector: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-a/public + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-a/public + selector: matchLabels: - name: pod-to-a-allowed-cnp - egress: - - toEndpoints: - - matchLabels: - name: echo-a - toPorts: - - ports: - - port: "80" - protocol: TCP - - toEndpoints: - - matchLabels: - k8s:io.kubernetes.pod.namespace: kube-system - k8s:k8s-app: kube-dns - toPorts: - - ports: - - port: "53" - protocol: UDP - - toEndpoints: - - matchLabels: - k8s:io.kubernetes.pod.namespace: openshift-dns - k8s:dns.operator.openshift.io/daemonset-dns: default - toPorts: - - ports: - - port: "5353" - protocol: UDP ---- + name: pod-to-a + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-a-l3-denied-cnp + name: pod-to-external-1111 + labels: + name: pod-to-external-1111 + topology: any + component: network-check + quarantine: "false" spec: - selector: - matchLabels: - name: pod-to-a-l3-denied-cnp - replicas: 1 template: metadata: labels: - name: pod-to-a-l3-denied-cnp + name: pod-to-external-1111 spec: + hostNetwork: false containers: - - name: pod-to-a-l3-denied-cnp-container + - name: pod-to-external-1111-container + ports: [] image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] imagePullPolicy: IfNotPresent - livenessProbe: - timeoutSeconds: 7 - exec: - command: ["ash", "-c", "! curl -sS --fail --connect-timeout 5 -o /dev/null echo-a"] + command: + - /bin/ash + - -c + - sleep 1000000000 readinessProbe: - timeoutSeconds: 7 exec: - command: ["ash", "-c", "! 
curl -sS --fail --connect-timeout 5 -o /dev/null echo-a"] ---- -apiVersion: "cilium.io/v2" -kind: CiliumNetworkPolicy -metadata: - name: "pod-to-a-l3-denied-cnp" -spec: - endpointSelector: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - 1.1.1.1 + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - 1.1.1.1 + selector: matchLabels: - name: pod-to-a-l3-denied-cnp - egress: - - toEndpoints: - - matchLabels: - k8s:io.kubernetes.pod.namespace: kube-system - k8s:k8s-app: kube-dns - toPorts: - - ports: - - port: "53" - protocol: UDP - - toEndpoints: - - matchLabels: - k8s:io.kubernetes.pod.namespace: openshift-dns - k8s:dns.operator.openshift.io/daemonset-dns: default - toPorts: - - ports: - - port: "5353" - protocol: UDP ---- + name: pod-to-external-1111 + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-a + name: pod-to-a-denied-cnp + labels: + name: pod-to-a-denied-cnp + topology: any + component: policy-check + quarantine: "false" spec: - selector: - matchLabels: - name: pod-to-a - replicas: 1 template: metadata: labels: - name: pod-to-a + name: pod-to-a-denied-cnp spec: + hostNetwork: false containers: - - name: pod-to-a-container + - name: pod-to-a-denied-cnp-container + ports: [] image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] imagePullPolicy: IfNotPresent + command: + - /bin/ash + - -c + - sleep 1000000000 + readinessProbe: + timeoutSeconds: 7 + exec: + command: + - ash + - -c + - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-a/private' livenessProbe: + timeoutSeconds: 7 exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"] ---- + command: + - ash + - -c + - '! 
curl -s --fail --connect-timeout 5 -o /dev/null echo-a/private' + selector: + matchLabels: + name: pod-to-a-denied-cnp + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-b-intra-node-hostport + name: pod-to-a-allowed-cnp + labels: + name: pod-to-a-allowed-cnp + topology: any + component: policy-check + quarantine: "false" spec: - replicas: 1 - selector: - matchLabels: - name: pod-to-b-intra-node-hostport template: metadata: labels: - name: pod-to-b-intra-node-hostport + name: pod-to-a-allowed-cnp spec: - affinity: - podAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: name - operator: In - values: - - echo-b - topologyKey: kubernetes.io/hostname + hostNetwork: false containers: - - command: + - name: pod-to-a-allowed-cnp-container + ports: [] + image: docker.io/byrnedo/alpine-curl:0.1.8 + imagePullPolicy: IfNotPresent + command: - /bin/ash - -c - sleep 1000000000 - image: docker.io/byrnedo/alpine-curl:0.1.8 - imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:40000" ] readinessProbe: exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:40000" ] - name: pod-to-b-intra-node-hostport ---- + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-a/public + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-a/public + selector: + matchLabels: + name: pod-to-a-allowed-cnp + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-b-intra-node-nodeport + name: pod-to-external-fqdn-allow-google-cnp + labels: + name: pod-to-external-fqdn-allow-google-cnp + topology: any + component: policy-check + quarantine: "false" spec: - replicas: 1 - selector: - matchLabels: - name: pod-to-b-intra-node-nodeport template: metadata: labels: - name: 
pod-to-b-intra-node-nodeport + name: pod-to-external-fqdn-allow-google-cnp spec: - affinity: - podAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: name - operator: In - values: - - echo-b - topologyKey: kubernetes.io/hostname + hostNetwork: false containers: - - command: + - name: pod-to-external-fqdn-allow-google-cnp-container + ports: [] + image: docker.io/byrnedo/alpine-curl:0.1.8 + imagePullPolicy: IfNotPresent + command: - /bin/ash - -c - sleep 1000000000 - image: docker.io/byrnedo/alpine-curl:0.1.8 - imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ] readinessProbe: exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ] - name: pod-to-b-intra-node-hostport ---- + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - www.google.com + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - www.google.com + selector: + matchLabels: + name: pod-to-external-fqdn-allow-google-cnp + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-b-intra-node + name: pod-to-a-intra-node-proxy-egress-policy + labels: + name: pod-to-a-intra-node-proxy-egress-policy + topology: intra-node + component: proxy-check + quarantine: "false" spec: - selector: - matchLabels: - name: pod-to-b-intra-node - replicas: 1 template: metadata: labels: - name: pod-to-b-intra-node + name: pod-to-a-intra-node-proxy-egress-policy spec: + hostNetwork: false containers: - - name: pod-to-b-intra-node-container + - name: pod-to-a-intra-node-proxy-egress-policy-allow-container + ports: [] + image: docker.io/byrnedo/alpine-curl:0.1.8 + imagePullPolicy: IfNotPresent + command: + - /bin/ash + - -c + - sleep 1000000000 + readinessProbe: + exec: + command: + - curl + - -sS + - --fail + - 
--connect-timeout + - "5" + - -o + - /dev/null + - echo-a/public + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-a/public + - name: pod-to-a-intra-node-proxy-egress-policy-reject-container + ports: [] image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] imagePullPolicy: IfNotPresent + command: + - /bin/ash + - -c + - sleep 1000000000 livenessProbe: + timeoutSeconds: 7 exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b"] + command: + - ash + - -c + - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-a/private' affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: @@ -444,31 +590,75 @@ spec: - key: name operator: In values: - - echo-b - topologyKey: "kubernetes.io/hostname" ---- + - echo-a + topologyKey: kubernetes.io/hostname + selector: + matchLabels: + name: pod-to-a-intra-node-proxy-egress-policy + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-b-multi-node-clusterip + name: pod-to-a-multi-node-proxy-egress-policy + labels: + name: pod-to-a-multi-node-proxy-egress-policy + topology: multi-node + component: proxy-check + quarantine: "false" spec: - selector: - matchLabels: - name: pod-to-b-multi-node-clusterip - replicas: 1 template: metadata: labels: - name: pod-to-b-multi-node-clusterip + name: pod-to-a-multi-node-proxy-egress-policy spec: + hostNetwork: false containers: - - name: pod-to-b-multi-node-container + - name: pod-to-a-multi-node-proxy-egress-policy-allow-container + ports: [] + image: docker.io/byrnedo/alpine-curl:0.1.8 + imagePullPolicy: IfNotPresent + command: + - /bin/ash + - -c + - sleep 1000000000 + readinessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-a/public + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-a/public 
+ - name: pod-to-a-multi-node-proxy-egress-policy-reject-container + ports: [] image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] imagePullPolicy: IfNotPresent + command: + - /bin/ash + - -c + - sleep 1000000000 livenessProbe: + timeoutSeconds: 7 exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b"] + command: + - ash + - -c + - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-a/private' affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: @@ -477,95 +667,152 @@ spec: - key: name operator: In values: - - echo-b - topologyKey: "kubernetes.io/hostname" ---- + - echo-a + topologyKey: kubernetes.io/hostname + selector: + matchLabels: + name: pod-to-a-multi-node-proxy-egress-policy + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-b-multi-node-headless + name: pod-to-c-intra-node-proxy-ingress-policy + labels: + name: pod-to-c-intra-node-proxy-ingress-policy + topology: intra-node + component: proxy-check + quarantine: "false" spec: - selector: - matchLabels: - name: pod-to-b-multi-node-headless - replicas: 1 template: metadata: labels: - name: pod-to-b-multi-node-headless + name: pod-to-c-intra-node-proxy-ingress-policy spec: + hostNetwork: false containers: - - name: pod-to-b-multi-node-container + - name: pod-to-c-intra-node-proxy-ingress-policy-allow-container + ports: [] + image: docker.io/byrnedo/alpine-curl:0.1.8 + imagePullPolicy: IfNotPresent + command: + - /bin/ash + - -c + - sleep 1000000000 + readinessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-c/public + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-c/public + - name: pod-to-c-intra-node-proxy-ingress-policy-reject-container + ports: [] image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] imagePullPolicy: 
IfNotPresent + command: + - /bin/ash + - -c + - sleep 1000000000 livenessProbe: + timeoutSeconds: 7 exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-headless"] + command: + - ash + - -c + - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c/private' affinity: - podAntiAffinity: + podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: name operator: In values: - - echo-b - topologyKey: "kubernetes.io/hostname" ---- + - echo-c + topologyKey: kubernetes.io/hostname + selector: + matchLabels: + name: pod-to-c-intra-node-proxy-ingress-policy + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-b-multi-node-hostport + name: pod-to-c-multi-node-proxy-ingress-policy + labels: + name: pod-to-c-multi-node-proxy-ingress-policy + topology: multi-node + component: proxy-check + quarantine: "false" spec: - replicas: 1 - selector: - matchLabels: - name: pod-to-b-multi-node-hostport template: metadata: labels: - name: pod-to-b-multi-node-hostport + name: pod-to-c-multi-node-proxy-ingress-policy spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: name - operator: In - values: - - echo-b - topologyKey: kubernetes.io/hostname + hostNetwork: false containers: - - command: + - name: pod-to-c-multi-node-proxy-ingress-policy-allow-container + ports: [] + image: docker.io/byrnedo/alpine-curl:0.1.8 + imagePullPolicy: IfNotPresent + command: - /bin/ash - -c - sleep 1000000000 + readinessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-c/public + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-c/public + - name: pod-to-c-multi-node-proxy-ingress-policy-reject-container + ports: [] image: docker.io/byrnedo/alpine-curl:0.1.8 imagePullPolicy: IfNotPresent + command: + - 
/bin/ash + - -c + - sleep 1000000000 livenessProbe: + timeoutSeconds: 7 exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:40000" ] - readinessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:40000" ] - name: pod-to-b-multi-node-hostport ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pod-to-b-multi-node-nodeport -spec: - replicas: 1 - selector: - matchLabels: - name: pod-to-b-multi-node-nodeport - template: - metadata: - labels: - name: pod-to-b-multi-node-nodeport - spec: + command: + - ash + - -c + - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c/private' affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: @@ -574,106 +821,954 @@ spec: - key: name operator: In values: - - echo-b + - echo-c topologyKey: kubernetes.io/hostname + selector: + matchLabels: + name: pod-to-c-multi-node-proxy-ingress-policy + replicas: 1 +apiVersion: apps/v1 +kind: Deployment +--- +metadata: + name: pod-to-c-intra-node-proxy-to-proxy-policy + labels: + name: pod-to-c-intra-node-proxy-to-proxy-policy + topology: intra-node + component: proxy-check + quarantine: "false" +spec: + template: + metadata: + labels: + name: pod-to-c-intra-node-proxy-to-proxy-policy + spec: + hostNetwork: false containers: - - command: + - name: pod-to-c-intra-node-proxy-to-proxy-policy-allow-container + ports: [] + image: docker.io/byrnedo/alpine-curl:0.1.8 + imagePullPolicy: IfNotPresent + command: - /bin/ash - -c - sleep 1000000000 + readinessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-c/public + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-c/public + - name: pod-to-c-intra-node-proxy-to-proxy-policy-reject-container + ports: [] image: docker.io/byrnedo/alpine-curl:0.1.8 imagePullPolicy: IfNotPresent + command: + - /bin/ash + - -c + - 
sleep 1000000000 livenessProbe: + timeoutSeconds: 7 exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ] - readinessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ] - name: pod-to-b-multi-node-nodeport ---- + command: + - ash + - -c + - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c/private' + affinity: + podAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: name + operator: In + values: + - echo-c + topologyKey: kubernetes.io/hostname + selector: + matchLabels: + name: pod-to-c-intra-node-proxy-to-proxy-policy + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-a-external-1111 + name: pod-to-c-multi-node-proxy-to-proxy-policy + labels: + name: pod-to-c-multi-node-proxy-to-proxy-policy + topology: multi-node + component: proxy-check + quarantine: "false" spec: - selector: - matchLabels: - name: pod-to-a-external-1111 - replicas: 1 template: metadata: labels: - name: pod-to-a-external-1111 + name: pod-to-c-multi-node-proxy-to-proxy-policy spec: + hostNetwork: false containers: - - name: pod-to-a-external-1111-container + - name: pod-to-c-multi-node-proxy-to-proxy-policy-allow-container + ports: [] image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] imagePullPolicy: IfNotPresent + command: + - /bin/ash + - -c + - sleep 1000000000 + readinessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-c/public livenessProbe: exec: - command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "1.1.1.1"] - readinessProbe: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-c/public + - name: pod-to-c-multi-node-proxy-to-proxy-policy-reject-container + ports: [] + image: docker.io/byrnedo/alpine-curl:0.1.8 + imagePullPolicy: 
IfNotPresent + command: + - /bin/ash + - -c + - sleep 1000000000 + livenessProbe: + timeoutSeconds: 7 exec: - command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "1.1.1.1"] ---- + command: + - ash + - -c + - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c/private' + affinity: + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: name + operator: In + values: + - echo-c + topologyKey: kubernetes.io/hostname + selector: + matchLabels: + name: pod-to-c-multi-node-proxy-to-proxy-policy + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-external-fqdn-allow-google-cnp + name: pod-to-b-multi-node-clusterip + labels: + name: pod-to-b-multi-node-clusterip + topology: multi-node + component: services-check + quarantine: "false" spec: - selector: - matchLabels: - name: pod-to-external-fqdn-allow-google-cnp - replicas: 1 template: metadata: labels: - name: pod-to-external-fqdn-allow-google-cnp + name: pod-to-b-multi-node-clusterip spec: + hostNetwork: false containers: - - name: pod-to-external-fqdn-allow-google-cnp-container + - name: pod-to-b-multi-node-clusterip-container + ports: [] image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "www.google.com"] + command: + - /bin/ash + - -c + - sleep 1000000000 readinessProbe: exec: - command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "www.google.com"] ---- -apiVersion: "cilium.io/v2" -kind: CiliumNetworkPolicy -metadata: - name: "pod-to-external-fqdn-allow-google-cnp" -spec: - endpointSelector: - matchLabels: - name: pod-to-external-fqdn-allow-google-cnp + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-b/public + livenessProbe: + exec: + command: + - 
curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-b/public + affinity: + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: name + operator: In + values: + - echo-b + topologyKey: kubernetes.io/hostname + selector: + matchLabels: + name: pod-to-b-multi-node-clusterip + replicas: 1 +apiVersion: apps/v1 +kind: Deployment +--- +metadata: + name: pod-to-b-multi-node-headless + labels: + name: pod-to-b-multi-node-headless + topology: multi-node + component: services-check + quarantine: "false" +spec: + template: + metadata: + labels: + name: pod-to-b-multi-node-headless + spec: + hostNetwork: false + containers: + - name: pod-to-b-multi-node-headless-container + ports: [] + image: docker.io/byrnedo/alpine-curl:0.1.8 + imagePullPolicy: IfNotPresent + command: + - /bin/ash + - -c + - sleep 1000000000 + readinessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-b-headless/public + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-b-headless/public + affinity: + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: name + operator: In + values: + - echo-b + topologyKey: kubernetes.io/hostname + selector: + matchLabels: + name: pod-to-b-multi-node-headless + replicas: 1 +apiVersion: apps/v1 +kind: Deployment +--- +metadata: + name: host-to-b-multi-node-clusterip + labels: + name: host-to-b-multi-node-clusterip + topology: multi-node + component: services-check + quarantine: "false" +spec: + template: + metadata: + labels: + name: host-to-b-multi-node-clusterip + spec: + hostNetwork: true + containers: + - name: host-to-b-multi-node-clusterip-container + ports: [] + image: docker.io/byrnedo/alpine-curl:0.1.8 + imagePullPolicy: IfNotPresent + command: + - /bin/ash + - -c + - sleep 
1000000000 + readinessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-b/public + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-b/public + affinity: + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: name + operator: In + values: + - echo-b + topologyKey: kubernetes.io/hostname + dnsPolicy: ClusterFirstWithHostNet + selector: + matchLabels: + name: host-to-b-multi-node-clusterip + replicas: 1 +apiVersion: apps/v1 +kind: Deployment +--- +metadata: + name: host-to-b-multi-node-headless + labels: + name: host-to-b-multi-node-headless + topology: multi-node + component: services-check + quarantine: "false" +spec: + template: + metadata: + labels: + name: host-to-b-multi-node-headless + spec: + hostNetwork: true + containers: + - name: host-to-b-multi-node-headless-container + ports: [] + image: docker.io/byrnedo/alpine-curl:0.1.8 + imagePullPolicy: IfNotPresent + command: + - /bin/ash + - -c + - sleep 1000000000 + readinessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-b-headless/public + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - echo-b-headless/public + affinity: + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: name + operator: In + values: + - echo-b + topologyKey: kubernetes.io/hostname + dnsPolicy: ClusterFirstWithHostNet + selector: + matchLabels: + name: host-to-b-multi-node-headless + replicas: 1 +apiVersion: apps/v1 +kind: Deployment +--- +metadata: + name: pod-to-b-multi-node-hostport + labels: + name: pod-to-b-multi-node-hostport + topology: multi-node + component: hostport-check + quarantine: "false" +spec: + template: + metadata: + labels: + name: 
pod-to-b-multi-node-hostport
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-b-multi-node-hostport-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b-host-headless:40000/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b-host-headless:40000/public
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-b
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-b-multi-node-hostport
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-b-intra-node-hostport
+  labels:
+    name: pod-to-b-intra-node-hostport
+    topology: intra-node
+    component: hostport-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-b-intra-node-hostport
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-b-intra-node-hostport-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b-host-headless:40000/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b-host-headless:40000/public
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-b
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-b-intra-node-hostport
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: echo-a
+  labels:
+    name: echo-a
+    topology: any
+    component: network-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+  selector:
+    name: echo-a
+  type: ClusterIP
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: echo-b
+  labels:
+    name: echo-b
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+    nodePort: 31313
+  selector:
+    name: echo-b
+  type: NodePort
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: echo-c
+  labels:
+    name: echo-c
+    topology: any
+    component: proxy-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+  selector:
+    name: echo-c
+  type: ClusterIP
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: echo-b-headless
+  labels:
+    name: echo-b-headless
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+  selector:
+    name: echo-b
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: echo-b-host-headless
+  labels:
+    name: echo-b-host-headless
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  ports: []
+  selector:
+    name: echo-b-host
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: echo-c-headless
+  labels:
+    name: echo-c-headless
+    topology: any
+    component: proxy-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+  selector:
+    name: echo-c
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: echo-c-host-headless
+  labels:
+    name: echo-c-host-headless
+    topology: any
+    component: proxy-check
+    quarantine: "false"
+spec:
+  ports: []
+  selector:
+    name: echo-c-host
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: pod-to-a-denied-cnp
+  labels:
+    name: pod-to-a-denied-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-a-denied-cnp
  egress:
-  - toEndpoints:
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: pod-to-a-allowed-cnp
+  labels:
+    name: pod-to-a-allowed-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-a-allowed-cnp
+  egress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+    toEndpoints:
+    - matchLabels:
+        name: echo-a
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
    - matchLabels:
-        "k8s:io.kubernetes.pod.namespace": kube-system
-        "k8s:k8s-app": kube-dns
-    toPorts:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: pod-to-external-fqdn-allow-google-cnp
+  labels:
+    name: pod-to-external-fqdn-allow-google-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-external-fqdn-allow-google-cnp
+  egress:
+  - toFQDNs:
+    - matchPattern: '*.google.com'
+  - toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
-        - matchPattern: "*"
-  - toEndpoints:
+        - matchPattern: '*'
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+      rules:
+        dns:
+        - matchPattern: '*'
+    toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: openshift-dns
        k8s:dns.operator.openshift.io/daemonset-dns: default
-    toPorts:
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: pod-to-a-intra-node-proxy-egress-policy
+  labels:
+    name: pod-to-a-intra-node-proxy-egress-policy
+    topology: intra-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-a-intra-node-proxy-egress-policy
+  egress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+    toEndpoints:
+    - matchLabels:
+        name: echo-a
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
    - ports:
      - port: "5353"
        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: pod-to-a-multi-node-proxy-egress-policy
+  labels:
+    name: pod-to-a-multi-node-proxy-egress-policy
+    topology: multi-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-a-multi-node-proxy-egress-policy
+  egress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
      rules:
-        dns:
-        - matchPattern: "*"
-  - toFQDNs:
-    - matchPattern: "*.google.com"
+        http:
+        - path: /public$
+          method: GET
+    toEndpoints:
+    - matchLabels:
+        name: echo-a
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
---
+metadata:
+  name: pod-to-c-intra-node-proxy-to-proxy-policy
+  labels:
+    name: pod-to-c-intra-node-proxy-to-proxy-policy
+    topology: intra-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-c-intra-node-proxy-to-proxy-policy
+  egress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+    toEndpoints:
+    - matchLabels:
+        name: echo-c
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: pod-to-c-multi-node-proxy-to-proxy-policy
+  labels:
+    name: pod-to-c-multi-node-proxy-to-proxy-policy
+    topology: multi-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-c-multi-node-proxy-to-proxy-policy
+  egress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+    toEndpoints:
+    - matchLabels:
+        name: echo-c
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: echo-c
+  labels:
+    name: echo-c
+    topology: any
+    component: proxy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: echo-c
+  ingress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+
diff --git a/examples/kubernetes/connectivity-check/connectivity-check-proxy.yaml b/examples/kubernetes/connectivity-check/connectivity-check-proxy.yaml
new file mode 100644
index 000000000000..50318b27400c
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/connectivity-check-proxy.yaml
@@ -0,0 +1,1034 @@
+---
+metadata:
+  name: echo-c
+  labels:
+    name: echo-c
+    topology: any
+    component: proxy-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: echo-c
+    spec:
+      hostNetwork: false
+      containers:
+      - name: echo-c-container
+        ports:
+        - containerPort: 80
+          hostPort: 40001
+        image: docker.io/cilium/json-mock:1.2
+        imagePullPolicy: IfNotPresent
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+  selector:
+    matchLabels:
+      name: echo-c
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: echo-c-host
+  labels:
+    name: echo-c-host
+    topology: any
+    component: proxy-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: echo-c-host
+    spec:
+      hostNetwork: true
+      containers:
+      - name: echo-c-host-container
+        env:
+        - name: PORT
+          value: "41001"
+        ports: []
+        image: docker.io/cilium/json-mock:1.2
+        imagePullPolicy: IfNotPresent
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost:41001
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost:41001
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-c
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: echo-c-host
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-a-intra-node-proxy-egress-policy
+  labels:
+    name: pod-to-a-intra-node-proxy-egress-policy
+    topology: intra-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-a-intra-node-proxy-egress-policy
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-a-intra-node-proxy-egress-policy-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
+      - name: pod-to-a-intra-node-proxy-egress-policy-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-a/private'
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-a
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-a-intra-node-proxy-egress-policy
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-a-multi-node-proxy-egress-policy
+  labels:
+    name: pod-to-a-multi-node-proxy-egress-policy
+    topology: multi-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-a-multi-node-proxy-egress-policy
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-a-multi-node-proxy-egress-policy-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
+      - name: pod-to-a-multi-node-proxy-egress-policy-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-a/private'
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-a
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-a-multi-node-proxy-egress-policy
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-c-intra-node-proxy-ingress-policy
+  labels:
+    name: pod-to-c-intra-node-proxy-ingress-policy
+    topology: intra-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-c-intra-node-proxy-ingress-policy
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-c-intra-node-proxy-ingress-policy-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c/public
+      - name: pod-to-c-intra-node-proxy-ingress-policy-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c/private'
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-c
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-c-intra-node-proxy-ingress-policy
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-c-multi-node-proxy-ingress-policy
+  labels:
+    name: pod-to-c-multi-node-proxy-ingress-policy
+    topology: multi-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-c-multi-node-proxy-ingress-policy
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-c-multi-node-proxy-ingress-policy-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c/public
+      - name: pod-to-c-multi-node-proxy-ingress-policy-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c/private'
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-c
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-c-multi-node-proxy-ingress-policy
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-c-intra-node-proxy-to-proxy-policy
+  labels:
+    name: pod-to-c-intra-node-proxy-to-proxy-policy
+    topology: intra-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-c-intra-node-proxy-to-proxy-policy
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-c-intra-node-proxy-to-proxy-policy-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c/public
+      - name: pod-to-c-intra-node-proxy-to-proxy-policy-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c/private'
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-c
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-c-intra-node-proxy-to-proxy-policy
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-c-multi-node-proxy-to-proxy-policy
+  labels:
+    name: pod-to-c-multi-node-proxy-to-proxy-policy
+    topology: multi-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-c-multi-node-proxy-to-proxy-policy
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-c-multi-node-proxy-to-proxy-policy-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c/public
+      - name: pod-to-c-multi-node-proxy-to-proxy-policy-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c/private'
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-c
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-c-multi-node-proxy-to-proxy-policy
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: echo-c
+  labels:
+    name: echo-c
+    topology: any
+    component: proxy-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+  selector:
+    name: echo-c
+  type: ClusterIP
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: echo-c-headless
+  labels:
+    name: echo-c-headless
+    topology: any
+    component: proxy-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+  selector:
+    name: echo-c
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: echo-c-host-headless
+  labels:
+    name: echo-c-host-headless
+    topology: any
+    component: proxy-check
+    quarantine: "false"
+spec:
+  ports: []
+  selector:
+    name: echo-c-host
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: pod-to-a-intra-node-proxy-egress-policy
+  labels:
+    name: pod-to-a-intra-node-proxy-egress-policy
+    topology: intra-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-a-intra-node-proxy-egress-policy
+  egress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+    toEndpoints:
+    - matchLabels:
+        name: echo-a
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: pod-to-a-multi-node-proxy-egress-policy
+  labels:
+    name: pod-to-a-multi-node-proxy-egress-policy
+    topology: multi-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-a-multi-node-proxy-egress-policy
+  egress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+    toEndpoints:
+    - matchLabels:
+        name: echo-a
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: pod-to-c-intra-node-proxy-to-proxy-policy
+  labels:
+    name: pod-to-c-intra-node-proxy-to-proxy-policy
+    topology: intra-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-c-intra-node-proxy-to-proxy-policy
+  egress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+    toEndpoints:
+    - matchLabels:
+        name: echo-c
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: pod-to-c-multi-node-proxy-to-proxy-policy
+  labels:
+    name: pod-to-c-multi-node-proxy-to-proxy-policy
+    topology: multi-node
+    component: proxy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-c-multi-node-proxy-to-proxy-policy
+  egress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+    toEndpoints:
+    - matchLabels:
+        name: echo-c
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: echo-c
+  labels:
+    name: echo-c
+    topology: any
+    component: proxy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: echo-c
+  ingress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+
+
+---
+metadata:
+  name: echo-a
+  labels:
+    name: echo-a
+    topology: any
+    component: network-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: echo-a
+    spec:
+      hostNetwork: false
+      containers:
+      - name: echo-a-container
+        ports:
+        - containerPort: 80
+        image: docker.io/cilium/json-mock:1.2
+        imagePullPolicy: IfNotPresent
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+  selector:
+    matchLabels:
+      name: echo-a
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: echo-a
+  labels:
+    name: echo-a
+    topology: any
+    component: network-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+  selector:
+    name: echo-a
+  type: ClusterIP
+apiVersion: v1
+kind: Service
+
+---
+metadata:
+  name: echo-b
+  labels:
+    name: echo-b
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: echo-b
+    spec:
+      hostNetwork: false
+      containers:
+      - name: echo-b-container
+        ports:
+        - containerPort: 80
+          hostPort: 40000
+        image: docker.io/cilium/json-mock:1.2
+        imagePullPolicy: IfNotPresent
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+  selector:
+    matchLabels:
+      name: echo-b
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: echo-b
+  labels:
+    name: echo-b
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+    nodePort: 31313
+  selector:
+    name: echo-b
+  type: NodePort
+apiVersion: v1
+kind: Service
+
+---
+metadata:
+  name: echo-b-host
+  labels:
+    name: echo-b-host
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: echo-b-host
+    spec:
+      hostNetwork: true
+      containers:
+      - name: echo-b-host-container
+        env:
+        - name: PORT
+          value: "41000"
+        ports: []
+        image: docker.io/cilium/json-mock:1.2
+        imagePullPolicy: IfNotPresent
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost:41000
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost:41000
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-b
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: echo-b-host
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+
+---
+metadata:
+  name: echo-b-host-headless
+  labels:
+    name: echo-b-host-headless
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  ports: []
+  selector:
+    name: echo-b-host
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+
+---
+
+---
+
+---
+
diff --git a/examples/kubernetes/connectivity-check/connectivity-check-quarantine.yaml b/examples/kubernetes/connectivity-check/connectivity-check-quarantine.yaml
new file mode 100644
index 000000000000..53524370dbb2
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/connectivity-check-quarantine.yaml
@@ -0,0 +1,839 @@
+---
+metadata:
+  name: pod-to-a-multi-node-hostport-proxy-egress
+  labels:
+    name: pod-to-a-multi-node-hostport-proxy-egress
+    topology: multi-node
+    component: hostport-check
+    quarantine: "true"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-a-multi-node-hostport-proxy-egress
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-a-multi-node-hostport-proxy-egress-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+      - name: pod-to-a-multi-node-hostport-proxy-egress-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c-host-headless:40001/private'
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-a
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-a-multi-node-hostport-proxy-egress
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-a-intra-node-hostport-proxy-egress
+  labels:
+    name: pod-to-a-intra-node-hostport-proxy-egress
+    topology: intra-node
+    component: hostport-check
+    quarantine: "true"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-a-intra-node-hostport-proxy-egress
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-a-intra-node-hostport-proxy-egress-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+      - name: pod-to-a-intra-node-hostport-proxy-egress-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c-host-headless:40001/private'
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-a
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-a-intra-node-hostport-proxy-egress
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-c-multi-node-hostport-proxy-ingress
+  labels:
+    name: pod-to-c-multi-node-hostport-proxy-ingress
+    topology: multi-node
+    component: hostport-check
+    quarantine: "true"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-c-multi-node-hostport-proxy-ingress
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-c-multi-node-hostport-proxy-ingress-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+      - name: pod-to-c-multi-node-hostport-proxy-ingress-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c-host-headless:40001/private'
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-c
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-c-multi-node-hostport-proxy-ingress
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-c-intra-node-hostport-proxy-ingress
+  labels:
+    name: pod-to-c-intra-node-hostport-proxy-ingress
+    topology: intra-node
+    component: hostport-check
+    quarantine: "true"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-c-intra-node-hostport-proxy-ingress
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-c-intra-node-hostport-proxy-ingress-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+      - name: pod-to-c-intra-node-hostport-proxy-ingress-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c-host-headless:40001/private'
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-c
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-c-intra-node-hostport-proxy-ingress
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-c-multi-node-hostport-proxy-to-proxy
+  labels:
+    name: pod-to-c-multi-node-hostport-proxy-to-proxy
+    topology: multi-node
+    component: hostport-check
+    quarantine: "true"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-c-multi-node-hostport-proxy-to-proxy
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-c-multi-node-hostport-proxy-to-proxy-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+      - name: pod-to-c-multi-node-hostport-proxy-to-proxy-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c-host-headless:40001/private'
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-c
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-c-multi-node-hostport-proxy-to-proxy
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-c-intra-node-hostport-proxy-to-proxy
+  labels:
+    name: pod-to-c-intra-node-hostport-proxy-to-proxy
+    topology: intra-node
+    component: hostport-check
+    quarantine: "true"
+spec:
+  template:
+    metadata:
+      labels:
+        name: pod-to-c-intra-node-hostport-proxy-to-proxy
+    spec:
+      hostNetwork: false
+      containers:
+      - name: pod-to-c-intra-node-hostport-proxy-to-proxy-allow-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-c-host-headless:40001/public
+      - name: pod-to-c-intra-node-hostport-proxy-to-proxy-reject-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        livenessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-c-host-headless:40001/private'
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-c
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: pod-to-c-intra-node-hostport-proxy-to-proxy
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-a-multi-node-hostport-proxy-egress
+  labels:
+    name: pod-to-a-multi-node-hostport-proxy-egress
+    topology: multi-node
+    component: hostport-check
+    quarantine: "true"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-a-multi-node-hostport-proxy-egress
+  egress:
+  - toPorts:
+    - ports:
+      - port: "40001"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: pod-to-a-intra-node-hostport-proxy-egress
+  labels:
+    name: pod-to-a-intra-node-hostport-proxy-egress
+    topology: intra-node
+    component: hostport-check
+    quarantine: "true"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-a-intra-node-hostport-proxy-egress
+  egress:
+  - toPorts:
+    - ports:
+      - port: "40001"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: pod-to-c-multi-node-hostport-proxy-to-proxy
+  labels:
+    name: pod-to-c-multi-node-hostport-proxy-to-proxy
+    topology: multi-node
+    component: hostport-check
+    quarantine: "true"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-c-multi-node-hostport-proxy-to-proxy
+  egress:
+  - toPorts:
+    - ports:
+      - port: "40001"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+---
+metadata:
+  name: pod-to-c-intra-node-hostport-proxy-to-proxy
+  labels:
+    name: pod-to-c-intra-node-hostport-proxy-to-proxy
+    topology: intra-node
+    component: hostport-check
+    quarantine: "true"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-c-intra-node-hostport-proxy-to-proxy
+  egress:
+  - toPorts:
+    - ports:
+      - port: "40001"
+        protocol: TCP
+      rules:
+        http:
+        - path: /public$
+          method: GET
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+
+
+---
+metadata:
+  name: echo-a
+  labels:
+    name: echo-a
+    topology: any
+    component: network-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: echo-a
+    spec:
+      hostNetwork: false
+      containers:
+      - name: echo-a-container
+        ports:
+        - containerPort: 80
+        image: docker.io/cilium/json-mock:1.2
+        imagePullPolicy: IfNotPresent
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+  selector:
+    matchLabels:
+      name: echo-a
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: echo-a
+  labels:
+    name: echo-a
+    topology: any
+    component: network-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+  selector:
+    name: echo-a
+  type: ClusterIP
+apiVersion: v1
+kind: Service
+
+---
+metadata:
+  name: echo-b
+  labels:
+    name: echo-b
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: echo-b
+    spec:
+      hostNetwork: false
+      containers:
+      - name: echo-b-container
+        ports:
+        - containerPort: 80
+          hostPort: 40000
+        image: docker.io/cilium/json-mock:1.2
+        imagePullPolicy: IfNotPresent
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+  selector:
+    matchLabels:
+      name: echo-b
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: echo-b
+  labels:
+    name: echo-b
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+    nodePort: 31313
+  selector:
+    name: echo-b
+  type: NodePort
+apiVersion: v1
+kind: Service
+
+---
+metadata:
+  name: echo-b-host
+  labels:
+    name: echo-b-host
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  template:
+    metadata:
+      labels:
+        name: echo-b-host
+    spec:
+      hostNetwork: true
+      containers:
+      - name: echo-b-host-container
+        env:
+        - name: PORT
+          value: "41000"
+        ports: []
+        image: docker.io/cilium/json-mock:1.2
+        imagePullPolicy: IfNotPresent
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost:41000
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost:41000
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: name
+                operator: In
+                values:
+                - echo-b
+            topologyKey: kubernetes.io/hostname
+  selector:
+    matchLabels:
+      name: echo-b-host
+  replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+
+---
+metadata:
+  name: echo-b-host-headless
+  labels:
+    name: echo-b-host-headless
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  ports: []
+  selector:
+    name: echo-b-host
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+
+---
+
+---
+
+---
+
diff --git a/examples/kubernetes/connectivity-check/connectivity-check-single-node.yaml b/examples/kubernetes/connectivity-check/connectivity-check-single-node.yaml
index d8939278a9fa..8610b2fff5f2 100644
--- a/examples/kubernetes/connectivity-check/connectivity-check-single-node.yaml
+++ b/examples/kubernetes/connectivity-check/connectivity-check-single-node.yaml
@@ -1,96 +1,111 @@
-# Automatically generated by Makefile. DO NOT EDIT
-apiVersion: v1
-kind: Service
-metadata:
-  name: echo-a
-spec:
-  type: ClusterIP
-  ports:
-  - port: 80
-  selector:
-    name: echo-a
 ---
-apiVersion: apps/v1
-kind: Deployment
 metadata:
   name: echo-a
+  labels:
+    name: echo-a
+    topology: any
+    component: network-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: echo-a
-  replicas: 1
   template:
     metadata:
       labels:
         name: echo-a
     spec:
+      hostNetwork: false
       containers:
-      - name: echo-container
+      - name: echo-a-container
+        ports:
+        - containerPort: 80
         image: docker.io/cilium/json-mock:1.2
         imagePullPolicy: IfNotPresent
         readinessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost"]
 ---
-apiVersion: v1
-kind: Service
-metadata:
-  name: echo-b
-spec:
-  type: NodePort
-  ports:
-  - port: 80
-    nodePort: 31313
-  selector:
-    name: echo-b
 ---
-apiVersion: v1
-kind: Service
-metadata:
-  name: echo-b-headless
-spec:
-  type: ClusterIP
-  clusterIP: None
-  ports:
-  - port: 80
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
   selector:
-    name: echo-b
 ---
+    matchLabels:
+      name: echo-a
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
   name: echo-b
+  labels:
+    name: echo-b
+    topology: any
+    component: services-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: echo-b
-  replicas: 1
   template:
     metadata:
       labels:
         name: echo-b
     spec:
+      hostNetwork: false
       containers:
-      - name: echo-container
-        image: docker.io/cilium/json-mock:1.2
-        imagePullPolicy: IfNotPresent
+      - name: echo-b-container
         ports:
         - containerPort: 80
           hostPort: 40000
+        image: docker.io/cilium/json-mock:1.2
+        imagePullPolicy: IfNotPresent
         readinessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost"]
----
-# The echo-b-host pod runs in host networking on the same node as echo-b.
+ command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost + selector: + matchLabels: + name: echo-b + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: name: echo-b-host + labels: + name: echo-b-host + topology: any + component: services-check + quarantine: "false" spec: - selector: - matchLabels: - name: echo-b-host - replicas: 1 template: metadata: labels: @@ -98,15 +113,35 @@ spec: spec: hostNetwork: true containers: - - name: echo-container - image: docker.io/cilium/json-mock:1.2 - imagePullPolicy: IfNotPresent + - name: echo-b-host-container env: - name: PORT value: "41000" + ports: [] + image: docker.io/cilium/json-mock:1.2 + imagePullPolicy: IfNotPresent readinessProbe: exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost:41000"] + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost:41000 + livenessProbe: + exec: + command: + - curl + - -sS + - --fail + - --connect-timeout + - "5" + - -o + - /dev/null + - localhost:41000 affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: @@ -116,350 +151,437 @@ spec: operator: In values: - echo-b - topologyKey: "kubernetes.io/hostname" ---- -# Connecting to "echo-b-host-headless" will provide service discovery to the -# node IP on which echo-b* is running -apiVersion: v1 -kind: Service -metadata: - name: echo-b-host-headless -spec: - type: ClusterIP - clusterIP: None + topologyKey: kubernetes.io/hostname selector: - name: echo-b-host ---- + matchLabels: + name: echo-b-host + replicas: 1 apiVersion: apps/v1 kind: Deployment +--- metadata: - name: pod-to-a-allowed-cnp + name: pod-to-a + labels: + name: pod-to-a + topology: any + component: network-check + quarantine: "false" spec: - selector: - matchLabels: - name: pod-to-a-allowed-cnp - replicas: 1 
   template:
     metadata:
       labels:
-        name: pod-to-a-allowed-cnp
+        name: pod-to-a
     spec:
+      hostNetwork: false
       containers:
-      - name: pod-to-a-allowed-cnp-container
+      - name: pod-to-a-container
+        ports: []
         image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
         imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
         readinessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
----
-apiVersion: "cilium.io/v2"
-kind: CiliumNetworkPolicy
-metadata:
-  name: "pod-to-a-allowed-cnp"
-spec:
-  endpointSelector:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
+  selector:
     matchLabels:
-      name: pod-to-a-allowed-cnp
-  egress:
-  - toEndpoints:
-    - matchLabels:
-        name: echo-a
-    toPorts:
-    - ports:
-      - port: "80"
-        protocol: TCP
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: kube-system
-        k8s:k8s-app: kube-dns
-    toPorts:
-    - ports:
-      - port: "53"
-        protocol: UDP
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: openshift-dns
-        k8s:dns.operator.openshift.io/daemonset-dns: default
-    toPorts:
-    - ports:
-      - port: "5353"
-        protocol: UDP
----
+      name: pod-to-a
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-a-l3-denied-cnp
+  name: pod-to-external-1111
+  labels:
+    name: pod-to-external-1111
+    topology: any
+    component: network-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: pod-to-a-l3-denied-cnp
-  replicas: 1
   template:
     metadata:
       labels:
-        name: pod-to-a-l3-denied-cnp
+        name: pod-to-external-1111
    spec:
+      hostNetwork: false
       containers:
-      - name: pod-to-a-l3-denied-cnp-container
+      - name: pod-to-external-1111-container
+        ports: []
         image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
         imagePullPolicy: IfNotPresent
-        livenessProbe:
-          timeoutSeconds: 7
-          exec:
-            command: ["ash", "-c", "! curl -sS --fail --connect-timeout 5 -o /dev/null echo-a"]
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
         readinessProbe:
-          timeoutSeconds: 7
           exec:
-            command: ["ash", "-c", "! curl -sS --fail --connect-timeout 5 -o /dev/null echo-a"]
----
-apiVersion: "cilium.io/v2"
-kind: CiliumNetworkPolicy
-metadata:
-  name: "pod-to-a-l3-denied-cnp"
-spec:
-  endpointSelector:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - 1.1.1.1
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - 1.1.1.1
+  selector:
     matchLabels:
-      name: pod-to-a-l3-denied-cnp
-  egress:
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: kube-system
-        k8s:k8s-app: kube-dns
-    toPorts:
-    - ports:
-      - port: "53"
-        protocol: UDP
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: openshift-dns
-        k8s:dns.operator.openshift.io/daemonset-dns: default
-    toPorts:
-    - ports:
-      - port: "5353"
-        protocol: UDP
----
+      name: pod-to-external-1111
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-a
+  name: pod-to-a-denied-cnp
+  labels:
+    name: pod-to-a-denied-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: pod-to-a
-  replicas: 1
   template:
     metadata:
       labels:
-        name: pod-to-a
+        name: pod-to-a-denied-cnp
     spec:
+      hostNetwork: false
       containers:
-      - name: pod-to-a-container
+      - name: pod-to-a-denied-cnp-container
+        ports: []
         image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
         imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          timeoutSeconds: 7
+          exec:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-a/private'
         livenessProbe:
+          timeoutSeconds: 7
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
----
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-a/private'
+  selector:
+    matchLabels:
+      name: pod-to-a-denied-cnp
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-b-intra-node-hostport
+  name: pod-to-a-allowed-cnp
+  labels:
+    name: pod-to-a-allowed-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
 spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      name: pod-to-b-intra-node-hostport
   template:
     metadata:
       labels:
-        name: pod-to-b-intra-node-hostport
+        name: pod-to-a-allowed-cnp
     spec:
-      affinity:
-        podAffinity:
-          requiredDuringSchedulingIgnoredDuringExecution:
-          - labelSelector:
-              matchExpressions:
-              - key: name
-                operator: In
-                values:
-                - echo-b
-            topologyKey: kubernetes.io/hostname
+      hostNetwork: false
       containers:
-      - command:
+      - name: pod-to-a-allowed-cnp-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
         - /bin/ash
        - -c
        - sleep 1000000000
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:40000" ]
         readinessProbe:
           exec:
-            command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:40000" ]
-        name: pod-to-b-intra-node-hostport
 ---
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
+  selector:
+    matchLabels:
+      name: pod-to-a-allowed-cnp
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-b-intra-node-nodeport
+  name: pod-to-external-fqdn-allow-google-cnp
+  labels:
+    name: pod-to-external-fqdn-allow-google-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
 spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      name: pod-to-b-intra-node-nodeport
   template:
     metadata:
       labels:
-        name: pod-to-b-intra-node-nodeport
+        name: pod-to-external-fqdn-allow-google-cnp
     spec:
-      affinity:
-        podAffinity:
-          requiredDuringSchedulingIgnoredDuringExecution:
-          - labelSelector:
-              matchExpressions:
-              - key: name
-                operator: In
-                values:
-                - echo-b
-            topologyKey: kubernetes.io/hostname
+      hostNetwork: false
       containers:
-      - command:
+      - name: pod-to-external-fqdn-allow-google-cnp-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
         - /bin/ash
        - -c
        - sleep 1000000000
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ]
         readinessProbe:
           exec:
-            command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ]
-        name: pod-to-b-intra-node-hostport
 ---
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - www.google.com
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - www.google.com
+  selector:
+    matchLabels:
+      name: pod-to-external-fqdn-allow-google-cnp
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-b-intra-node
+  name: echo-a
+  labels:
+    name: echo-a
+    topology: any
+    component: network-check
+    quarantine: "false"
 spec:
+  ports:
+  - port: 80
   selector:
-    matchLabels:
-      name: pod-to-b-intra-node
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: pod-to-b-intra-node
-    spec:
-      containers:
-      - name: pod-to-b-intra-node-container
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b"]
-        affinity:
-          podAffinity:
-            requiredDuringSchedulingIgnoredDuringExecution:
-            - labelSelector:
-                matchExpressions:
-                - key: name
-                  operator: In
-                  values:
-                  - echo-b
-              topologyKey: "kubernetes.io/hostname"
+  name: echo-a
+  type: ClusterIP
+apiVersion: v1
+kind: Service
 ---
-apiVersion: apps/v1
-kind: Deployment
 metadata:
-  name: pod-to-a-external-1111
+  name: echo-b
+  labels:
+    name: echo-b
+    topology: any
+    component: services-check
+    quarantine: "false"
 spec:
+  ports:
+  - port: 80
+    nodePort: 31313
   selector:
-    matchLabels:
-      name: pod-to-a-external-1111
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: pod-to-a-external-1111
-    spec:
-      containers:
-      - name: pod-to-a-external-1111-container
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "1.1.1.1"]
-        readinessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "1.1.1.1"]
+  name: echo-b
+  type: NodePort
+apiVersion: v1
+kind: Service
 ---
-apiVersion: apps/v1
-kind: Deployment
 metadata:
-  name: pod-to-external-fqdn-allow-google-cnp
+  name: echo-b-headless
+  labels:
+    name: echo-b-headless
+    topology: any
+    component: services-check
+    quarantine: "false"
 spec:
+  ports:
+  - port: 80
   selector:
+    name: echo-b
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: echo-b-host-headless
+  labels:
+    name: echo-b-host-headless
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  ports: []
+  selector:
+    name: echo-b-host
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: pod-to-a-denied-cnp
+  labels:
+    name: pod-to-a-denied-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
     matchLabels:
-      name: pod-to-external-fqdn-allow-google-cnp
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: pod-to-external-fqdn-allow-google-cnp
-    spec:
-      containers:
-      - name: pod-to-external-fqdn-allow-google-cnp-container
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "www.google.com"]
-        readinessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "www.google.com"]
+      name: pod-to-a-denied-cnp
+  egress:
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
 ---
-apiVersion: "cilium.io/v2"
+metadata:
+  name: pod-to-a-allowed-cnp
+  labels:
+    name: pod-to-a-allowed-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-a-allowed-cnp
+  egress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+    toEndpoints:
+    - matchLabels:
+        name: echo-a
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
 kind: CiliumNetworkPolicy
+---
 metadata:
-  name: "pod-to-external-fqdn-allow-google-cnp"
+  name: pod-to-external-fqdn-allow-google-cnp
+  labels:
+    name: pod-to-external-fqdn-allow-google-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
 spec:
   endpointSelector:
     matchLabels:
       name: pod-to-external-fqdn-allow-google-cnp
   egress:
-  - toEndpoints:
-    - matchLabels:
-        "k8s:io.kubernetes.pod.namespace": kube-system
-        "k8s:k8s-app": kube-dns
-    toPorts:
+  - toFQDNs:
+    - matchPattern: '*.google.com'
+  - toPorts:
     - ports:
       - port: "53"
         protocol: ANY
       rules:
         dns:
-        - matchPattern: "*"
-  - toEndpoints:
+        - matchPattern: '*'
+    toEndpoints:
     - matchLabels:
-        k8s:io.kubernetes.pod.namespace: openshift-dns
-        k8s:dns.operator.openshift.io/daemonset-dns: default
-    toPorts:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
     - ports:
       - port: "5353"
         protocol: UDP
       rules:
         dns:
-        - matchPattern: "*"
-  - toFQDNs:
-    - matchPattern: "*.google.com"
----
+        - matchPattern: '*'
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+
diff --git a/examples/kubernetes/connectivity-check/connectivity-check.yaml b/examples/kubernetes/connectivity-check/connectivity-check.yaml
index 1722bb8f0dcb..33d5e1c940b4 100644
--- a/examples/kubernetes/connectivity-check/connectivity-check.yaml
+++ b/examples/kubernetes/connectivity-check/connectivity-check.yaml
@@ -1,96 +1,111 @@
-# Automatically generated by Makefile. DO NOT EDIT
-apiVersion: v1
-kind: Service
-metadata:
-  name: echo-a
-spec:
-  type: ClusterIP
-  ports:
-  - port: 80
-  selector:
-    name: echo-a
 ---
-apiVersion: apps/v1
-kind: Deployment
 metadata:
   name: echo-a
+  labels:
+    name: echo-a
+    topology: any
+    component: network-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: echo-a
-  replicas: 1
   template:
     metadata:
       labels:
         name: echo-a
     spec:
+      hostNetwork: false
       containers:
-      - name: echo-container
+      - name: echo-a-container
+        ports:
+        - containerPort: 80
         image: docker.io/cilium/json-mock:1.2
         imagePullPolicy: IfNotPresent
         readinessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost"]
 ---
-apiVersion: v1
-kind: Service
-metadata:
-  name: echo-b
-spec:
-  type: NodePort
-  ports:
-  - port: 80
-    nodePort: 31313
-  selector:
-    name: echo-b
 ---
-apiVersion: v1
-kind: Service
-metadata:
-  name: echo-b-headless
-spec:
-  type: ClusterIP
-  clusterIP: None
-  ports:
-  - port: 80
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
   selector:
-    name: echo-b
 ---
+    matchLabels:
+      name: echo-a
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
   name: echo-b
+  labels:
+    name: echo-b
+    topology: any
+    component: services-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: echo-b
-  replicas: 1
   template:
     metadata:
       labels:
         name: echo-b
     spec:
+      hostNetwork: false
       containers:
-      - name: echo-container
-        image: docker.io/cilium/json-mock:1.2
-        imagePullPolicy: IfNotPresent
+      - name: echo-b-container
         ports:
         - containerPort: 80
           hostPort: 40000
+        image: docker.io/cilium/json-mock:1.2
+        imagePullPolicy: IfNotPresent
         readinessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost"]
----
-# The echo-b-host pod runs in host networking on the same node as echo-b.
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost
+  selector:
+    matchLabels:
+      name: echo-b
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
   name: echo-b-host
+  labels:
+    name: echo-b-host
+    topology: any
+    component: services-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: echo-b-host
-  replicas: 1
   template:
     metadata:
       labels:
@@ -98,15 +113,35 @@ spec:
     spec:
       hostNetwork: true
       containers:
-      - name: echo-container
-        image: docker.io/cilium/json-mock:1.2
-        imagePullPolicy: IfNotPresent
+      - name: echo-b-host-container
         env:
         - name: PORT
           value: "41000"
+        ports: []
+        image: docker.io/cilium/json-mock:1.2
+        imagePullPolicy: IfNotPresent
         readinessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost:41000"]
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost:41000
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - localhost:41000
       affinity:
         podAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
@@ -116,289 +151,313 @@ spec:
               operator: In
               values:
               - echo-b
-            topologyKey: "kubernetes.io/hostname"
----
-# Connecting to "echo-b-host-headless" will provide service discovery to the
-# node IP on which echo-b* is running
-apiVersion: v1
-kind: Service
-metadata:
-  name: echo-b-host-headless
-spec:
-  type: ClusterIP
-  clusterIP: None
+            topologyKey: kubernetes.io/hostname
   selector:
-    name: echo-b-host
----
+    matchLabels:
+      name: echo-b-host
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: host-to-b-multi-node-clusterip
+  name: pod-to-a
+  labels:
+    name: pod-to-a
+    topology: any
+    component: network-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: host-to-b-multi-node-clusterip
-  replicas: 1
   template:
     metadata:
       labels:
-        name: host-to-b-multi-node-clusterip
+        name: pod-to-a
     spec:
-      hostNetwork: true
-      dnsPolicy: ClusterFirstWithHostNet
+      hostNetwork: false
       containers:
-      - name: host-to-b-multi-node-container
-        imagePullPolicy: IfNotPresent
+      - name: pod-to-a-container
+        ports: []
         image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
         livenessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b"]
-        affinity:
-          podAntiAffinity:
-            requiredDuringSchedulingIgnoredDuringExecution:
-            - labelSelector:
-                matchExpressions:
-                - key: name
-                  operator: In
-                  values:
-                  - echo-b
-              topologyKey: "kubernetes.io/hostname"
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: host-to-b-multi-node-headless
-spec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
   selector:
     matchLabels:
-      name: host-to-b-multi-node-headless
+      name: pod-to-a
   replicas: 1
-  template:
-    metadata:
-      labels:
-        name: host-to-b-multi-node-headless
-    spec:
-      hostNetwork: true
-      dnsPolicy: ClusterFirstWithHostNet
-      containers:
-      - name: host-to-b-multi-node-container
-        imagePullPolicy: IfNotPresent
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-headless"]
-        affinity:
-          podAntiAffinity:
-            requiredDuringSchedulingIgnoredDuringExecution:
-            - labelSelector:
-                matchExpressions:
-                - key: name
-                  operator: In
-                  values:
-                  - echo-b
-              topologyKey: "kubernetes.io/hostname"
----
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-a-allowed-cnp
+  name: pod-to-external-1111
+  labels:
+    name: pod-to-external-1111
+    topology: any
+    component: network-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: pod-to-a-allowed-cnp
-  replicas: 1
   template:
     metadata:
       labels:
-        name: pod-to-a-allowed-cnp
+        name: pod-to-external-1111
     spec:
+      hostNetwork: false
       containers:
-      - name: pod-to-a-allowed-cnp-container
+      - name: pod-to-external-1111-container
+        ports: []
         image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
         imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
         readinessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
----
-apiVersion: "cilium.io/v2"
-kind: CiliumNetworkPolicy
-metadata:
-  name: "pod-to-a-allowed-cnp"
-spec:
-  endpointSelector:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - 1.1.1.1
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - 1.1.1.1
+  selector:
     matchLabels:
-      name: pod-to-a-allowed-cnp
-  egress:
-  - toEndpoints:
-    - matchLabels:
-        name: echo-a
-    toPorts:
-    - ports:
-      - port: "80"
-        protocol: TCP
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: kube-system
-        k8s:k8s-app: kube-dns
-    toPorts:
-    - ports:
-      - port: "53"
-        protocol: UDP
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: openshift-dns
-        k8s:dns.operator.openshift.io/daemonset-dns: default
-    toPorts:
-    - ports:
-      - port: "5353"
-        protocol: UDP
----
+      name: pod-to-external-1111
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-a-l3-denied-cnp
+  name: pod-to-a-denied-cnp
+  labels:
+    name: pod-to-a-denied-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: pod-to-a-l3-denied-cnp
-  replicas: 1
   template:
     metadata:
       labels:
-        name: pod-to-a-l3-denied-cnp
+        name: pod-to-a-denied-cnp
     spec:
+      hostNetwork: false
       containers:
-      - name: pod-to-a-l3-denied-cnp-container
+      - name: pod-to-a-denied-cnp-container
+        ports: []
         image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
         imagePullPolicy: IfNotPresent
-        livenessProbe:
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
           timeoutSeconds: 7
           exec:
-            command: ["ash", "-c", "! curl -sS --fail --connect-timeout 5 -o /dev/null echo-a"]
-        readinessProbe:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-a/private'
+        livenessProbe:
           timeoutSeconds: 7
           exec:
-            command: ["ash", "-c", "! curl -sS --fail --connect-timeout 5 -o /dev/null echo-a"]
----
-apiVersion: "cilium.io/v2"
-kind: CiliumNetworkPolicy
-metadata:
-  name: "pod-to-a-l3-denied-cnp"
-spec:
-  endpointSelector:
+            command:
+            - ash
+            - -c
+            - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-a/private'
+  selector:
     matchLabels:
-      name: pod-to-a-l3-denied-cnp
-  egress:
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: kube-system
-        k8s:k8s-app: kube-dns
-    toPorts:
-    - ports:
-      - port: "53"
-        protocol: UDP
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: openshift-dns
-        k8s:dns.operator.openshift.io/daemonset-dns: default
-    toPorts:
-    - ports:
-      - port: "5353"
-        protocol: UDP
----
+      name: pod-to-a-denied-cnp
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-a
+  name: pod-to-a-allowed-cnp
+  labels:
+    name: pod-to-a-allowed-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: pod-to-a
-  replicas: 1
   template:
     metadata:
       labels:
-        name: pod-to-a
+        name: pod-to-a-allowed-cnp
     spec:
+      hostNetwork: false
       containers:
-      - name: pod-to-a-container
+      - name: pod-to-a-allowed-cnp-container
+        ports: []
         image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
         imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
         livenessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
----
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-a/public
+  selector:
+    matchLabels:
+      name: pod-to-a-allowed-cnp
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-b-intra-node-nodeport
+  name: pod-to-external-fqdn-allow-google-cnp
+  labels:
+    name: pod-to-external-fqdn-allow-google-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
 spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      name: pod-to-b-intra-node-nodeport
   template:
     metadata:
       labels:
-        name: pod-to-b-intra-node-nodeport
+        name: pod-to-external-fqdn-allow-google-cnp
     spec:
-      affinity:
-        podAffinity:
-          requiredDuringSchedulingIgnoredDuringExecution:
-          - labelSelector:
-              matchExpressions:
-              - key: name
-                operator: In
-                values:
-                - echo-b
-            topologyKey: kubernetes.io/hostname
+      hostNetwork: false
       containers:
-      - command:
+      - name: pod-to-external-fqdn-allow-google-cnp-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
         - /bin/ash
         - -c
         - sleep 1000000000
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ]
         readinessProbe:
           exec:
-            command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ]
-        name: pod-to-b-intra-node-hostport
----
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - www.google.com
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - www.google.com
+  selector:
+    matchLabels:
+      name: pod-to-external-fqdn-allow-google-cnp
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-b-intra-node
+  name: pod-to-b-multi-node-clusterip
+  labels:
+    name: pod-to-b-multi-node-clusterip
+    topology: multi-node
+    component: services-check
+    quarantine: "false"
 spec:
-  selector:
-    matchLabels:
-      name: pod-to-b-intra-node
-  replicas: 1
   template:
     metadata:
       labels:
-        name: pod-to-b-intra-node
+        name: pod-to-b-multi-node-clusterip
     spec:
+      hostNetwork: false
       containers:
-      - name: pod-to-b-intra-node-container
+      - name: pod-to-b-multi-node-clusterip-container
+        ports: []
         image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
         imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b/public
         livenessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b"]
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b/public
       affinity:
-        podAffinity:
+        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
@@ -406,30 +465,59 @@ spec:
                 operator: In
                 values:
                 - echo-b
-            topologyKey: "kubernetes.io/hostname"
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: pod-to-b-multi-node-clusterip
-spec:
+            topologyKey: kubernetes.io/hostname
   selector:
     matchLabels:
       name: pod-to-b-multi-node-clusterip
   replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: pod-to-b-multi-node-headless
+  labels:
+    name: pod-to-b-multi-node-headless
+    topology: multi-node
+    component: services-check
+    quarantine: "false"
+spec:
   template:
     metadata:
       labels:
-        name: pod-to-b-multi-node-clusterip
+        name: pod-to-b-multi-node-headless
     spec:
+      hostNetwork: false
       containers:
-      - name: pod-to-b-multi-node-container
+      - name: pod-to-b-multi-node-headless-container
+        ports: []
         image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
         imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b-headless/public
         livenessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b"]
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b-headless/public
       affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
@@ -439,30 +527,59 @@ spec:
                 operator: In
                 values:
                 - echo-b
-            topologyKey: "kubernetes.io/hostname"
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: pod-to-b-multi-node-headless
-spec:
+            topologyKey: kubernetes.io/hostname
   selector:
     matchLabels:
       name: pod-to-b-multi-node-headless
   replicas: 1
+apiVersion: apps/v1
+kind: Deployment
+---
+metadata:
+  name: host-to-b-multi-node-clusterip
+  labels:
+    name: host-to-b-multi-node-clusterip
+    topology: multi-node
+    component: services-check
+    quarantine: "false"
+spec:
   template:
     metadata:
       labels:
-        name: pod-to-b-multi-node-headless
+        name: host-to-b-multi-node-clusterip
     spec:
+      hostNetwork: true
       containers:
-      - name: pod-to-b-multi-node-container
+      - name: host-to-b-multi-node-clusterip-container
+        ports: []
         image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
         imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b/public
         livenessProbe:
           exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-headless"]
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b/public
       affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
@@ -472,22 +589,60 @@ spec:
                 operator: In
                 values:
                 - echo-b
-            topologyKey: "kubernetes.io/hostname"
----
+            topologyKey: kubernetes.io/hostname
+      dnsPolicy: ClusterFirstWithHostNet
+  selector:
+    matchLabels:
+      name: host-to-b-multi-node-clusterip
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-b-multi-node-nodeport
+  name: host-to-b-multi-node-headless
+  labels:
+    name: host-to-b-multi-node-headless
+    topology: multi-node
+    component: services-check
+    quarantine: "false"
 spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      name: pod-to-b-multi-node-nodeport
   template:
     metadata:
       labels:
-        name: pod-to-b-multi-node-nodeport
+        name: host-to-b-multi-node-headless
     spec:
+      hostNetwork: true
+      containers:
+      - name: host-to-b-multi-node-headless-container
+        ports: []
+        image: docker.io/byrnedo/alpine-curl:0.1.8
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/ash
+        - -c
+        - sleep 1000000000
+        readinessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b-headless/public
+        livenessProbe:
+          exec:
+            command:
+            - curl
+            - -sS
+            - --fail
+            - --connect-timeout
+            - "5"
+            - -o
+            - /dev/null
+            - echo-b-headless/public
       affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
@@ -498,104 +653,185 @@ spec:
                 values:
                 - echo-b
                 topologyKey: kubernetes.io/hostname
-      containers:
-      - command:
-        - /bin/ash
-        - -c
-        - sleep 1000000000
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ]
-        readinessProbe:
-          exec:
-            command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ]
-        name: pod-to-b-multi-node-nodeport
----
+      dnsPolicy: ClusterFirstWithHostNet
+  selector:
+    matchLabels:
+      name: host-to-b-multi-node-headless
+  replicas: 1
 apiVersion: apps/v1
 kind: Deployment
+---
 metadata:
-  name: pod-to-a-external-1111
+  name: echo-a
+  labels:
+    name: echo-a
+    topology: any
+    component: network-check
+    quarantine: "false"
 spec:
+  ports:
+  - port: 80
   selector:
-    matchLabels:
-      name: pod-to-a-external-1111
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: pod-to-a-external-1111
-    spec:
-      containers:
-      - name: pod-to-a-external-1111-container
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "1.1.1.1"]
-        readinessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "1.1.1.1"]
+    name: echo-a
+  type: ClusterIP
+apiVersion: v1
+kind: Service
 ---
-apiVersion: apps/v1
-kind: Deployment
 metadata:
-  name: pod-to-external-fqdn-allow-google-cnp
+  name: echo-b
+  labels:
+    name: echo-b
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+    nodePort: 31313
+  selector:
+    name: echo-b
+  type: NodePort
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: echo-b-headless
+  labels:
+    name: echo-b-headless
+    topology: any
+    component: services-check
+    quarantine: "false"
+spec:
+  ports:
+  - port: 80
+  selector:
+    name: echo-b
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: echo-b-host-headless
+  labels:
+    name: echo-b-host-headless
+    topology: any
+    component: services-check
+    quarantine: "false"
 spec:
+  ports: []
   selector:
+    name: echo-b-host
+  type: ClusterIP
+  clusterIP: None
+apiVersion: v1
+kind: Service
+---
+metadata:
+  name: pod-to-a-denied-cnp
+  labels:
+    name: pod-to-a-denied-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
     matchLabels:
-      name: pod-to-external-fqdn-allow-google-cnp
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: pod-to-external-fqdn-allow-google-cnp
-    spec:
-      containers:
-      - name: pod-to-external-fqdn-allow-google-cnp-container
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "www.google.com"]
-        readinessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "www.google.com"]
+      name: pod-to-a-denied-cnp
+  egress:
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
 ---
-apiVersion: "cilium.io/v2"
+metadata:
+  name: pod-to-a-allowed-cnp
+  labels:
+    name: pod-to-a-allowed-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
+spec:
+  endpointSelector:
+    matchLabels:
+      name: pod-to-a-allowed-cnp
+  egress:
+  - toPorts:
+    - ports:
+      - port: "80"
+        protocol: TCP
+    toEndpoints:
+    - matchLabels:
+        name: echo-a
+  - toPorts:
+    - ports:
+      - port: "53"
+        protocol: ANY
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
+    - ports:
+      - port: "5353"
+        protocol: UDP
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
 kind: CiliumNetworkPolicy
+---
 metadata:
-  name: "pod-to-external-fqdn-allow-google-cnp"
+  name: pod-to-external-fqdn-allow-google-cnp
+  labels:
+    name: pod-to-external-fqdn-allow-google-cnp
+    topology: any
+    component: policy-check
+    quarantine: "false"
 spec:
   endpointSelector:
     matchLabels:
       name: pod-to-external-fqdn-allow-google-cnp
   egress:
-  - toEndpoints:
-    - matchLabels:
-        "k8s:io.kubernetes.pod.namespace": kube-system
-        "k8s:k8s-app": kube-dns
-    toPorts:
+  - toFQDNs:
+    - matchPattern: '*.google.com'
+  - toPorts:
     - ports:
       - port: "53"
         protocol: ANY
       rules:
         dns:
-        - matchPattern: "*"
-  - toEndpoints:
+        - matchPattern: '*'
     toEndpoints:
     - matchLabels:
-        k8s:io.kubernetes.pod.namespace: openshift-dns
-        k8s:dns.operator.openshift.io/daemonset-dns: default
-    toPorts:
+        k8s:io.kubernetes.pod.namespace: kube-system
+        k8s:k8s-app: kube-dns
+  - toPorts:
     - ports:
       - port: "5353"
         protocol: UDP
       rules:
         dns:
-        - matchPattern: "*"
-  - toFQDNs:
-    - matchPattern: "*.google.com"
----
+        - matchPattern: '*'
+    toEndpoints:
+    - matchLabels:
+        k8s:io.kubernetes.pod.namespace: openshift-dns
+        k8s:dns.operator.openshift.io/daemonset-dns: default
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+
diff --git a/examples/kubernetes/connectivity-check/cue.mod/module.cue b/examples/kubernetes/connectivity-check/cue.mod/module.cue
new file mode 100644
index 000000000000..f8af9cef913e
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/cue.mod/module.cue
@@ -0,0 +1 @@
+module: ""
diff --git a/examples/kubernetes/connectivity-check/defaults.cue b/examples/kubernetes/connectivity-check/defaults.cue
new file mode 100644
index 000000000000..d017abc44670
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/defaults.cue
@@ -0,0 +1,63 @@
+package connectivity_check
+
+// Default parameters for echo clients (may be overridden).
+deployment: [ID=_]: {
+	// General pod parameters
+	if ID =~ "^pod-to-.*$" || ID =~ "^host-to-.*$" {
+		_image: "docker.io/byrnedo/alpine-curl:0.1.8"
+		_command: ["/bin/ash", "-c", "sleep 1000000000"]
+	}
+
+	// readinessProbe target name
+	if ID =~ "^pod-to-a.*$" || ID =~ "^host-to-a.*$" {
+		_probeTarget: *"echo-a" | string
+	}
+	if ID =~ "^pod-to-b.*$" || ID =~ "^host-to-b.*$" {
+		_probeTarget: *"echo-b" | string
+	}
+	if ID =~ "^pod-to-c.*$" || ID =~ "^host-to-c.*$" {
+		_probeTarget: *"echo-c" | string
+	}
+}
+
+// Default parameters for echo clients (final).
+deployment: [ID=_]: {
+	// Topology
+	if ID =~ "^.*intra-node.*$" {
+		metadata: labels: topology: "intra-node"
+	}
+	if ID =~ "^.*multi-node.*$" {
+		metadata: labels: topology: "multi-node"
+	}
+
+	// Affinity
+	if ID =~ "^.*to-a-intra-node-.*$" {
+		_affinity: "echo-a"
+	}
+	if ID =~ "^.*to-a-multi-node-.*$" {
+		_antiAffinity: "echo-a"
+	}
+	if ID =~ "^.*to-b-intra-node-.*$" {
+		_affinity: "echo-b"
+	}
+	if ID =~ "^.*to-b-multi-node-.*$" {
+		_antiAffinity: "echo-b"
+	}
+	if ID =~ "^.*to-c-intra-node-.*$" {
+		_affinity: "echo-c"
+	}
+	if ID =~ "^.*to-c-multi-node-.*$" {
+		_antiAffinity: "echo-c"
+	}
+}
+
+// Default parameters for policies.
+egressCNP: [ID=_]: {
+	// Topology
+	if ID =~ "^.*intra-node.*$" {
+		metadata: labels: topology: "intra-node"
+	}
+	if ID =~ "^.*multi-node.*$" {
+		metadata: labels: topology: "multi-node"
+	}
+}
diff --git a/examples/kubernetes/connectivity-check/dump_tool.cue b/examples/kubernetes/connectivity-check/dump_tool.cue
new file mode 100644
index 000000000000..e9df4356f134
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/dump_tool.cue
@@ -0,0 +1,15 @@
+package connectivity_check
+
+import (
+	"encoding/yaml"
+	"tool/cli"
+)
+
+command: dump: ccCommand & {
+	usage: "cue \(globalFlags) dump"
+	short: "Generate connectivity-check YAMLs from the cuelang scripts"
+
+	task: print: cli.Print & {
+		text: "---\n" + yaml.MarshalStream(task.filter.resources)
+	}
+}
diff --git a/examples/kubernetes/connectivity-check/echo-a.yaml b/examples/kubernetes/connectivity-check/echo-a.yaml
deleted file mode 100644
index e943ae2a26c3..000000000000
--- a/examples/kubernetes/connectivity-check/echo-a.yaml
+++ /dev/null
@@ -1,32 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: echo-a
-spec:
-  type: ClusterIP
-  ports:
-  - port: 80
-  selector:
-    name: echo-a
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: echo-a
-spec:
-  selector:
-    matchLabels:
-      name: echo-a
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: echo-a
-    spec:
-      containers:
-      - name: echo-container
-        image: docker.io/cilium/json-mock:1.2
-        imagePullPolicy: IfNotPresent
-        readinessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost"]
diff --git a/examples/kubernetes/connectivity-check/echo-b.yaml b/examples/kubernetes/connectivity-check/echo-b.yaml
deleted file mode 100644
index 973ca03444c7..000000000000
--- a/examples/kubernetes/connectivity-check/echo-b.yaml
+++ /dev/null
@@ -1,97 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: echo-b
-spec:
-  type: NodePort
-  ports:
-  - port: 80
-    nodePort: 31313
-  selector:
-    name: echo-b
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: echo-b-headless
-spec:
-  type: ClusterIP
-  clusterIP: None
-  ports:
-  - port: 80
-  selector:
-    name: echo-b
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: echo-b
-spec:
-  selector:
-    matchLabels:
-      name: echo-b
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: echo-b
-    spec:
-      containers:
-      - name: echo-container
-        image: docker.io/cilium/json-mock:1.2
-        imagePullPolicy: IfNotPresent
-        ports:
-        - containerPort: 80
-          hostPort: 40000
-        readinessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost"]
----
-# The echo-b-host pod runs in host networking on the same node as echo-b.
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: echo-b-host
-spec:
-  selector:
-    matchLabels:
-      name: echo-b-host
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: echo-b-host
-    spec:
-      hostNetwork: true
-      containers:
-      - name: echo-container
-        image: docker.io/cilium/json-mock:1.2
-        imagePullPolicy: IfNotPresent
-        env:
-        - name: PORT
-          value: "41000"
-        readinessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost:41000"]
-      affinity:
-        podAffinity:
-          requiredDuringSchedulingIgnoredDuringExecution:
-          - labelSelector:
-              matchExpressions:
-              - key: name
-                operator: In
-                values:
-                - echo-b
-            topologyKey: "kubernetes.io/hostname"
----
-# Connecting to "echo-b-host-headless" will provide service discovery to the
-# node IP on which echo-b* is running
-apiVersion: v1
-kind: Service
-metadata:
-  name: echo-b-host-headless
-spec:
-  type: ClusterIP
-  clusterIP: None
-  selector:
-    name: echo-b-host
diff --git a/examples/kubernetes/connectivity-check/echo-servers.cue b/examples/kubernetes/connectivity-check/echo-servers.cue
new file mode 100644
index 000000000000..77b698dfc705
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/echo-servers.cue
@@ -0,0 +1,75 @@
+package connectivity_check
+
+// Default parameters for echo servers (may be overridden).
+_echoDeployment: {
+	_image: "docker.io/cilium/json-mock:1.2"
+	_probeTarget: *"localhost" | string
+	_probePath: ""
+}
+
+_echoDeploymentWithHostPort: _echoDeployment & {
+	_exposeHeadless: true
+
+	spec: template: spec: hostNetwork: true
+}
+
+// Regular service exposed via ClusterIP.
+deployment: "echo-a": _echoDeployment & {
+	_exposeClusterIP: true
+	metadata: labels: component: "network-check"
+	spec: template: spec: containers: [{ports: [{_expose: true, containerPort: 80}]}]
+}
+
+// Service exposed via NodePort + headless svc.
+deployment: "echo-b": _echoDeployment & {
+	_exposeNodePort: true
+	_exposeHeadless: true
+	_nodePort: 31313
+
+	metadata: labels: component: "services-check"
+	spec: template: spec: containers: [{ports: [{_expose: true, containerPort: 80, hostPort: 40000}]}]
+}
+// Expose hostport by deploying a host pod and adding a headless service with no port.
+deployment: "echo-b-host": _echoDeploymentWithHostPort & {
+	_serverPort: "41000"
+	_affinity: "echo-b"
+
+	metadata: labels: component: "services-check"
+}
+
+ingressL7Policy: {
+	_allowDNS: true
+	_port: *"80" | string
+	_rules: [{
+		toPorts: [{
+			ports: [{
+				port: _port
+				protocol: "TCP"
+			}]
+			rules:
+				http: [{
+					method: "GET"
+					path: "/public$"
+				}]
+		}]
+	}]
+
+	metadata: labels: component: "proxy-check"
+}
+
+// Service with policy applied.
+deployment: "echo-c": _echoDeployment & {
+	_exposeClusterIP: true
+	_exposeHeadless: true
+
+	metadata: labels: component: "proxy-check"
+	spec: template: spec: containers: [{ports: [{_expose: true, containerPort: 80, hostPort: 40001}]}]
+}
+ingressCNP: "echo-c": ingressL7Policy & {}
+// Expose hostport by deploying a host pod and adding a headless service with no port.
+// No ingress policy will apply in this case.
+deployment: "echo-c-host": _echoDeploymentWithHostPort & {
+	_serverPort: "41001"
+	_affinity: "echo-c"
+	metadata: labels: component: "proxy-check"
+}
diff --git a/examples/kubernetes/connectivity-check/host-to-b-multi-node-clusterip.yaml b/examples/kubernetes/connectivity-check/host-to-b-multi-node-clusterip.yaml
deleted file mode 100644
index 78c5a68f78b4..000000000000
--- a/examples/kubernetes/connectivity-check/host-to-b-multi-node-clusterip.yaml
+++ /dev/null
@@ -1,34 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: host-to-b-multi-node-clusterip
-spec:
-  selector:
-    matchLabels:
-      name: host-to-b-multi-node-clusterip
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: host-to-b-multi-node-clusterip
-    spec:
-      hostNetwork: true
-      dnsPolicy: ClusterFirstWithHostNet
-      containers:
-      - name: host-to-b-multi-node-container
-        imagePullPolicy: IfNotPresent
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b"]
-      affinity:
-        podAntiAffinity:
-          requiredDuringSchedulingIgnoredDuringExecution:
-          - labelSelector:
-              matchExpressions:
-              - key: name
-                operator: In
-                values:
-                - echo-b
-            topologyKey: "kubernetes.io/hostname"
diff --git a/examples/kubernetes/connectivity-check/host-to-b-multi-node-headless.yaml b/examples/kubernetes/connectivity-check/host-to-b-multi-node-headless.yaml
deleted file mode 100644
index aa61f59967dc..000000000000
--- a/examples/kubernetes/connectivity-check/host-to-b-multi-node-headless.yaml
+++ /dev/null
@@ -1,34 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: host-to-b-multi-node-headless
-spec:
-  selector:
-    matchLabels:
-      name: host-to-b-multi-node-headless
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: host-to-b-multi-node-headless
-    spec:
-      hostNetwork: true
-      dnsPolicy: ClusterFirstWithHostNet
-      containers:
-      - name: host-to-b-multi-node-container
-        imagePullPolicy: IfNotPresent
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-headless"]
-      affinity:
-        podAntiAffinity:
-          requiredDuringSchedulingIgnoredDuringExecution:
-          - labelSelector:
-              matchExpressions:
-              - key: name
-                operator: In
-                values:
-                - echo-b
-            topologyKey: "kubernetes.io/hostname"
diff --git a/examples/kubernetes/connectivity-check/ls_tool.cue b/examples/kubernetes/connectivity-check/ls_tool.cue
new file mode 100644
index 000000000000..5bddae3b7823
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/ls_tool.cue
@@ -0,0 +1,20 @@
+package connectivity_check
+
+import (
+	"text/tabwriter"
+	"tool/cli"
+)
+
+command: ls: ccCommand & {
+	usage: "cue \(globalFlags) ls"
+	short: "List connectivity-check resources specified in this directory"
+
+	task: print: cli.Print & {
+		header: ["KIND \tCOMPONENT \tTOPOLOGY \tNAME", ...]
+		text: tabwriter.Write(header + [
+			for x in task.filter.resources {
+				"\(x.kind) \t\(x.metadata.labels.component) \t\(x.metadata.labels.topology) \t\(x.metadata.name)"
+			},
+		])
+	}
+}
diff --git a/examples/kubernetes/connectivity-check/main_tool.cue b/examples/kubernetes/connectivity-check/main_tool.cue
new file mode 100644
index 000000000000..c31ce12c8ef5
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/main_tool.cue
@@ -0,0 +1,95 @@
+package connectivity_check
+
+import (
+	"list"
+	"text/tabwriter"
+	"tool/cli"
+)
+
+objects: [ for v in objectSets for x in v {x}]
+
+objectSets: [
+	deployment,
+	service,
+	egressCNP,
+	ingressCNP,
+]
+
+globalFlags: "[-t component=<component>] [-t kind=<kind>] [-t name=<name>] [-t topology=<topology>] [-t quarantine=true]"
+
+ccCommand: {
+	#flags: {
+		component: "all" | *"default" | "network" | "policy" | "services" | "hostport" | "proxy" @tag(component,short=all|default|network|policy|services|hostport|proxy)
+		name: *"" | string @tag(name)
+		topology: *"any" | "single-node" @tag(topology,short=any|single-node)
+		kind: *"" | "Deployment" | "Service" | "CiliumNetworkPolicy" @tag(kind,short=Deployment|Service|CiliumNetworkPolicy)
+		quarantine: *"false" | "true" @tag(quarantine,short=false|true)
+	}
+
+	task: filterComponent: {
+		if #flags.component == "all" {
+			resources: objects
+		}
+		defaultExclusions: [ "hostport-check", "proxy-check"]
+		if #flags.component == "default" {
+			resources: [ for x in objects if !list.Contains(defaultExclusions, x.metadata.labels.component) {x}]
+		}
+		if #flags.component != "all" && #flags.component != "default" {
+			resources: [ for x in objects if x.metadata.labels.component == "\(#flags.component)-check" {x}]
+		}
+	}
+
+	task: filterQuarantine: {
+		resources: [ for x in task.filterComponent.resources if x.metadata.labels.quarantine == #flags.quarantine {x}]
+	}
+
+	task: filterTopology: {
+		if #flags.topology == "any" {
+			resources: task.filterQuarantine.resources
+		}
+		if #flags.topology == "single-node" {
+			resources: [ for x in task.filterQuarantine.resources if x.metadata.labels.topology != "multi-node" {x}]
+		}
+	}
+
+	task: filterKind: {
+		if #flags.kind == "" {
+			resources: task.filterTopology.resources
+		}
+		if #flags.kind != "" {
+			resources: [ for x in task.filterTopology.resources if x.kind == #flags.kind {x}]
+		}
+	}
+
+	task: filterName: {
+		if #flags.name == "" {
+			resources: task.filterKind.resources
+		}
+		if #flags.name != "" {
+			resources: [ for x in task.filterKind.resources if x.metadata.labels.name == #flags.name {x}]
+		}
+	}
+
+	task: filter: {
+		resources: task.filterName.resources
+	}
+}
+
+command: help: ccCommand & {
+	usage: "cue \(globalFlags) <command>"
+	short: "List connectivity-check resources specified in this directory"
+
+	task: print: cli.Print & {
+		helpText: [
+			short,
+			"",
+			"Usage:",
+			" \(usage)",
+			"",
+			"Available Commands:",
+			" dump\t\t\t\(command.dump.short)",
+			" ls \t\t\t\(command.ls.short)",
+		]
+		text: tabwriter.Write(helpText)
+	}
+}
diff --git a/examples/kubernetes/connectivity-check/network.cue b/examples/kubernetes/connectivity-check/network.cue
new file mode 100644
index 000000000000..f0b4842d93b4
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/network.cue
@@ -0,0 +1,14 @@
+package connectivity_check
+
+// deployment (defaults.cue) implicitly configures the deployments below such
+// that deployments with names matching 'pod-to-<X>', '*-[intra|multi]-node'
+// and '*-headless' will contact the related echo server via the related
+// service and will be scheduled with affinity / anti-affinity to that server.
+_networkCheck: {
+	metadata: labels: component: "network-check"
+}
+deployment: "pod-to-a": _networkCheck
+deployment: "pod-to-external-1111": _networkCheck & {
+	_probeTarget: "1.1.1.1"
+	_probePath: ""
+}
diff --git a/examples/kubernetes/connectivity-check/pod-to-a-allowed.yaml b/examples/kubernetes/connectivity-check/pod-to-a-allowed.yaml
deleted file mode 100644
index 49e25c994404..000000000000
--- a/examples/kubernetes/connectivity-check/pod-to-a-allowed.yaml
+++ /dev/null
@@ -1,58 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: pod-to-a-allowed-cnp
-spec:
-  selector:
-    matchLabels:
-      name: pod-to-a-allowed-cnp
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: pod-to-a-allowed-cnp
-    spec:
-      containers:
-      - name: pod-to-a-allowed-cnp-container
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
-        readinessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
----
-apiVersion: "cilium.io/v2"
-kind: CiliumNetworkPolicy
-metadata:
-  name: "pod-to-a-allowed-cnp"
-spec:
-  endpointSelector:
-    matchLabels:
-      name: pod-to-a-allowed-cnp
-  egress:
-  - toEndpoints:
-    - matchLabels:
-        name: echo-a
-    toPorts:
-    - ports:
-      - port: "80"
-        protocol: TCP
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: kube-system
-        k8s:k8s-app: kube-dns
-    toPorts:
-    - ports:
-      - port: "53"
-        protocol: UDP
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: openshift-dns
-        k8s:dns.operator.openshift.io/daemonset-dns: default
-    toPorts:
-    - ports:
-      - port: "5353"
-        protocol: UDP
diff --git a/examples/kubernetes/connectivity-check/pod-to-a-denied.yaml b/examples/kubernetes/connectivity-check/pod-to-a-denied.yaml
deleted file mode 100644
index e63e13d14c35..000000000000
--- a/examples/kubernetes/connectivity-check/pod-to-a-denied.yaml
+++ /dev/null
@@ -1,53 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: pod-to-a-l3-denied-cnp
-spec:
-  selector:
-    matchLabels:
-      name: pod-to-a-l3-denied-cnp
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: pod-to-a-l3-denied-cnp
-    spec:
-      containers:
-      - name: pod-to-a-l3-denied-cnp-container
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          timeoutSeconds: 7
-          exec:
-            command: ["ash", "-c", "! curl -sS --fail --connect-timeout 5 -o /dev/null echo-a"]
-        readinessProbe:
-          timeoutSeconds: 7
-          exec:
-            command: ["ash", "-c", "! curl -sS --fail --connect-timeout 5 -o /dev/null echo-a"]
----
-apiVersion: "cilium.io/v2"
-kind: CiliumNetworkPolicy
-metadata:
-  name: "pod-to-a-l3-denied-cnp"
-spec:
-  endpointSelector:
-    matchLabels:
-      name: pod-to-a-l3-denied-cnp
-  egress:
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: kube-system
-        k8s:k8s-app: kube-dns
-    toPorts:
-    - ports:
-      - port: "53"
-        protocol: UDP
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: openshift-dns
-        k8s:dns.operator.openshift.io/daemonset-dns: default
-    toPorts:
-    - ports:
-      - port: "5353"
-        protocol: UDP
diff --git a/examples/kubernetes/connectivity-check/pod-to-a.yaml b/examples/kubernetes/connectivity-check/pod-to-a.yaml
deleted file mode 100644
index f5b0ad89ecd8..000000000000
--- a/examples/kubernetes/connectivity-check/pod-to-a.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: pod-to-a
-spec:
-  selector:
-    matchLabels:
-      name: pod-to-a
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: pod-to-a
-    spec:
-      containers:
-      - name: pod-to-a-container
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
diff --git a/examples/kubernetes/connectivity-check/pod-to-b-intra-node-hostport.yaml b/examples/kubernetes/connectivity-check/pod-to-b-intra-node-hostport.yaml
deleted file mode 100644
index c8a90fc1097d..000000000000
--- a/examples/kubernetes/connectivity-check/pod-to-b-intra-node-hostport.yaml
+++ /dev/null
@@ -1,38 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: pod-to-b-intra-node-hostport
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      name: pod-to-b-intra-node-hostport
-  template:
-    metadata:
-      labels:
-        name: pod-to-b-intra-node-hostport
-    spec:
-      affinity:
-        podAffinity:
-          requiredDuringSchedulingIgnoredDuringExecution:
-          - labelSelector:
-
matchExpressions: - - key: name - operator: In - values: - - echo-b - topologyKey: kubernetes.io/hostname - containers: - - command: - - /bin/ash - - -c - - sleep 1000000000 - image: docker.io/byrnedo/alpine-curl:0.1.8 - imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:40000" ] - readinessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:40000" ] - name: pod-to-b-intra-node-hostport diff --git a/examples/kubernetes/connectivity-check/pod-to-b-intra-node-nodeport.yaml b/examples/kubernetes/connectivity-check/pod-to-b-intra-node-nodeport.yaml deleted file mode 100644 index 934e19e8572f..000000000000 --- a/examples/kubernetes/connectivity-check/pod-to-b-intra-node-nodeport.yaml +++ /dev/null @@ -1,38 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pod-to-b-intra-node-nodeport -spec: - replicas: 1 - selector: - matchLabels: - name: pod-to-b-intra-node-nodeport - template: - metadata: - labels: - name: pod-to-b-intra-node-nodeport - spec: - affinity: - podAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: name - operator: In - values: - - echo-b - topologyKey: kubernetes.io/hostname - containers: - - command: - - /bin/ash - - -c - - sleep 1000000000 - image: docker.io/byrnedo/alpine-curl:0.1.8 - imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ] - readinessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ] - name: pod-to-b-intra-node-hostport diff --git a/examples/kubernetes/connectivity-check/pod-to-b-intra-node.yaml b/examples/kubernetes/connectivity-check/pod-to-b-intra-node.yaml deleted file mode 100644 index 482d21569b03..000000000000 --- a/examples/kubernetes/connectivity-check/pod-to-b-intra-node.yaml +++ /dev/null @@ 
-1,32 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pod-to-b-intra-node -spec: - selector: - matchLabels: - name: pod-to-b-intra-node - replicas: 1 - template: - metadata: - labels: - name: pod-to-b-intra-node - spec: - containers: - - name: pod-to-b-intra-node-container - image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] - imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b"] - affinity: - podAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: name - operator: In - values: - - echo-b - topologyKey: "kubernetes.io/hostname" diff --git a/examples/kubernetes/connectivity-check/pod-to-b-multi-node-clusterip.yaml b/examples/kubernetes/connectivity-check/pod-to-b-multi-node-clusterip.yaml deleted file mode 100644 index bad9824d40ce..000000000000 --- a/examples/kubernetes/connectivity-check/pod-to-b-multi-node-clusterip.yaml +++ /dev/null @@ -1,32 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pod-to-b-multi-node-clusterip -spec: - selector: - matchLabels: - name: pod-to-b-multi-node-clusterip - replicas: 1 - template: - metadata: - labels: - name: pod-to-b-multi-node-clusterip - spec: - containers: - - name: pod-to-b-multi-node-container - image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] - imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b"] - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: name - operator: In - values: - - echo-b - topologyKey: "kubernetes.io/hostname" diff --git a/examples/kubernetes/connectivity-check/pod-to-b-multi-node-headless.yaml b/examples/kubernetes/connectivity-check/pod-to-b-multi-node-headless.yaml deleted file mode 100644 index 
e443bb51e3d0..000000000000 --- a/examples/kubernetes/connectivity-check/pod-to-b-multi-node-headless.yaml +++ /dev/null @@ -1,32 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pod-to-b-multi-node-headless -spec: - selector: - matchLabels: - name: pod-to-b-multi-node-headless - replicas: 1 - template: - metadata: - labels: - name: pod-to-b-multi-node-headless - spec: - containers: - - name: pod-to-b-multi-node-container - image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] - imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-headless"] - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: name - operator: In - values: - - echo-b - topologyKey: "kubernetes.io/hostname" diff --git a/examples/kubernetes/connectivity-check/pod-to-b-multi-node-hostport.yaml b/examples/kubernetes/connectivity-check/pod-to-b-multi-node-hostport.yaml deleted file mode 100644 index d8667c7a241c..000000000000 --- a/examples/kubernetes/connectivity-check/pod-to-b-multi-node-hostport.yaml +++ /dev/null @@ -1,38 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pod-to-b-multi-node-hostport -spec: - replicas: 1 - selector: - matchLabels: - name: pod-to-b-multi-node-hostport - template: - metadata: - labels: - name: pod-to-b-multi-node-hostport - spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: name - operator: In - values: - - echo-b - topologyKey: kubernetes.io/hostname - containers: - - command: - - /bin/ash - - -c - - sleep 1000000000 - image: docker.io/byrnedo/alpine-curl:0.1.8 - imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:40000" ] - readinessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", 
"/dev/null", "echo-b-host-headless:40000" ] - name: pod-to-b-multi-node-hostport diff --git a/examples/kubernetes/connectivity-check/pod-to-b-multi-node-nodeport.yaml b/examples/kubernetes/connectivity-check/pod-to-b-multi-node-nodeport.yaml deleted file mode 100644 index a036723ddcfa..000000000000 --- a/examples/kubernetes/connectivity-check/pod-to-b-multi-node-nodeport.yaml +++ /dev/null @@ -1,38 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pod-to-b-multi-node-nodeport -spec: - replicas: 1 - selector: - matchLabels: - name: pod-to-b-multi-node-nodeport - template: - metadata: - labels: - name: pod-to-b-multi-node-nodeport - spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: name - operator: In - values: - - echo-b - topologyKey: kubernetes.io/hostname - containers: - - command: - - /bin/ash - - -c - - sleep 1000000000 - image: docker.io/byrnedo/alpine-curl:0.1.8 - imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ] - readinessProbe: - exec: - command: [ "curl", "-sS", "--fail", "-o", "/dev/null", "echo-b-host-headless:31313" ] - name: pod-to-b-multi-node-nodeport diff --git a/examples/kubernetes/connectivity-check/pod-to-external-1111.yaml b/examples/kubernetes/connectivity-check/pod-to-external-1111.yaml deleted file mode 100644 index a5f3732d984f..000000000000 --- a/examples/kubernetes/connectivity-check/pod-to-external-1111.yaml +++ /dev/null @@ -1,25 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pod-to-a-external-1111 -spec: - selector: - matchLabels: - name: pod-to-a-external-1111 - replicas: 1 - template: - metadata: - labels: - name: pod-to-a-external-1111 - spec: - containers: - - name: pod-to-a-external-1111-container - image: docker.io/byrnedo/alpine-curl:0.1.8 - command: ["/bin/ash", "-c", "sleep 1000000000"] - imagePullPolicy: IfNotPresent 
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "1.1.1.1"]
-        readinessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "1.1.1.1"]
diff --git a/examples/kubernetes/connectivity-check/pod-to-external-fqdn-allow-google.yaml b/examples/kubernetes/connectivity-check/pod-to-external-fqdn-allow-google.yaml
deleted file mode 100644
index be3a1fef2e5b..000000000000
--- a/examples/kubernetes/connectivity-check/pod-to-external-fqdn-allow-google.yaml
+++ /dev/null
@@ -1,59 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: pod-to-external-fqdn-allow-google-cnp
-spec:
-  selector:
-    matchLabels:
-      name: pod-to-external-fqdn-allow-google-cnp
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        name: pod-to-external-fqdn-allow-google-cnp
-    spec:
-      containers:
-      - name: pod-to-external-fqdn-allow-google-cnp-container
-        image: docker.io/byrnedo/alpine-curl:0.1.8
-        command: ["/bin/ash", "-c", "sleep 1000000000"]
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "www.google.com"]
-        readinessProbe:
-          exec:
-            command: ["curl", "-sS", "--fail", "--connect-timeout", "5", "-o", "/dev/null", "www.google.com"]
----
-apiVersion: "cilium.io/v2"
-kind: CiliumNetworkPolicy
-metadata:
-  name: "pod-to-external-fqdn-allow-google-cnp"
-spec:
-  endpointSelector:
-    matchLabels:
-      name: pod-to-external-fqdn-allow-google-cnp
-  egress:
-  - toEndpoints:
-    - matchLabels:
-        "k8s:io.kubernetes.pod.namespace": kube-system
-        "k8s:k8s-app": kube-dns
-    toPorts:
-    - ports:
-      - port: "53"
-        protocol: ANY
-      rules:
-        dns:
-        - matchPattern: "*"
-  - toEndpoints:
-    - matchLabels:
-        k8s:io.kubernetes.pod.namespace: openshift-dns
-        k8s:dns.operator.openshift.io/daemonset-dns: default
-    toPorts:
-    - ports:
-      - port: "5353"
-        protocol: UDP
-      rules:
-        dns:
-        - matchPattern: "*"
-  - toFQDNs:
-    - matchPattern: "*.google.com"
diff --git a/examples/kubernetes/connectivity-check/policy.cue b/examples/kubernetes/connectivity-check/policy.cue
new file mode 100644
index 000000000000..8023c0d5fdbd
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/policy.cue
@@ -0,0 +1,43 @@
+package connectivity_check
+
+// deployment (defaults.cue) implicitly configures the deployments below such
+// that deployments with names matching 'pod-to-', '*-[intra|multi]-node'
+// and '*-headless' will contact the related echo server via the related
+// service and will be scheduled with affinity / anti-affinity to that server.
+_policyResource: {
+	_allowDNS: true
+
+	metadata: labels: component: "policy-check"
+}
+deployment: "pod-to-a-denied-cnp": _policyResource & {
+	_probeExpectFail: true
+}
+egressCNP: "pod-to-a-denied-cnp": _policyResource
+deployment: "pod-to-a-allowed-cnp": _policyResource
+egressCNP: "pod-to-a-allowed-cnp": _policyResource & {
+	_rules: [{
+		toEndpoints: [{
+			matchLabels: {
+				name: "echo-a"
+			}
+		}]
+		toPorts: [{
+			ports: [{
+				port:     "80"
+				protocol: "TCP"
+			}]
+		}]
+	}]
+}
+
+deployment: "pod-to-external-fqdn-allow-google-cnp": _policyResource & {
+	_probeTarget: "www.google.com"
+	_probePath:   ""
+}
+egressCNP: "pod-to-external-fqdn-allow-google-cnp": _policyResource & {
+	// _allowDNS (default true) + 'toFQDNs' rules automatically applies
+	// DNS policy visibility via resources.cue.
+	_rules: [{
+		toFQDNs: [{matchPattern: "*.google.com"}]
+	}]
+}
diff --git a/examples/kubernetes/connectivity-check/proxy.cue b/examples/kubernetes/connectivity-check/proxy.cue
new file mode 100644
index 000000000000..d6424aabea98
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/proxy.cue
@@ -0,0 +1,96 @@
+package connectivity_check
+
+// deployment (defaults.cue) implicitly configures the deployments below such
+// that deployments with names matching 'pod-to-', '*-[intra|multi]-node'
+// and '*-headless' will contact the related echo server via the related
+// service and will be scheduled with affinity / anti-affinity to that server.
+_proxyResource: {
+	_enableMultipleContainers: true
+
+	metadata: labels: component: "proxy-check"
+}
+
+_egressL7Policy: {
+	_allowDNS: true
+
+	_port:   *"80" | string
+	_target: *"" | string
+	_rules: [{
+		if _target != "" {
+			toEndpoints: [{
+				matchLabels: {
+					name: _target
+				}
+			}]
+		}
+		toPorts: [{
+			ports: [{
+				port:     _port
+				protocol: "TCP"
+			}]
+			rules: {
+				http: [{
+					method: "GET"
+					path:   "/public$"
+				}]
+			}
+		}]
+	}]
+}
+
+// Pod-to-a (egress policy, no ingress policy)
+_egressEchoAPolicy: _egressL7Policy & {
+	_target: "echo-a"
+	metadata: labels: component: "proxy-check"
+}
+deployment: "pod-to-a-intra-node-proxy-egress-policy": _proxyResource
+egressCNP: "pod-to-a-intra-node-proxy-egress-policy": _proxyResource & _egressEchoAPolicy
+deployment: "pod-to-a-multi-node-proxy-egress-policy": _proxyResource
+egressCNP: "pod-to-a-multi-node-proxy-egress-policy": _proxyResource & _egressEchoAPolicy
+
+// Pod-to-c (no egress policy, ingress policy via echo-servers.cue)
+deployment: "pod-to-c-intra-node-proxy-ingress-policy": _proxyResource
+deployment: "pod-to-c-multi-node-proxy-ingress-policy": _proxyResource
+
+// Pod-to-c (egress + ingress policy)
+_egressEchoCPolicy: _egressL7Policy & {
+	_target: "echo-c"
+	metadata: labels: component: "proxy-check"
+}
+deployment: "pod-to-c-intra-node-proxy-to-proxy-policy": _proxyResource
+egressCNP: "pod-to-c-intra-node-proxy-to-proxy-policy": _proxyResource & _egressEchoCPolicy
+deployment: "pod-to-c-multi-node-proxy-to-proxy-policy": _proxyResource
+egressCNP: "pod-to-c-multi-node-proxy-to-proxy-policy": _proxyResource & _egressEchoCPolicy
+
+// Pod-to-hostport (egress policy, no ingress policy)
+_hostPortProxyResource: {
+	_enableMultipleContainers: true
+	_probeTarget:              "echo-c-host-headless:40001"
+
+	metadata: labels: {
+		component:  "hostport-check"
+		quarantine: "true"
+	}
+}
+_hostPortProxyPolicy: _egressL7Policy & {
+	_port: "40001"
+	metadata: labels: {
+		component:  "hostport-check"
+		quarantine: "true"
+	}
+}
+// Pod-to-a (egress policy, no ingress policy)
+deployment: "pod-to-a-multi-node-hostport-proxy-egress": _hostPortProxyResource
+egressCNP: "pod-to-a-multi-node-hostport-proxy-egress": _hostPortProxyPolicy
+deployment: "pod-to-a-intra-node-hostport-proxy-egress": _hostPortProxyResource
+egressCNP: "pod-to-a-intra-node-hostport-proxy-egress": _hostPortProxyPolicy
+
+// Pod-to-c (no egress policy, ingress policy via echo-servers.cue)
+deployment: "pod-to-c-multi-node-hostport-proxy-ingress": _hostPortProxyResource
+deployment: "pod-to-c-intra-node-hostport-proxy-ingress": _hostPortProxyResource
+
+// Pod-to-c (egress + ingress policy)
+deployment: "pod-to-c-multi-node-hostport-proxy-to-proxy": _hostPortProxyResource
+egressCNP: "pod-to-c-multi-node-hostport-proxy-to-proxy": _hostPortProxyPolicy
+deployment: "pod-to-c-intra-node-hostport-proxy-to-proxy": _hostPortProxyResource
+egressCNP: "pod-to-c-intra-node-hostport-proxy-to-proxy": _hostPortProxyPolicy
diff --git a/examples/kubernetes/connectivity-check/resources.cue b/examples/kubernetes/connectivity-check/resources.cue
new file mode 100644
index 000000000000..fc2f02865d1c
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/resources.cue
@@ -0,0 +1,289 @@
+package connectivity_check
+
+_probeFailureTimeout: 5 // seconds
+
+_spec: {
+	_name:  string
+	_image: string
+	_command: [...string]
+
+	_affinity:        *"" | string
+	_antiAffinity:    *"" | string
+	_serverPort:      *"" | string
+	_probeTarget:     string
+	_probePath:       *"/public" | string
+	_probeExpectFail: *false | true
+
+	_containers: [...{}]
+	_enableMultipleContainers: *false | true
+
+	_container: {
+		image:           _image
+		imagePullPolicy: "IfNotPresent"
+		if len(_command) > 0 {
+			command: _command
+		}
+		if _serverPort != "" {
+			env: [{
+				name:  "PORT"
+				value: _serverPort
+			}]
+		}
+		ports: [...{
+			_expose: *false | true
+		}]
+	}
+
+	_allowProbe: [ "curl", "-sS", "--fail", "--connect-timeout", "\(_probeFailureTimeout)", "-o", "/dev/null", "\(_probeTarget)\(_probePath)"]
+	_rejectProbe: [ "ash", "-c", "! curl -s --fail --connect-timeout \(_probeFailureTimeout) -o /dev/null \(_probeTarget)/private"]
+	if !_enableMultipleContainers {
+		_c1: _container & {
+			name: "\(_name)-container"
+			if _probeExpectFail {
+				readinessProbe: {
+					timeoutSeconds: _probeFailureTimeout + 2
+					exec: command: _rejectProbe
+				}
+				livenessProbe: {
+					timeoutSeconds: _probeFailureTimeout + 2
+					exec: command: _rejectProbe
+				}
+			}
+			if !_probeExpectFail {
+				readinessProbe: exec: command: _allowProbe
+				livenessProbe: exec: command:  _allowProbe
+			}
+		}
+		_containers: [_c1]
+	}
+	if _enableMultipleContainers {
+		_c1: _container & {
+			name: "\(_name)-allow-container"
+			readinessProbe: exec: command: _allowProbe
+			livenessProbe: exec: command:  _allowProbe
+		}
+		_c2: _container & {
+			name: "\(_name)-reject-container"
+			readinessProbe: {
+				timeoutSeconds: _probeFailureTimeout + 2
+				exec: command: _rejectProbe
+			}
+			livenessProbe: {
+				timeoutSeconds: _probeFailureTimeout + 2
+				exec: command: _rejectProbe
+			}
+		}
+		_containers: [_c1] + [_c2]
+	}
+
+	apiVersion: "apps/v1"
+	kind:       "Deployment"
+	metadata: {
+		name: _name
+		labels: {
+			name:       _name
+			topology:   *"any" | string
+			component:  *"invalid" | string
+			quarantine: *"false" | "true"
+		}
+	}
+	spec: {
+		selector: matchLabels: name: _name
+		template: {
+			metadata: labels: name: _name
+			spec: containers: _containers
+			if _affinity != "" {
+				spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: [{
+					labelSelector: matchExpressions: [{
+						key:      "name"
+						operator: "In"
+						values: [
+							_affinity,
+						]
+					}]
+					topologyKey: "kubernetes.io/hostname"
+				}]
+			}
+			if _antiAffinity != "" {
+				spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: [{
+					labelSelector: matchExpressions: [{
+						key:      "name"
+						operator: "In"
+						values: [
+							_antiAffinity,
+						]
+					}]
+					topologyKey: "kubernetes.io/hostname"
+				}]
+			}
+		}
+	}
+}
+
+deployment: [ID=_]: _spec & {
+	_name:  ID
+	_image: string
+
+	// Expose services
+	_exposeClusterIP: *false | true
+	_exposeNodePort:  *false | true
+	_exposeHeadless:  *false | true
+
+	// Pod ports
+	_serverPort: *"" | string
+	if _serverPort != "" {
+		_probeTarget: "localhost:\(_serverPort)"
+	}
+
+	spec: {
+		replicas: *1 | int
+		template: spec: {
+			hostNetwork: *false | true
+		}
+	}
+}
+
+service: [ID=_]: {
+	_name:     ID
+	_selector: ID | string
+
+	apiVersion: "v1"
+	kind:       "Service"
+	metadata: {
+		name: ID
+		labels: {
+			name:       _name
+			topology:   *"any" | string
+			component:  *"invalid" | string
+			quarantine: *"false" | "true"
+		}
+	}
+	spec: {
+		type: *"ClusterIP" | string
+		selector: name: _selector
+	}
+}
+
+_cnp: {
+	_name: string
+
+	apiVersion: "cilium.io/v2"
+	kind:       "CiliumNetworkPolicy"
+	metadata: {
+		name: _name
+		labels: {
+			name:       _name
+			topology:   *"any" | string
+			component:  *"invalid" | string
+			quarantine: *"false" | "true"
+		}
+	}
+	spec: endpointSelector: matchLabels: name: _name
+}
+
+egressCNP: [ID=_]: _cnp & {
+	_name: ID
+	_rules: [...{}]
+	_allowDNS: *true | false
+
+	// Implicitly open DNS visibility if FQDN rule is specified.
+	_enableDNSVisibility: *false | true
+	for r in _rules if len(r.toFQDNs) > 0 {
+		_enableDNSVisibility: true
+	}
+
+	if !_allowDNS {
+		spec: egress: _rules
+	}
+	if _allowDNS {
+		spec: egress: _rules + [
+			{
+				toEndpoints: [{
+					matchLabels: {
+						"k8s:io.kubernetes.pod.namespace": "kube-system"
+						"k8s:k8s-app":                     "kube-dns"
+					}
+				}]
+				toPorts: [{
+					ports: [{
+						port:     "53"
+						protocol: "ANY"
+					}]
+					if _enableDNSVisibility {
+						rules: dns: [{matchPattern: "*"}]
+					}
+				}]
+			},
+			{
+				toEndpoints: [{
+					matchLabels: {
+						"k8s:io.kubernetes.pod.namespace":             "openshift-dns"
+						"k8s:dns.operator.openshift.io/daemonset-dns": "default"
+					}
+				}]
+				toPorts: [{
+					ports: [{
+						port:     "5353"
+						protocol: "UDP"
+					}]
+					if _enableDNSVisibility {
+						rules: dns: [{matchPattern: "*"}]
+					}
+				}]
+			},
+		]
+	}
+}
+
+ingressCNP: [ID=_]: _cnp & {
+	_name: ID
+	_rules: [...{}]
+
+	spec: ingress: _rules
+}
+
+// Create services for each deployment that have relevant configuration.
+for x in [deployment] for k, v in x {
+	if v._exposeClusterIP || v._exposeNodePort {
+		service: "\(k)": {
+			metadata: v.metadata
+			spec: selector: v.spec.template.metadata.labels
+			if v._exposeNodePort {
+				spec: type: "NodePort"
+			}
+			spec: ports: [
+				for c in v.spec.template.spec.containers
+				for p in c.ports
+				if p._expose {
+					let Port = p.containerPort // Port is an alias
+					port: *Port | int
+					if v._exposeNodePort {
+						nodePort: v._nodePort
+					}
+				},
+			]
+		}
+	}
+	if v._exposeHeadless {
+		service: "\(k)-headless": {
+			_selector: k
+			metadata: name: "\(v.metadata.name)-headless"
+			metadata: labels: {
+				name:       "\(v.metadata.name)-headless"
+				component:  v.metadata.labels.component
+				topology:   *"any" | string
+				quarantine: *"false" | "true"
+			}
+			spec: selector: v.spec.template.metadata.labels
+			spec: clusterIP: "None"
+			spec: ports: [
+				for c in v.spec.template.spec.containers
+				for p in c.ports
+				if p._expose {
+					let Port = p.containerPort // Port is an alias
+					port: *Port | int
+				},
+			]
+		}
+	}
+}
diff --git a/examples/kubernetes/connectivity-check/services.cue b/examples/kubernetes/connectivity-check/services.cue
new file mode 100644
index 000000000000..2e149247637b
--- /dev/null
+++ b/examples/kubernetes/connectivity-check/services.cue
@@ -0,0 +1,38 @@
+package connectivity_check
+
+deployment: [ID=_]: {
+	if ID =~ "^[-_a-zA-Z0-9]*-headless$" {
+		_probeTarget: "echo-b-headless"
+	}
+}
+
+// deployment (defaults.cue) implicitly configures the deployments below such
+// that deployments with names matching 'pod-to-', '*-[intra|multi]-node'
+// and '*-headless' will contact the related echo server via the related
+// service and will be scheduled with affinity / anti-affinity to that server.
+_serviceDeployment: {
+	metadata: labels: component: "services-check"
+}
+
+// Service checks
+deployment: "pod-to-b-multi-node-clusterip": _serviceDeployment
+deployment: "pod-to-b-multi-node-headless":  _serviceDeployment
+//deployment: "pod-to-b-intra-node-clusterip": _serviceDeployment
+//deployment: "pod-to-b-intra-node-headless": _serviceDeployment
+
+_hostnetDeployment: _serviceDeployment & {
+	spec: template: spec: {
+		hostNetwork: true
+		dnsPolicy:   "ClusterFirstWithHostNet"
+	}
+}
+deployment: "host-to-b-multi-node-clusterip": _hostnetDeployment
+deployment: "host-to-b-multi-node-headless": _hostnetDeployment
+
+// Hostport checks
+_hostPortDeployment: {
+	metadata: labels: component: "hostport-check"
+	_probeTarget: "echo-b-host-headless:40000"
+}
+deployment: "pod-to-b-multi-node-hostport": _hostPortDeployment
+deployment: "pod-to-b-intra-node-hostport": _hostPortDeployment
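
For reviewers unfamiliar with CUE, the name-driven behaviour that the comments above attribute to defaults.cue (which is not part of this hunk) can be sketched roughly as follows. This is a hypothetical, simplified illustration, not the actual contents of defaults.cue; the field names (`_probeTarget`, `_affinity`, `_antiAffinity`) match those consumed by resources.cue above, but the patterns and defaults are illustrative only:

```cue
package connectivity_check

// Hypothetical sketch: constraints keyed on the deployment name (ID)
// derive a default probe target and scheduling (anti-)affinity, so an
// individual check only needs to declare its name.
deployment: [ID=_]: {
	// 'pod-to-b-*' checks probe the echo-b service by default.
	if ID =~ "^pod-to-b" {
		_probeTarget: *"echo-b" | string
	}
	// '*-intra-node-*' schedules the client onto the same node as echo-b.
	if ID =~ "intra-node" {
		_affinity: *"echo-b" | string
	}
	// '*-multi-node-*' forces scheduling onto a different node.
	if ID =~ "multi-node" {
		_antiAffinity: *"echo-b" | string
	}
}
```

Because these are all default values (marked with `*`), a concrete field declared elsewhere, such as `_probeTarget: "echo-b-host-headless:40000"` in services.cue, still unifies cleanly and overrides the derived default.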