
Istio GCP integration test #7813

Merged: 8 commits into kyma-project:main on Jun 7, 2023

Conversation

@barchw (Contributor) commented Jun 5, 2023

Description

Changes proposed in this pull request:

  • Run Istio tests on GCP

Related issue(s)

@kyma-bot added the do-not-merge/work-in-progress label (Indicates that a PR should not merge because it is a work in progress.) on Jun 5, 2023
@kyma-bot (Contributor) commented Jun 5, 2023

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@kyma-bot added the needs-kind, needs-area, and size/M (Denotes a PR that changes 30-99 lines, ignoring generated files.) labels on Jun 5, 2023
@barchw (Contributor, Author) commented Jun 5, 2023

/test all

@kyma-bot added the size/L (Denotes a PR that changes 100-499 lines, ignoring generated files.) and no-changes labels and removed the size/M label on Jun 5, 2023
@kyma-bot (Contributor) commented Jun 5, 2023

Plan Result

No changes. Your infrastructure matches the configuration.

⚠️ Warnings ⚠️

Warning: "default_secret_name" is no longer applicable for Kubernetes v1.24.0 and above

  with module.untrusted_workload_terraform_executor_k8s_service_account.kubernetes_service_account.terraform_executor,
  on ../../../../development/terraform-executor/terraform/modules/k8s-terraform-executor/main.tf line 15, in resource "kubernetes_service_account" "terraform_executor":
  15: resource "kubernetes_service_account" "terraform_executor" {

Starting from version 1.24.0 Kubernetes does not automatically generate a
token for service accounts, in this case, "default_secret_name" will be empty

(and 2 more similar warnings elsewhere)
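The warning itself is informational: since Kubernetes 1.24, creating a ServiceAccount no longer auto-generates a token Secret, which is why default_secret_name resolves to an empty string. Where a long-lived token is still required, it has to be created explicitly. A minimal sketch of such a Secret, with a hypothetical name and not part of this change, looks like this:

  apiVersion: v1
  kind: Secret
  metadata:
    name: terraform-executor-token   # hypothetical name, for illustration only
    namespace: default
    annotations:
      # Binds this token Secret to the existing service account.
      kubernetes.io/service-account.name: terraform-executor
  type: kubernetes.io/service-account-token

Kubernetes then populates this Secret with a token for the referenced service account.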

@barchw (Contributor, Author) commented Jun 5, 2023

/test pull-test-infra-pjtester

@barchw (Contributor, Author) commented Jun 5, 2023

/test all

@barchw (Contributor, Author) commented Jun 5, 2023

/test all

@barchw (Contributor, Author) commented Jun 5, 2023

/test all

@barchw (Contributor, Author) commented Jun 5, 2023

/test all

@barchw (Contributor, Author) commented Jun 6, 2023

/test pull-test-infra-pjtester

@kyma-bot added the size/M label and removed the size/L label on Jun 6, 2023
@barchw added the area/service-mesh (Issues or PRs related to service-mesh) and area/ci (Issues or PRs related to CI related topics) labels on Jun 6, 2023
@barchw added the kind/feature (Categorizes issue or PR as related to a new feature.) label on Jun 6, 2023
@barchw marked this pull request as ready for review on June 6, 2023 08:29
@barchw requested a review from a team as a code owner on June 6, 2023 08:29
@barchw requested reviews from halamix2 and Sawthis on June 6, 2023 08:29
@kyma-bot removed the do-not-merge/work-in-progress label on Jun 6, 2023
@barchw removed the needs-area label on Jun 6, 2023
@barchw mentioned this pull request on Jun 6, 2023
@triffer previously approved these changes on Jun 6, 2023
@kyma-bot added the lgtm (Looks good to me!) label on Jun 6, 2023
@barchw requested reviews from a team and neighbors-dev-bot as code owners on June 7, 2023 07:33
@kyma-bot removed the lgtm label on Jun 7, 2023
@triffer self-requested a review on June 7, 2023 07:35
@kyma-bot added the lgtm label on Jun 7, 2023
@kyma-bot merged commit ea80a55 into kyma-project:main on Jun 7, 2023
6 checks passed
@kyma-bot (Contributor) commented Jun 7, 2023

@barchw: Updated the job-config configmap in namespace default at cluster default using the following files:

  • key istio-integration.yaml using file prow/jobs/istio/istio-integration.yaml
  • key istio-manager.yaml using file prow/jobs/modules/internal/istio-manager.yaml

In response to this:

Description

Changes proposed in this pull request:

  • Run Istio tests on GCP

Related issue(s)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
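For orientation, a presubmit entry in a Prow job-config file such as prow/jobs/istio/istio-integration.yaml generally follows the shape below; the repository key, job name, image, and command are hypothetical placeholders rather than the actual contents of that file:

  presubmits:
    kyma-project/istio:                      # hypothetical repository key
      - name: pull-istio-integration-gcp     # hypothetical job name
        always_run: true
        decorate: true
        spec:
          containers:
            - image: example-test-runner:latest   # placeholder image
              command:
                - make
              args:
                - istio-integration-gcp           # placeholder make target

The config updater then syncs such files into the job-config ConfigMap under the keys reported above.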

@kyma-bot (Contributor) commented Jun 7, 2023

✅ Apply Succeeded

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Details
module.terraform_executor_gcp_service_account.google_service_account.terraform_executor: Refreshing state... [id=projects/sap-kyma-prow/serviceAccounts/terraform-executor@sap-kyma-prow.iam.gserviceaccount.com]
data.google_container_cluster.tekton_k8s_cluster: Reading...
data.google_client_config.gcp: Reading...
data.google_container_cluster.trusted_workload_k8s_cluster: Reading...
data.google_container_cluster.untrusted_workload_k8s_cluster: Reading...
data.google_container_cluster.prow_k8s_cluster: Reading...
data.google_client_config.gcp: Read complete after 0s [id=projects/"sap-kyma-prow"/regions/"europe-west4"/zones/<null>]
data.google_container_cluster.prow_k8s_cluster: Read complete after 0s [id=projects/sap-kyma-prow/locations/europe-west3-a/clusters/prow]
data.google_container_cluster.tekton_k8s_cluster: Read complete after 0s [id=projects/sap-kyma-prow/locations/europe-west4/clusters/tekton]
data.google_container_cluster.untrusted_workload_k8s_cluster: Read complete after 1s [id=projects/sap-kyma-prow/locations/europe-west3/clusters/untrusted-workload-kyma-prow]
data.google_container_cluster.trusted_workload_k8s_cluster: Read complete after 1s [id=projects/sap-kyma-prow/locations/europe-west3/clusters/trusted-workload-kyma-prow]
module.prow_gatekeeper.data.kubectl_path_documents.constraint_templates_path["../../../../opa/gatekeeper/constraint-templates/**.yaml"]: Reading...
module.prow_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../prow/cluster/resources/gatekeeper-constraints/prow/**.yaml"]: Reading...
module.prow_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../prow/cluster/resources/gatekeeper-constraints/prow/**.yaml"]: Read complete after 0s [id=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855]
module.prow_gatekeeper.data.kubectl_path_documents.constraint_templates_path["../../../../opa/gatekeeper/constraint-templates/**.yaml"]: Read complete after 0s [id=810519eebef2dff5b904832343371bff0c094447de15fadcf7fc8048351f2143]
module.tekton_terraform_executor_k8s_service_account.kubernetes_service_account.terraform_executor: Refreshing state... [id=default/terraform-executor]
module.tekton_gatekeeper.data.kubectl_path_documents.constraint_templates_path["../../../../opa/gatekeeper/constraint-templates/**.yaml"]: Reading...
module.untrusted_workload_terraform_executor_k8s_service_account.kubernetes_service_account.terraform_executor: Refreshing state... [id=default/terraform-executor]
module.prow_gatekeeper.data.kubectl_file_documents.gatekeeper: Reading...
module.tekton_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../tekton/deployments/gatekeeper-constraints/**.yaml"]: Reading...
module.trusted_workload_gatekeeper.data.kubectl_path_documents.constraint_templates_path["../../../../opa/gatekeeper/constraint-templates/**.yaml"]: Reading...
module.trusted_workload_gatekeeper.data.kubectl_file_documents.gatekeeper: Reading...
module.trusted_workload_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../prow/cluster/resources/gatekeeper-constraints/workloads/**.yaml"]: Reading...
module.tekton_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../tekton/deployments/gatekeeper-constraints/**.yaml"]: Read complete after 0s [id=52507a6b3cc8faadb69b744f7cb223e9cc5ccbb6e6abe6fdc3bade397df3e14d]
module.tekton_gatekeeper.data.kubectl_file_documents.gatekeeper: Reading...
module.trusted_workload_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../prow/cluster/resources/gatekeeper-constraints/workloads/**.yaml"]: Read complete after 0s [id=2786ed5f1ca0ae506c24d425a83d593c6a2d31b3415662056ff14c09d1808b0c]
module.trusted_workload_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../prow/cluster/resources/gatekeeper-constraints/trusted/**.yaml"]: Reading...
module.untrusted_workload_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../prow/cluster/resources/gatekeeper-constraints/workloads/**.yaml"]: Reading...
module.trusted_workload_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../prow/cluster/resources/gatekeeper-constraints/trusted/**.yaml"]: Read complete after 0s [id=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855]
module.untrusted_workload_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../prow/cluster/resources/gatekeeper-constraints/untrusted/**.yaml"]: Reading...
module.untrusted_workload_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../prow/cluster/resources/gatekeeper-constraints/untrusted/**.yaml"]: Read complete after 0s [id=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855]
module.untrusted_workload_gatekeeper.data.kubectl_path_documents.constraints_path["../../../../prow/cluster/resources/gatekeeper-constraints/workloads/**.yaml"]: Read complete after 0s [id=2786ed5f1ca0ae506c24d425a83d593c6a2d31b3415662056ff14c09d1808b0c]
module.untrusted_workload_gatekeeper.data.kubectl_path_documents.constraint_templates_path["../../../../opa/gatekeeper/constraint-templates/**.yaml"]: Reading...
module.terraform_executor_gcp_service_account.google_project_iam_member.terraform_executor_owner: Refreshing state... [id=sap-kyma-prow/roles/owner/serviceAccount:terraform-executor@sap-kyma-prow.iam.gserviceaccount.com]
module.trusted_workload_gatekeeper.data.kubectl_path_documents.constraint_templates_path["../../../../opa/gatekeeper/constraint-templates/**.yaml"]: Read complete after 0s [id=810519eebef2dff5b904832343371bff0c094447de15fadcf7fc8048351f2143]
module.tekton_gatekeeper.data.kubectl_path_documents.constraint_templates_path["../../../../opa/gatekeeper/constraint-templates/**.yaml"]: Read complete after 0s [id=810519eebef2dff5b904832343371bff0c094447de15fadcf7fc8048351f2143]
module.untrusted_workload_gatekeeper.data.kubectl_file_documents.gatekeeper: Reading...
module.terraform_executor_gcp_service_account.google_service_account_iam_binding.terraform_workload_identity: Refreshing state... [id=projects/sap-kyma-prow/serviceAccounts/terraform-executor@sap-kyma-prow.iam.gserviceaccount.com/roles/iam.workloadIdentityUser]
module.prow_gatekeeper.kubectl_manifest.constraint_templates["apiVersion: templates.gatekeeper.sh/v1\nkind: ConstraintTemplate\nmetadata:\n  name: k8spsphostnetworkingports\n  annotations:\n    metadata.gatekeeper.sh/title: \"Host Networking Ports\"\n    metadata.gatekeeper.sh/version: 1.0.0\n    description: >-\n      Controls usage of host network namespace by pod containers. Specific\n      ports must be specified. Corresponds to the `hostNetwork` and\n      `hostPorts` fields in a PodSecurityPolicy. For more information, see\n      https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces\nspec:\n  crd:\n    spec:\n      names:\n        kind: K8sPSPHostNetworkingPorts\n      validation:\n        # Schema for the `parameters` field\n        openAPIV3Schema:\n          type: object\n          description: >-\n            Controls usage of host network namespace by pod containers. Specific\n            ports must be specified. Corresponds to the `hostNetwork` and\n            `hostPorts` fields in a PodSecurityPolicy. For more information, see\n            https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces\n          properties:\n            exemptImages:\n              description: >-\n                Any container that uses an image that matches an entry in this list will be excluded\n                from enforcement. Prefix-matching can be signified with `*`. For example: `my-image-*`.\n\n                It is recommended that users use the fully-qualified Docker image name (e.g. start with a domain name)\n                in order to avoid unexpectedly exempting images from an untrusted repository.\n              type: array\n              items:\n                type: string\n            hostNetwork:\n              description: \"Determines if the policy allows the use of HostNetwork in the pod spec.\"\n              type: boolean\n            min:\n              description: \"The start of the allowed port range, inclusive.\"\n              type: integer\n            max:\n              description: \"The end of the allowed port range, inclusive.\"\n              type: integer\n  targets:\n    - target: admission.k8s.gatekeeper.sh\n      rego: |\n        package k8spsphostnetworkingports\n\n        import data.lib.exempt_container.is_exempt\n\n        violation[{\"msg\": msg, \"details\": {}}] {\n            input_share_hostnetwork(input.review.object)\n            msg := sprintf(\"The specified hostNetwork and hostPort are not allowed, pod: %v. 
Allowed values: %v\", [input.review.object.metadata.name, input.parameters])\n        }\n\n        input_share_hostnetwork(o) {\n            not input.parameters.hostNetwork\n            o.spec.hostNetwork\n        }\n\n        input_share_hostnetwork(o) {\n            hostPort := input_containers[_].ports[_].hostPort\n            hostPort < input.parameters.min\n        }\n\n        input_share_hostnetwork(o) {\n            hostPort := input_containers[_].ports[_].hostPort\n            hostPort > input.parameters.max\n        }\n\n        input_containers[c] {\n            c := input.review.object.spec.containers[_]\n            not is_exempt(c)\n        }\n\n        input_containers[c] {\n            c := input.review.object.spec.initContainers[_]\n            not is_exempt(c)\n        }\n\n        input_containers[c] {\n            c := input.review.object.spec.ephemeralContainers[_]\n            not is_exempt(c)\n        }\n      libs:\n        - |\n          package lib.exempt_container\n\n          is_exempt(container) {\n              exempt_images := object.get(object.get(input, \"parameters\", {}), \"exemptImages\", [])\n              img := container.image\n              exemption := exempt_images[_]\n              _matches_exemption(img, exemption)\n          }\n\n          _matches_exemption(img, exemption) {\n              not endswith(exemption, \"*\")\n              exemption == img\n          }\n\n          _matches_exemption(img, exemption) {\n              endswith(exemption, \"*\")\n              prefix := trim_suffix(exemption, \"*\")\n              startswith(img, prefix)\n          }"]: Refreshing state... [id=/apis/templates.gatekeeper.sh/v1/constrainttemplates/k8spsphostnetworkingports]
module.untrusted_workload_gatekeeper.data.kubectl_path_documents.constraint_templates_path["../../../../opa/gatekeeper/constraint-templates/**.yaml"]: Read complete after 0s [id=810519eebef2dff5b904832343371bff0c094447de15fadcf7fc8048351f2143]
module.prow_gatekeeper.data.kubectl_file_documents.gatekeeper: Read complete after 0s [id=dc39d54a3fa7ea8c38399850c255006d127216f312696358a6b52c8fa4afa801]
module.prow_gatekeeper.kubectl_manifest.constraint_templates["apiVersion: templates.gatekeeper.sh/v1\nkind: ConstraintTemplate\nmetadata:\n  name: k8spspallowprivilegeescalationcontainer\n  annotations:\n    metadata.gatekeeper.sh/title: \"Allow Privilege Escalation in Container\"\n    metadata.gatekeeper.sh/version: 1.0.0\n    description: >-\n      Controls restricting escalation to root privileges. Corresponds to the\n      `allowPrivilegeEscalation` field in a PodSecurityPolicy. For more\n      information, see\n      https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privilege-escalation\nspec:\n  crd:\n    spec:\n      names:\n        kind: K8sPSPAllowPrivilegeEscalationContainer\n      validation:\n        openAPIV3Schema:\n          type: object\n          description: >-\n            Controls restricting escalation to root privileges. Corresponds to the\n            `allowPrivilegeEscalation` field in a PodSecurityPolicy. For more\n            information, see\n            https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privilege-escalation\n          properties:\n            exemptImages:\n              description: >-\n                Any container that uses an image that matches an entry in this list will be excluded\n                from enforcement. Prefix-matching can be signified with `*`. For example: `my-image-*`.\n\n                It is recommended that users use the fully-qualified Docker image name (e.g. start with a domain name)\n                in order to avoid unexpectedly exempting images from an untrusted repository.\n              type: array\n              items:\n                type: string\n  targets:\n    - target: admission.k8s.gatekeeper.sh\n      rego: |\n        package k8spspallowprivilegeescalationcontainer\n\n        import data.lib.exempt_container.is_exempt\n\n        violation[{\"msg\": msg, \"details\": {}}] {\n            c := input_containers[_]\n            not is_exempt(c)\n            input_allow_privilege_escalation(c)\n            msg := sprintf(\"Privilege escalation container is not allowed: %v\", [c.name])\n        }\n\n        input_allow_privilege_escalation(c) {\n            not has_field(c, \"securityContext\")\n        }\n        input_allow_privilege_escalation(c) {\n            not c.securityContext.allowPrivilegeEscalation == false\n        }\n        input_containers[c] {\n            c := input.review.object.spec.containers[_]\n        }\n        input_containers[c] {\n            c := input.review.object.spec.initContainers[_]\n        }\n        input_containers[c] {\n            c := input.review.object.spec.ephemeralContainers[_]\n        }\n        # has_field returns whether an object has a field\n        has_field(object, field) = true {\n            object[field]\n        }\n      libs:\n        - |\n          package lib.exempt_container\n\n          is_exempt(container) {\n              exempt_images := object.get(object.get(input, \"parameters\", {}), \"exemptImages\", [])\n              img := container.image\n              exemption := exempt_images[_]\n              _matches_exemption(img, exemption)\n          }\n\n          _matches_exemption(img, exemption) {\n              not endswith(exemption, \"*\")\n              exemption == img\n          }\n\n          _matches_exemption(img, exemption) {\n              endswith(exemption, \"*\")\n              prefix := trim_suffix(exemption, \"*\")\n              startswith(img, prefix)\n          }"]: Refreshing state... 
[id=/apis/templates.gatekeeper.sh/v1/constrainttemplates/k8spspallowprivilegeescalationcontainer]
module.prow_gatekeeper.kubectl_manifest.constraint_templates["apiVersion: templates.gatekeeper.sh/v1\nkind: ConstraintTemplate\nmetadata:\n  name: secrettrustedusage\n  annotations:\n    metadata.gatekeeper.sh/title: \"Secret Trusted Usage\"\n    metadata.gatekeeper.sh/version: 1.0.0\n    description: >-\n      Controls any Pod ability to use restricted secret.\nspec:\n  crd:\n    spec:\n      names:\n        kind: SecretTrustedUsage\n      validation:\n        openAPIV3Schema:\n          type: object\n          description: >-\n            Controls any Pod ability to use use restricted secret.\n          properties:\n            labels:\n              type: array\n              description: >-\n                A list of labels and values the object must specify.\n              items:\n                type: object\n                properties:\n                  key:\n                    type: string\n                    description: >-\n                      The required label.\n                  allowedRegex:\n                    type: string\n                    description: >-\n                      Regular expression the label's value must match. The value must contain one exact match for\n                      the regular expression.\n            restrictedSecrets:\n              type: array\n              description: >-\n                A list of restricted secrets.\n              items:\n                type: string\n                description: >-\n                  The restricted secret name.\n            trustedServiceAccounts:\n              type: array\n              description: >-\n                A list of trusted service accounts. If a Pod match criteria from trustedServiceAccount, it is allowed to use restricted secret.\n              items:\n                type: string\n                description: >-\n                  The trusted service account name.\n            trustedImages:\n              type: array\n              description: >-\n                A list of trusted images. 
If a Pod match criteria from trustedImage, it is allowed to use restricted secret.\n              items:\n                type: object\n                description: >-\n                  The trusted image criteria.\n                properties:\n                  image:\n                    type: string\n                    description: >-\n                      The container trusted image name.\n                  command:\n                    type: array\n                    description: >-\n                      The list of container trusted commands to run.\n                    items:\n                      type: string\n                      description: >-\n                        The trusted command to run.\n                  args:\n                    type: array\n                    description: >-\n                      The trusted arguments to pass to the command.\n                    items:\n                      type: string\n                      description: >-\n                        The trusted argument to pass to the command.\n  targets:\n    - target: admission.k8s.gatekeeper.sh\n      rego: |\n        package kubernetes.secrettrustedusage\n        \n        import future.keywords.contains\n        import future.keywords.if\n        import future.keywords.in\n  \n        # Report violation if the container is using a restricted secret and does not match trusted usage criteria.\n        # Violation is check if secret is used in env.envFrom container spec.\n        violation[{\"msg\": msg}] {\n          some k\n          # Iterate over all containers in the pod.\n          container := input_containers[_]\n        \n          # Check if the container is using a restricted secret.\n          container.envFrom[_].secretRef.name == input.parameters.restrictedSecrets[k]\n        \n          # Check if container is not matching trusted usage criteria.\n          not trustedUsages(container)\n        \n          # Format violation message.\n          msg := sprintf(\"Container %v is not allowed to use restricted secret: %v.\", [container.name, input.parameters.restrictedSecrets[k]])\n        }\n  \n        # Report violation if the container is using a restricted secret and does not match trusted usage criteria.\n        # Violation is check if secret is used in env.valueFrom container spec.\n        violation[{\"msg\": msg}] {\n          some k\n          # Iterate over all containers in the pod.\n          container := input_containers[_]\n        \n          # Check if the container is using a restricted secret.\n          container.env[_].valueFrom.secretKeyRef.name == input.parameters.restrictedSecrets[k]\n        \n          # Check if container is not matching trusted usage criteria.\n          not trustedUsages(container)\n        \n          # Format violation message.\n          msg := sprintf(\"Container %v is not allowed to use restricted secret: %v.\", [container.name, input.parameters.restrictedSecrets[k]])\n        }\n  \n        # Report violation if the container is using a restricted secret and does not match trusted usage criteria.\n        # Violation is check if secret is mount as volume.\n        violation[{\"msg\": msg}] {\n          some k, j\n          # Iterate over all containers in the pod.\n          container := input_containers[_]\n        \n          # Check if the container is using a restricted secret.\n          input.review.object.spec.volumes[j].secret.secretN

# ...
# ... The maximum length of GitHub Comment is 65536, so the content is omitted by tfcmt.
# ...

/v1/namespaces/gatekeeper-system/rolebindings/gatekeeper-manager-rolebinding]
module.untrusted_workload_gatekeeper.kubectl_manifest.gatekeeper["/api/v1/namespaces/gatekeeper-system/secrets/gatekeeper-webhook-server-cert"]: Refreshing state... [id=/api/v1/namespaces/gatekeeper-system/secrets/gatekeeper-webhook-server-cert]
module.untrusted_workload_gatekeeper.kubectl_manifest.gatekeeper["/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gatekeeper-manager-rolebinding"]: Refreshing state... [id=/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gatekeeper-manager-rolebinding]
module.untrusted_workload_gatekeeper.kubectl_manifest.gatekeeper["/apis/policy/v1/namespaces/gatekeeper-system/poddisruptionbudgets/gatekeeper-controller-manager"]: Refreshing state... [id=/apis/policy/v1/namespaces/gatekeeper-system/poddisruptionbudgets/gatekeeper-controller-manager]
module.untrusted_workload_gatekeeper.kubectl_manifest.gatekeeper["/apis/apiextensions.k8s.io/v1/customresourcedefinitions/configs.config.gatekeeper.sh"]: Refreshing state... [id=/apis/apiextensions.k8s.io/v1/customresourcedefinitions/configs.config.gatekeeper.sh]
module.tekton_gatekeeper.kubectl_manifest.constraints["# Constraint to allow only image-builder tool trusted usage on tekton cluster run as image-builder service account identity.\napiVersion: constraints.gatekeeper.sh/v1beta1\nkind: ServiceAccountTrustedUsage\nmetadata:\n  name: tekton-image-builder-sa-trusted-usage\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n  parameters:\n    restrictedServiceAccounts:\n      - image-builder\n    trustedImages:\n      - image: \"eu.gcr.io/sap-kyma-neighbors-dev/image-builder:*\"\n        command:\n          - /tekton/bin/entrypoint\n        args:\n          - -wait_file\n          - /tekton/downward/ready\n          - -wait_file_content\n          - -post_file\n          - /tekton/run/0/out\n          - -termination_path\n          - /tekton/termination\n          - -step_metadata_dir\n          - /tekton/run/0/status\n          - -entrypoint\n          - /image-builder\n          - --\n          - '--name=*'\n          - '--config=*'\n          - '--context=*'\n          - '--dockerfile=*'\n          - --log-dir=/\n      - image: \"gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint:*\"\n        command:\n          - /ko-app/entrypoint\n          - init\n          - /ko-app/entrypoint\n          - /tekton/bin/entrypoint\n          - step-build-image"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/serviceaccounttrustedusages/tekton-image-builder-sa-trusted-usage]
module.tekton_gatekeeper.kubectl_manifest.constraints["# Constraint to allow only trusted image-builder usages on tekton cluster to use signify-prod-secret secret.\napiVersion: constraints.gatekeeper.sh/v1beta1\nkind: SecretTrustedUsage\nmetadata:\n  name: tekton-signify-prod-secret-trusted-usage\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n  parameters:\n    labels:\n      - key: \"prow.k8s.io/type\"\n        allowedRegex: \"^postsubmit$\"\n    restrictedSecrets:\n      - \"signify-prod-secret\"\n    trustedImages:\n      - image: \"eu.gcr.io/sap-kyma-neighbors-dev/image-builder:*\"\n        command: [\"/tekton/bin/entrypoint\"]\n        args:\n          - '-wait_file'\n          - /tekton/downward/ready\n          - '-wait_file_content'\n          - '-post_file'\n          - /tekton/run/0/out\n          - '-termination_path'\n          - /tekton/termination\n          - '-step_metadata_dir'\n          - /tekton/run/0/status\n          - '-entrypoint'\n          - /image-builder\n          - '--'\n          - '--name=*'\n          - '--config=*'\n          - '--context=*'\n          - '--dockerfile=*'\n          - '--log-dir=/'"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/secrettrustedusages/tekton-signify-prod-secret-trusted-usage]
module.trusted_workload_terraform_executor_k8s_service_account.kubernetes_secret.terraform_executor: Refreshing state... [id=default/terraform-executor]
module.trusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPAllowedUsers\nmetadata:\n  name: psp-pods-allowed-user-ranges\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  # we're not using any users option in prowjobs"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspalloweduserses/psp-pods-allowed-user-ranges]
module.trusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPAllowPrivilegeEscalationContainer\nmetadata:\n  name: psp-allow-privilege-escalation-container\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\""]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspallowprivilegeescalationcontainers/psp-allow-privilege-escalation-container]
module.tekton_gatekeeper.kubectl_manifest.constraints["# Constraint to allow only trusted image-builder usages on tekton cluster to use signify-dev-secret secret.\napiVersion: constraints.gatekeeper.sh/v1beta1\nkind: SecretTrustedUsage\nmetadata:\n  name: tekton-signify-dev-secret-trusted-usage\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n  parameters:\n    labels:\n      - key: \"prow.k8s.io/type\"\n        allowedRegex: \"^presubmit$\"\n    restrictedSecrets:\n      - \"signify-dev-secret\"\n    trustedImages:\n      - image: \"eu.gcr.io/sap-kyma-neighbors-dev/image-builder:*\"\n        command: [\"/tekton/bin/entrypoint\"]\n        args:\n          - -wait_file\n          - /tekton/downward/ready\n          - -wait_file_content\n          - -post_file\n          - /tekton/run/0/out\n          - -termination_path\n          - /tekton/termination\n          - -step_metadata_dir\n          - /tekton/run/0/status\n          - -entrypoint\n          - /image-builder\n          - --\n          - '--name=*'\n          - '--config=*'\n          - '--context=*'\n          - '--dockerfile=*'\n          - --log-dir=/"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/secrettrustedusages/tekton-signify-dev-secret-trusted-usage]
module.trusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPHostFilesystem\nmetadata:\n  name: psp-host-filesystem\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  parameters:\n    allowedHostPaths:\n      - pathPrefix: \"/lib/modules\"\n      - pathPrefix: \"/sys/fs/cgroup\""]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spsphostfilesystems/psp-host-filesystem]
module.trusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPAppArmor\nmetadata:\n  name: psp-apparmor\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  parameters:\n    allowedProfiles:\n      - runtime/default\n    exemptImages:\n      - eu.gcr.io/sap-kyma-neighbors-dev/image-builder:*"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspapparmors/psp-apparmor]
module.trusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPCapabilities\nmetadata:\n  name: capabilities\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  # we're not using capabilities"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspcapabilitieses/capabilities]
module.trusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPHostNetworkingPorts\nmetadata:\n  name: psp-host-network-ports\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  # we're not using any hostNetwork option in prowjobs"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spsphostnetworkingportses/psp-host-network-ports]
module.trusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPSeccomp\nmetadata:\n  name: psp-seccomp\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  parameters:\n    allowedProfiles:\n      - runtime/default\n      - docker/default\n    exemptImages:\n      - eu.gcr.io/sap-kyma-neighbors-dev/image-builder:*"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspseccomps/psp-seccomp]
module.trusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPHostNamespace\nmetadata:\n  name: psp-host-namespace\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\""]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spsphostnamespaces/psp-host-namespace]
module.trusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPSELinuxV2\nmetadata:\n  name: psp-selinux-v2\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  # we're not using any SELinux option in prowjobs"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspselinuxv2s/psp-selinux-v2]
module.untrusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPAllowedUsers\nmetadata:\n  name: psp-pods-allowed-user-ranges\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  # we're not using any users option in prowjobs"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspalloweduserses/psp-pods-allowed-user-ranges]
module.trusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPPrivilegedContainer\nmetadata:\n  name: psp-privileged-container\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    nanespaces:\n      - \"default\"\n  parameters:\n    exemptImages:\n      - \"aquasec/trivy:*\"\n      - \"eu.gcr.io/kyma-project/prow/cleaner:*\"\n      - \"eu.gcr.io/kyma-project/test-infra/bootstrap:*\"\n      - \"eu.gcr.io/kyma-project/test-infra/buildpack-golang:*\"\n      - \"eu.gcr.io/kyma-project/test-infra/gardener-rotate:*\"\n      - \"eu.gcr.io/kyma-project/test-infra/golangci-lint:*\"\n      - \"eu.gcr.io/kyma-project/test-infra/kyma-integration:*\"\n      - \"europe-docker.pkg.dev/kyma-project/prod/test-infra/prow-tools:*\"\n      - \"gcr.io/k8s-prow/generic-autobumper:*\""]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspprivilegedcontainers/psp-privileged-container]
module.untrusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPCapabilities\nmetadata:\n  name: capabilities\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  # we're not using capabilities"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspcapabilitieses/capabilities]
module.untrusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPHostFilesystem\nmetadata:\n  name: psp-host-filesystem\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  parameters:\n    allowedHostPaths:\n      - pathPrefix: \"/lib/modules\"\n      - pathPrefix: \"/sys/fs/cgroup\""]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spsphostfilesystems/psp-host-filesystem]
module.untrusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPAllowPrivilegeEscalationContainer\nmetadata:\n  name: psp-allow-privilege-escalation-container\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\""]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspallowprivilegeescalationcontainers/psp-allow-privilege-escalation-container]
module.untrusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPPrivilegedContainer\nmetadata:\n  name: psp-privileged-container\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    nanespaces:\n      - \"default\"\n  parameters:\n    exemptImages:\n      - \"aquasec/trivy:*\"\n      - \"eu.gcr.io/kyma-project/prow/cleaner:*\"\n      - \"eu.gcr.io/kyma-project/test-infra/bootstrap:*\"\n      - \"eu.gcr.io/kyma-project/test-infra/buildpack-golang:*\"\n      - \"eu.gcr.io/kyma-project/test-infra/gardener-rotate:*\"\n      - \"eu.gcr.io/kyma-project/test-infra/golangci-lint:*\"\n      - \"eu.gcr.io/kyma-project/test-infra/kyma-integration:*\"\n      - \"europe-docker.pkg.dev/kyma-project/prod/test-infra/prow-tools:*\"\n      - \"gcr.io/k8s-prow/generic-autobumper:*\""]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspprivilegedcontainers/psp-privileged-container]
module.untrusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPHostNetworkingPorts\nmetadata:\n  name: psp-host-network-ports\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  # we're not using any hostNetwork option in prowjobs"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spsphostnetworkingportses/psp-host-network-ports]
module.untrusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPHostNamespace\nmetadata:\n  name: psp-host-namespace\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\""]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spsphostnamespaces/psp-host-namespace]
module.untrusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPAppArmor\nmetadata:\n  name: psp-apparmor\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  parameters:\n    allowedProfiles:\n      - runtime/default\n    exemptImages:\n      - eu.gcr.io/sap-kyma-neighbors-dev/image-builder:*"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspapparmors/psp-apparmor]
module.untrusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPSELinuxV2\nmetadata:\n  name: psp-selinux-v2\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  # we're not using any SELinux option in prowjobs"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspselinuxv2s/psp-selinux-v2]
module.untrusted_workload_gatekeeper.kubectl_manifest.constraints["apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sPSPSeccomp\nmetadata:\n  name: psp-seccomp\nspec:\n  enforcementAction: warn\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Pod\"]\n    namespaces:\n      - \"default\"\n  parameters:\n    allowedProfiles:\n      - runtime/default\n      - docker/default\n    exemptImages:\n      - eu.gcr.io/sap-kyma-neighbors-dev/image-builder:*"]: Refreshing state... [id=/apis/constraints.gatekeeper.sh/v1beta1/k8spspseccomps/psp-seccomp]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.

Warning: "default_secret_name" is no longer applicable for Kubernetes v1.24.0 and above

  with module.trusted_workload_terraform_executor_k8s_service_account.kubernetes_service_account.terraform_executor,
  on ../../../../development/terraform-executor/terraform/modules/k8s-terraform-executor/main.tf line 15, in resource "kubernetes_service_account" "terraform_executor":
  15: resource "kubernetes_service_account" "terraform_executor" {

Starting from version 1.24.0 Kubernetes does not automatically generate a
token for service accounts, in this case, "default_secret_name" will be empty

(and 2 more similar warnings elsewhere)

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

tekton_gatekeeper = <sensitive>
tekton_terraform_executor_k8s_service_account = {
  "terraform_executor_k8s_service_account" = {
    "automount_service_account_token" = true
    "default_secret_name" = ""
    "id" = "default/terraform-executor"
    "image_pull_secret" = toset([])
    "metadata" = tolist([
      {
        "annotations" = tomap({
          "iam.gke.io/gcp-service-account" = "terraform-executor@sap-kyma-prow.iam.gserviceaccount.com"
        })
        "generate_name" = ""
        "generation" = 0
        "labels" = tomap({})
        "name" = "terraform-executor"
        "namespace" = "default"
        "resource_version" = "128307926"
        "uid" = "51d95a38-fc8f-434f-bcb4-fa84ce96db29"
      },
    ])
    "secret" = toset([])
    "timeouts" = null /* object */
  }
}
terraform_executor_gcp_service_account = <sensitive>
trusted_workload_gatekeeper = <sensitive>
trusted_workload_terraform_executor_k8s_service_account = {
  "terraform_executor_k8s_service_account" = {
    "automount_service_account_token" = true
    "default_secret_name" = ""
    "id" = "default/terraform-executor"
    "image_pull_secret" = toset([])
    "metadata" = tolist([
      {
        "annotations" = tomap({
          "iam.gke.io/gcp-service-account" = "terraform-executor@sap-kyma-prow.iam.gserviceaccount.com"
        })
        "generate_name" = ""
        "generation" = 0
        "labels" = tomap({})
        "name" = "terraform-executor"
        "namespace" = "default"
        "resource_version" = "604056833"
        "uid" = "802f1b39-dbf0-4429-9612-cbc74ca7bccf"
      },
    ])
    "secret" = toset([])
    "timeouts" = null /* object */
  }
}
untrusted_workload_gatekeeper = <sensitive>
untrusted_workload_terraform_executor_k8s_service_account = {
  "terraform_executor_k8s_service_account" = {
    "automount_service_account_token" = true
    "default_secret_name" = ""
    "id" = "default/terraform-executor"
    "image_pull_secret" = toset([])
    "metadata" = tolist([
      {
        "annotations" = tomap({
          "iam.gke.io/gcp-service-account" = "terraform-executor@sap-kyma-prow.iam.gserviceaccount.com"
        })
        "generate_name" = ""
        "generation" = 0
        "labels" = tomap({})
        "name" = "terraform-executor"
        "namespace" = "default"
        "resource_version" = "599762309"
        "uid" = "e14bae6f-2239-4e1d-8b99-708e3c63c19c"
      },
    ])
    "secret" = toset([])
    "timeouts" = null /* object */
  }
}
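The iam.gke.io/gcp-service-account annotation visible in each output is the Kubernetes half of GKE Workload Identity; the GCP half is the roles/iam.workloadIdentityUser binding on the terraform-executor Google service account that appears earlier in the refresh log. The Terraform modules manage this resource, but expressed as a plain manifest the Kubernetes side is roughly the following sketch, using the values shown in the outputs above:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: terraform-executor
    namespace: default
    annotations:
      # Lets pods running as this Kubernetes service account impersonate the GCP service account.
      iam.gke.io/gcp-service-account: terraform-executor@sap-kyma-prow.iam.gserviceaccount.com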

Labels

  • area/ci: Issues or PRs related to CI related topics
  • area/service-mesh: Issues or PRs related to service-mesh
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lgtm: Looks good to me!
  • no-changes
  • size/M: Denotes a PR that changes 30-99 lines, ignoring generated files.