
Conversation

@constanca-m
Contributor

@constanca-m constanca-m commented Jan 31, 2023

What

Support the use of Kustomize so that we no longer have to maintain the KSM version by hand in line with the latest supported Kubernetes version. See more details in this issue.

Note: This change only affects the testing of the Kubernetes integration. More information on this here.

Walk through

There are four steps to run the tests (check the code here):

  1. Verify context.
  2. Connect to the Elastic stack network.
  3. Install Elastic Agent.
  4. Install custom definitions. These are present in the folder kubernetes/_dev/deploy/k8s.

Of these steps, manifests are deployed in steps 3 and 4. However, step 3 does not use any files from our custom definitions, so it is not relevant to the testing files specific to the Kubernetes integration.

To skip installing custom definitions, an .empty file must be present. Taking the apiserver data stream as an example: since it has the .empty file, nothing from kubernetes/_dev/deploy/k8s will be deployed.
The state_cronjob data stream, on the other hand, does not have that file, so the custom definitions will be installed.

The main function responsible for applying the custom manifests is this one. This function is only used to deploy the custom definitions, not the Elastic Agent, so changing it won't affect the agent deployment. Our specific goal for the Kubernetes integration is to deploy the newest KSM version in an automated way, so we need a kustomization.yaml file to ensure that. This removes the need for maintenance whenever a new KSM version is released. For that, we need to change the -f flag to the -k flag.
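
For illustration, such a kustomization.yaml could reference the upstream kube-state-metrics repository as a remote base next to the local test manifests. The snippet below is only a hypothetical sketch: the remote reference and the local file names are assumptions, and the actual file in the integrations package may differ (for example by pinning a ref).

# kustomization.yaml (hypothetical sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Remote base resolved by `kubectl apply -k`, so the deployed KSM version
  # follows upstream instead of a manifest maintained in this package.
  - github.com/kubernetes/kube-state-metrics
  # Local manifests still needed by the state_* data streams (example names).
  - cronjob.yaml
  - daemonset.yaml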

To determine the correct path for the custom files we use this function, which is, once again, only used to install custom definitions, so step 3 will not be affected. It will be refactored to (a sketch of the resulting check follows the list):

  1. Check if _dev/deploy/k8s directory given as argument exists. (Same as before)
  2. Check if a kustomization.yaml file exists: if it does, deploy with kubectl apply -k; if it does not, deploy the existing .yaml manifests as before.
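
A minimal, self-contained sketch of that check, using hypothetical names and only the Go standard library (the actual elastic-package implementation may differ):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// useKustomize mirrors the two steps above: the definitions directory must exist,
// and the presence of a kustomization.yaml switches the deployment from
// `kubectl apply -f` to `kubectl apply -k`.
func useKustomize(definitionsDir string) (bool, error) {
    if _, err := os.Stat(definitionsDir); err != nil {
        return false, fmt.Errorf("can't find Kubernetes definitions directory %q: %w", definitionsDir, err)
    }
    if _, err := os.Stat(filepath.Join(definitionsDir, "kustomization.yaml")); err == nil {
        return true, nil
    }
    // No kustomization.yaml: fall back to the plain .yaml manifests, as before.
    return false, nil
}

// applyArgs builds the kubectl arguments for either mode (the -o yaml flag matches
// what the debug log below shows).
func applyArgs(definitionsDir string, kustomize bool) []string {
    if kustomize {
        return []string{"apply", "-k", definitionsDir, "-o", "yaml"}
    }
    return []string{"apply", "-f", definitionsDir, "-o", "yaml"}
}

func main() {
    dir := "_dev/deploy/k8s"
    kustomize, err := useKustomize(dir)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("kubectl", applyArgs(dir, kustomize))
}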

Outcome

If we update the custom definitions folder to have the right kustomization.yaml file - i.e. one that deploys kube-state-metrics and all the needed resources - the logs for each state_* data stream would look like this:

2023/01/31 16:58:01 DEBUG Running system tests for data stream
2023/01/31 16:58:01 DEBUG running test with configuration 'default'
2023/01/31 16:58:01 DEBUG setting up service...
2023/01/31 16:58:01 DEBUG ensure that kind context is selected
2023/01/31 16:58:01 DEBUG output command: /usr/local/bin/kubectl config current-context
2023/01/31 16:58:02 DEBUG find "kind-control-plane" container
2023/01/31 16:58:02 DEBUG output command: /usr/local/bin/docker ps --filter name=kind-control-plane --format {{.ID}}
2023/01/31 16:58:02 DEBUG check network connectivity between service container kind-control-plane (ID: 7765398977cd) and the stack network elastic-package-stack_default
2023/01/31 16:58:02 DEBUG output command: /usr/local/bin/docker network inspect elastic-package-stack_default
2023/01/31 16:58:02 DEBUG container kind-control-plane is already attached to the elastic-package-stack_default network
2023/01/31 16:58:02 DEBUG install Elastic Agent in the Kubernetes cluster
2023/01/31 16:58:02 DEBUG GET https://127.0.0.1:5601/api/status
2023/01/31 16:58:02 DEBUG Prepare YAML definition for Elastic Agent running in stack v8.6.0
2023/01/31 16:58:02 DEBUG Apply Kubernetes stdin
2023/01/31 16:58:02 DEBUG run command: /usr/local/bin/kubectl apply -f - -o yaml
2023/01/31 16:58:03 DEBUG Handle "apply" command output
2023/01/31 16:58:03 DEBUG Extract resources from command output
2023/01/31 16:58:03 DEBUG Wait for ready resources
2023/01/31 16:58:03 DEBUG Sync resource info: elastic-package-certs (kind: Secret, namespace: kube-system)
2023/01/31 16:58:03 DEBUG Sync resource info: elastic-agent (kind: DaemonSet, namespace: kube-system)
2023/01/31 16:58:03 DEBUG Sync resource info: elastic-agent (kind: ClusterRoleBinding, namespace: )
2023/01/31 16:58:03 DEBUG Sync resource info: elastic-agent (kind: RoleBinding, namespace: kube-system)
2023/01/31 16:58:03 DEBUG Sync resource info: elastic-agent-kubeadm-config (kind: RoleBinding, namespace: kube-system)
2023/01/31 16:58:03 DEBUG Sync resource info: elastic-agent (kind: ClusterRole, namespace: )
2023/01/31 16:58:03 DEBUG Sync resource info: elastic-agent (kind: Role, namespace: kube-system)
2023/01/31 16:58:03 DEBUG Sync resource info: elastic-agent-kubeadm-config (kind: Role, namespace: kube-system)
2023/01/31 16:58:03 DEBUG Sync resource info: elastic-agent (kind: ServiceAccount, namespace: kube-system)
2023/01/31 16:58:03 DEBUG beginning wait for 9 resources with timeout of 10m0s
2023/01/31 16:58:03 DEBUG install custom Kubernetes definitions (directory: /home/c/go/src/github.com/elastic/integrations/packages/kubernetes/_dev/deploy/k8s)
2023/01/31 16:58:03 DEBUG Apply Kubernetes custom definitions
2023/01/31 16:58:03 DEBUG run command: /usr/local/bin/kubectl apply -k /home/c/go/src/github.com/elastic/integrations/packages/kubernetes/_dev/deploy/k8s -o yaml
2023/01/31 16:58:05 DEBUG Handle "apply" command output
2023/01/31 16:58:05 DEBUG Extract resources from command output
2023/01/31 16:58:05 DEBUG Wait for ready resources
2023/01/31 16:58:05 DEBUG Sync resource info: pods-high (kind: ResourceQuota, namespace: default)
2023/01/31 16:58:05 DEBUG Sync resource info: kube-state-metrics (kind: ServiceAccount, namespace: kube-system)
2023/01/31 16:58:06 DEBUG Sync resource info: kube-state-metrics (kind: ClusterRole, namespace: )
2023/01/31 16:58:06 DEBUG Sync resource info: kube-state-metrics (kind: ClusterRoleBinding, namespace: )
2023/01/31 16:58:06 DEBUG Sync resource info: example-redis-config (kind: ConfigMap, namespace: default)
2023/01/31 16:58:06 DEBUG Sync resource info: kube-state-metrics (kind: Service, namespace: kube-system)
2023/01/31 16:58:06 DEBUG Sync resource info: task-pv-volume (kind: PersistentVolume, namespace: )
2023/01/31 16:58:06 DEBUG Sync resource info: task-pv-claim (kind: PersistentVolumeClaim, namespace: default)
2023/01/31 16:58:06 DEBUG Sync resource info: kube-state-metrics (kind: Deployment, namespace: kube-system)
2023/01/31 16:58:06 DEBUG Sync resource info: web (kind: StatefulSet, namespace: default)
2023/01/31 16:58:06 DEBUG Sync resource info: hello (kind: CronJob, namespace: default)
2023/01/31 16:58:06 DEBUG Sync resource info: fluentd-elasticsearch (kind: DaemonSet, namespace: kube-system)
2023/01/31 16:58:06 DEBUG Sync resource info: hello (kind: Job, namespace: default)
2023/01/31 16:58:06 DEBUG beginning wait for 13 resources with timeout of 10m0s
2023/01/31 16:58:06 DEBUG Deployment is not ready: kube-system/kube-state-metrics. 0 out of 1 expected pods are ready
2023/01/31 16:58:08 DEBUG Deployment is not ready: kube-system/kube-state-metrics. 0 out of 1 expected pods are ready
2023/01/31 16:58:10 DEBUG Deployment is not ready: kube-system/kube-state-metrics. 0 out of 1 expected pods are ready
2023/01/31 16:58:12 DEBUG Deployment is not ready: kube-system/kube-state-metrics. 0 out of 1 expected pods are ready
2023/01/31 16:58:14 DEBUG Deployment is not ready: kube-system/kube-state-metrics. 0 out of 1 expected pods are ready
2023/01/31 16:58:16 DEBUG StatefulSet is ready: default/web. 1 out of 1 expected pods are ready
2023/01/31 16:58:16 DEBUG creating test policy...
2023/01/31 16:58:16 DEBUG POST https://127.0.0.1:5601/api/fleet/agent_policies
2023/01/31 16:58:20 DEBUG adding package data stream to test policy...
2023/01/31 16:58:20 DEBUG POST https://127.0.0.1:5601/api/fleet/package_policies
2023/01/31 16:58:23 DEBUG deleting old data in data stream...
2023/01/31 16:58:23 DEBUG found 0 hits in metrics-kubernetes.state_storageclass-ep data stream
2023/01/31 16:58:23 DEBUG GET https://127.0.0.1:5601/api/fleet/agents
2023/01/31 16:58:23 DEBUG filter agents using criteria: NamePrefix=kind-control-plane
2023/01/31 16:58:23 DEBUG found 1 enrolled agent(s)
2023/01/31 16:58:23 DEBUG GET https://127.0.0.1:5601/api/fleet/agent_policies/134afa00-a180-11ed-be41-0b0968616500
2023/01/31 16:58:23 DEBUG assigning package data stream to agent...
2023/01/31 16:58:23 DEBUG PUT https://127.0.0.1:5601/api/fleet/agents/25fd07a7-522f-4399-af4d-c441b2867560/reassign
2023/01/31 16:58:25 DEBUG GET https://127.0.0.1:5601/api/fleet/agents/25fd07a7-522f-4399-af4d-c441b2867560
2023/01/31 16:58:25 DEBUG Agent data: {"id":"25fd07a7-522f-4399-af4d-c441b2867560","policy_id":"134afa00-a180-11ed-be41-0b0968616500","local_metadata":{"host":{"name":"kind-control-plane"}}}
2023/01/31 16:58:25 DEBUG Wait until the policy (ID: 134afa00-a180-11ed-be41-0b0968616500, revision: 2) is assigned to the agent (ID: 25fd07a7-522f-4399-af4d-c441b2867560)...
2023/01/31 16:58:27 DEBUG GET https://127.0.0.1:5601/api/fleet/agents/25fd07a7-522f-4399-af4d-c441b2867560
2023/01/31 16:58:27 DEBUG Agent data: {"id":"25fd07a7-522f-4399-af4d-c441b2867560","policy_id":"134afa00-a180-11ed-be41-0b0968616500","local_metadata":{"host":{"name":"kind-control-plane"}}}
2023/01/31 16:58:27 DEBUG Wait until the policy (ID: 134afa00-a180-11ed-be41-0b0968616500, revision: 2) is assigned to the agent (ID: 25fd07a7-522f-4399-af4d-c441b2867560)...
2023/01/31 16:58:29 DEBUG GET https://127.0.0.1:5601/api/fleet/agents/25fd07a7-522f-4399-af4d-c441b2867560
2023/01/31 16:58:29 DEBUG Agent data: {"id":"25fd07a7-522f-4399-af4d-c441b2867560","policy_id":"134afa00-a180-11ed-be41-0b0968616500","local_metadata":{"host":{"name":"kind-control-plane"}}}
2023/01/31 16:58:29 DEBUG Wait until the policy (ID: 134afa00-a180-11ed-be41-0b0968616500, revision: 2) is assigned to the agent (ID: 25fd07a7-522f-4399-af4d-c441b2867560)...
2023/01/31 16:58:31 DEBUG GET https://127.0.0.1:5601/api/fleet/agents/25fd07a7-522f-4399-af4d-c441b2867560
2023/01/31 16:58:31 DEBUG Agent data: {"id":"25fd07a7-522f-4399-af4d-c441b2867560","policy_id":"134afa00-a180-11ed-be41-0b0968616500","policy_revision":2,"local_metadata":{"host":{"name":"kind-control-plane"}}}
2023/01/31 16:58:31 DEBUG Policy revision assigned to the agent (ID: 25fd07a7-522f-4399-af4d-c441b2867560)...
2023/01/31 16:58:31 DEBUG checking for expected data in data stream...
2023/01/31 16:58:31 DEBUG found 0 hits in metrics-kubernetes.state_storageclass-ep data stream
2023/01/31 16:58:32 DEBUG found 0 hits in metrics-kubernetes.state_storageclass-ep data stream
2023/01/31 16:58:33 DEBUG found 0 hits in metrics-kubernetes.state_storageclass-ep data stream
2023/01/31 16:58:34 DEBUG found 1 hits in metrics-kubernetes.state_storageclass-ep data stream
2023/01/31 16:58:34 DEBUG reassigning original policy back to agent...
2023/01/31 16:58:34 DEBUG PUT https://127.0.0.1:5601/api/fleet/agents/25fd07a7-522f-4399-af4d-c441b2867560/reassign
2023/01/31 16:58:36 DEBUG GET https://127.0.0.1:5601/api/fleet/agents/25fd07a7-522f-4399-af4d-c441b2867560
2023/01/31 16:58:36 DEBUG Agent data: {"id":"25fd07a7-522f-4399-af4d-c441b2867560","policy_id":"elastic-agent-managed-ep","local_metadata":{"host":{"name":"kind-control-plane"}}}
2023/01/31 16:58:36 DEBUG Wait until the policy (ID: elastic-agent-managed-ep, revision: 3) is assigned to the agent (ID: 25fd07a7-522f-4399-af4d-c441b2867560)...
2023/01/31 16:58:38 DEBUG GET https://127.0.0.1:5601/api/fleet/agents/25fd07a7-522f-4399-af4d-c441b2867560
2023/01/31 16:58:38 DEBUG Agent data: {"id":"25fd07a7-522f-4399-af4d-c441b2867560","policy_id":"elastic-agent-managed-ep","local_metadata":{"host":{"name":"kind-control-plane"}}}
2023/01/31 16:58:38 DEBUG Wait until the policy (ID: elastic-agent-managed-ep, revision: 3) is assigned to the agent (ID: 25fd07a7-522f-4399-af4d-c441b2867560)...
2023/01/31 16:58:40 DEBUG GET https://127.0.0.1:5601/api/fleet/agents/25fd07a7-522f-4399-af4d-c441b2867560
2023/01/31 16:58:40 DEBUG Agent data: {"id":"25fd07a7-522f-4399-af4d-c441b2867560","policy_id":"elastic-agent-managed-ep","local_metadata":{"host":{"name":"kind-control-plane"}}}
2023/01/31 16:58:40 DEBUG Wait until the policy (ID: elastic-agent-managed-ep, revision: 3) is assigned to the agent (ID: 25fd07a7-522f-4399-af4d-c441b2867560)...
2023/01/31 16:58:42 DEBUG GET https://127.0.0.1:5601/api/fleet/agents/25fd07a7-522f-4399-af4d-c441b2867560
2023/01/31 16:58:42 DEBUG Agent data: {"id":"25fd07a7-522f-4399-af4d-c441b2867560","policy_id":"elastic-agent-managed-ep","local_metadata":{"host":{"name":"kind-control-plane"}}}
2023/01/31 16:58:42 DEBUG Wait until the policy (ID: elastic-agent-managed-ep, revision: 3) is assigned to the agent (ID: 25fd07a7-522f-4399-af4d-c441b2867560)...
2023/01/31 16:58:44 DEBUG GET https://127.0.0.1:5601/api/fleet/agents/25fd07a7-522f-4399-af4d-c441b2867560
2023/01/31 16:58:44 DEBUG Agent data: {"id":"25fd07a7-522f-4399-af4d-c441b2867560","policy_id":"elastic-agent-managed-ep","policy_revision":3,"local_metadata":{"host":{"name":"kind-control-plane"}}}
2023/01/31 16:58:44 DEBUG Policy revision assigned to the agent (ID: 25fd07a7-522f-4399-af4d-c441b2867560)...
2023/01/31 16:58:44 DEBUG deleting test policy...
2023/01/31 16:58:44 DEBUG POST https://127.0.0.1:5601/api/fleet/agent_policies/delete
2023/01/31 16:58:47 DEBUG tearing down service...
2023/01/31 16:58:47 DEBUG uninstall custom Kubernetes definitions (directory: /home/c/go/src/github.com/elastic/integrations/packages/kubernetes/_dev/deploy/k8s)
2023/01/31 16:58:47 DEBUG run command: /usr/local/bin/kubectl delete -k /home/c/go/src/github.com/elastic/integrations/packages/kubernetes/_dev/deploy/k8s
2023/01/31 16:58:49 DEBUG deleting data in data stream...

Related issues

@constanca-m constanca-m self-assigned this Jan 31, 2023
@constanca-m constanca-m added the Team:Cloudnative-Monitoring Label for the Cloud Native Monitoring team label Jan 31, 2023
@elasticmachine
Collaborator

elasticmachine commented Jan 31, 2023

💚 Build Succeeded



Build stats

  • Start Time: 2023-02-09T09:58:11.119+0000

  • Duration: 33 min 17 sec

Test stats 🧪

Test Results
Failed 0
Passed 888
Skipped 0
Total 888

🤖 GitHub comments


To re-run your PR in the CI, just comment with:

  • /test : Re-trigger the build.

@elasticmachine
Collaborator

elasticmachine commented Jan 31, 2023

🌐 Coverage report

Name Metrics % (covered/total) Diff
Packages 100.0% (35/35) 💚
Files 65.909% (87/132) 👍
Classes 61.376% (116/189) 👍
Methods 49.146% (403/820) 👍 0.375
Lines 31.767% (3576/11257) 👎 -0.006
Conditionals 100.0% (0/0) 💚

@constanca-m constanca-m requested a review from gsantoro February 6, 2023 10:13
@constanca-m constanca-m requested a review from gsantoro February 6, 2023 13:39

@tetianakravchenko tetianakravchenko left a comment


  1. I still don't think the usage of kustomization.yaml should be enforced; if manually defined manifests stored in _dev/deploy/k8s cover all needs and there is no reason to use kustomization.yaml, this should still be supported.
  2. I think you also need to update https://github.com/elastic/elastic-package/tree/main/test/packages/with-kind/kubernetes in accordance with your changes.

@constanca-m
Contributor Author

  1. I still don't think the usage of kustomization.yaml should be enforced; if manually defined manifests stored in _dev/deploy/k8s cover all needs and there is no reason to use kustomization.yaml, this should still be supported.

But what is the reason for that? It seems unnecessary. This is used just for testing, so once there, it doesn't really matter what happens next. And since, at this moment, the Kubernetes integration is the only one that uses this, it is an easy way to change system testing, as we won't need to spread this change to any other package. @tetianakravchenko

@tetianakravchenko

tetianakravchenko commented Feb 7, 2023

   I still don't think the usage of kustomization.yaml should be enforced; if manually defined manifests stored in _dev/deploy/k8s cover all needs and there is no reason to use kustomization.yaml, this should still be supported.

But what is the reason for that? It seems unnecessary. This is used just for testing, so once there, it doesn't really matter what happens next. And since, at this moment, the Kubernetes integration is the only one that uses this, it is an easy way to change system testing, as we won't need to spread this change to any other package. @tetianakravchenko

Yes, the Kubernetes integration is the only one that uses it, but I don't think that implies this should be enforced for other integrations that are not using those tests yet (like cloud_defend and cloud_security_posture). I see it more as an alternative.

  1. I think you also need to update https://github.com/elastic/elastic-package/tree/main/test/packages/with-kind/kubernetes in accordance with your changes.

@constanca-m also, wdyt about this? Namely, aligning the content of https://github.com/elastic/elastic-package/tree/main/test/packages/with-kind/kubernetes/_dev/deploy/k8s with your changes?

@constanca-m
Contributor Author

Yes, the Kubernetes integration is the only one that uses it, but I don't think that implies this should be enforced for other integrations that are not using those tests yet (like cloud_defend and cloud_security_posture). I see it more as an alternative.

They are using the .empty file so they don't require any changes.

@constanca-m also, wdyt about this? Namely, aligning the content of https://github.com/elastic/elastic-package/tree/main/test/packages/with-kind/kubernetes/_dev/deploy/k8s with your changes?

I hadn't noticed that, but the data stream being used for testing has the .empty file, so we were not testing the custom definitions. I will update it to include the new resources plus one more data stream to check the kustomization.yaml file. @tetianakravchenko

@tetianakravchenko tetianakravchenko requested a review from a team February 7, 2023 09:33
@tetianakravchenko

They are using the .empty file so they don't require any changes.

Ok, let's iterate over it.

I hadn't noticed that, but the data stream being used for testing has the .empty file, so we were not testing the custom definitions. I will update it to include the new resources plus one more data stream to check the kustomization.yaml file. @tetianakravchenko

Hmm, maybe I misunderstood the purpose of this test folder. But I think it is good to have an example in this repo of a package with a kustomization.yaml.

@constanca-m
Contributor Author

Hmm, maybe I misunderstood the purpose of this test folder. But I think it is good to have an example in this repo of a package with a kustomization.yaml.

You got it right, that folder is being used for testing. The tests were not covering the two cases before, but now they are.

@@ -0,0 +1,48 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
Contributor


where is this daemonset.yaml coming from?

Contributor Author


We need these resources to check the state_* data streams, but since I only included one data stream, for the pod, maybe I should delete all the others? I was trying to follow the same logic as what was here before.


where is this daemonset.yaml coming from?

@gsantoro it is the same as the one used in the integrations repo - https://github.com/elastic/integrations/blob/main/packages/kubernetes/_dev/deploy/k8s/daemonset.yaml

Contributor Author


I am torn between deleting some of the resources, because they are useless, and keeping them so they stay the same as in the integrations repo.

Contributor


Let me see if I understand.

You are adding a state_pod data stream here because the pod data stream didn't even need the kube-state-metrics resource that was already defined in those manifests (before the kustomization was introduced). Those resources in deploy/k8s are in theory used to set up the environment for testing, but since we don't check the output against an expected result, they are kind of useless. We only check that the setup didn't have any issues.

I would say keep them in sync with what we are doing in the integrations repo. It's definitely another PR.

Contributor Author


Yes, they were useless before because the only data stream defined had the .empty file. They are no longer useless.

@tetianakravchenko

@constanca-m please double-check with the @elastic/ecosystem team regarding the failing buildkite/elastic-package check before merging

@constanca-m
Contributor Author

@constanca-m please double-check with the @elastic/ecosystem team regarding the failing buildkite/elastic-package check before merging

I don't think the error is related to any change in this PR @tetianakravchenko

@gsantoro
Contributor

gsantoro commented Feb 7, 2023

@tetianakravchenko I checked that error yesterday. The pipeline has not been set up entirely. I would say that we should notify them. At the same time, I expect them to reply that they haven't created one yet and that they enabled that step on all the repos automatically.

@constanca-m
Contributor Author

@tetianakravchenko @gsantoro The error is more specific than the screenshot from yesterday:

Deprecated Environment Variables

The following environment variables have been deprecated, and should no longer be used:

BUILDKITE_PROJECT_SLUG="elastic/elastic-package"
BUILDKITE_PROJECT_PROVIDER="github"

But it seems that this belongs in a different PR. I tried to find these variables in anything specific to this change, but I don't think they exist.

@gsantoro
Contributor

gsantoro commented Feb 7, 2023

@constanca-m actually the pipeline is now there at https://buildkite.com/elastic/elastic-package.

I think you need to rebase from upstream to get the buildkite pipeline definition at

@constanca-m
Contributor Author

@constanca-m actually the pipeline is now there at https://buildkite.com/elastic/elastic-package

That's for every run, but if you check the error for this specific PR, that is the log that appears. @gsantoro

@mlunadia

mlunadia commented Feb 7, 2023

@constanca-m is this linked to this epic?

@constanca-m
Contributor Author

@constanca-m is this linked to this epic?

I wouldn't say so @mlunadia. KSM was already being deployed, this change is just so we don't have to update the version every time we support a new version of K8s.

Member

@jsoriano jsoriano left a comment


This looks like a breaking change, and it looks feasible to keep backwards compatibility by checking if kustomize is used or not. Could we do it?

Please correct me if this will work with current packages.

// if it does not exist, then the .empty file needs to be present
if _, err := os.Stat(filepath.Join(definitionsPath, ".empty")); err != nil {
	return false, errors.Errorf("kustomization.yaml file is missing (path: %s). Add one or create an .empty file"+
		" if no custom definitions are required.", definitionsPath)
}
Member


If we require certain files to be present, it would be better to make these checks in the package-spec.

@constanca-m constanca-m requested a review from jsoriano February 8, 2023 11:02
@@ -0,0 +1,198 @@
- name: cloud
Member


Since it's only for testing, could we reduce the number of fields we have in this file?

Contributor Author


We can, but it was left as it is to stay in sync with the Kubernetes integration. Should I update it?

Member

@jsoriano jsoriano left a comment


Thanks for the updates to avoid the breaking change!

@jsoriano jsoriano dismissed their stale review February 9, 2023 09:53

No blockers in sight.

Member

@jsoriano jsoriano left a comment


👍


Labels

Team:Cloudnative-Monitoring Label for the Cloud Native Monitoring team


Development

Successfully merging this pull request may close these issues.

[K8s testing] Support use of Kustomize

8 participants