[K8s testing] Support use of Kustomize #1125
internal/testrunner/runners/system/servicedeployer/kubernetes.go (outdated review comments, resolved)
- I still don't think that usage of `kustomization.yaml` should be enforced: if manually defined manifests stored in `_dev/deploy/k8s` cover all needs and there is no reason to use `kustomization.yaml`, this should be supported.
- I think you also need to update https://github.com/elastic/elastic-package/tree/main/test/packages/with-kind/kubernetes in accordance with your changes.

But what is the reason for that? It seems unnecessary. This is used just for testing, so once there, it doesn't really matter what happens next. And since, at this moment, the Kubernetes integration is the only one that uses this, it is an easy way to change system testing, as we won't need to spread this change to any other package. @tetianakravchenko
Yes, the Kubernetes integration is the only one that uses it, but I don't think that implies this should be enforced for other integrations that are not using those tests yet (like cloud_defend and cloud_security_posture). I see it more as an alternative.

@constanca-m also, wdyt about this? Namely, should the content of https://github.com/elastic/elastic-package/tree/main/test/packages/with-kind/kubernetes/_dev/deploy/k8s be aligned with your changes?
They are using the `.empty` file, so they don't require any changes.

I hadn't noticed that, but the data stream being used for testing has the `.empty` file, so we were not testing the custom definitions. I will update it to include the new resources, plus one more data stream to check the `kustomization.yaml` file. @tetianakravchenko
Ok, let's iterate over it.

Hmm, maybe I misunderstood the purpose of this test folder. But I think it is good to have an example in this repo of a package with

You got it right, that folder is being used for testing. The tests were not covering the two cases before, but now they are.
```
@@ -0,0 +1,48 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
```
where is this daemonset.yaml coming from?
We need these resources to check the `state_*` data streams, but since I only included one for the pod, maybe I should delete all the others? I was trying to follow the same logic as the one that was here before.
> where is this daemonset.yaml coming from?

@gsantoro it is the same as the one used in the integrations repo: https://github.com/elastic/integrations/blob/main/packages/kubernetes/_dev/deploy/k8s/daemonset.yaml
I am torn between deleting some of the resources because they are useless, and keeping them so they stay the same as in the integrations repo.
Let me see if I understand.

You are adding a `state_pod` data stream here, since `pod` didn't even need the kube-state-metrics resource that was already defined in those manifests (before the kustomization was introduced). The resources in `deploy/k8s` are in theory used to set up the environment for testing, but since we don't check the output against an expected result, they are kind of useless; we only check that the setup didn't have any issues.

I would say keep them in sync with what we are doing in the integrations repo. It's definitely another PR.
Yes, they were useless before because the only data stream defined had the .empty file. They are no longer useless.
@constanca-m please double check with the @elastic/ecosystem team regarding the failing
I don't think the error is related to any change of this PR. @tetianakravchenko
@tetianakravchenko I checked that error yesterday. The pipeline has not been set up entirely. I would say that we should notify them. At the same time, I expect them to reply that they haven't created one yet and that they have enabled that step on all the repos automatically.
@tetianakravchenko @gsantoro The error is more specific than the screenshot from yesterday. But it seems that this should be a different PR. I tried to find these variables specific to this change, but I don't think they exist.
@constanca-m actually, the pipeline is now there at https://buildkite.com/elastic/elastic-package. I think you need to rebase from upstream to get the Buildkite pipeline definition at
That happens for every run, but if you check the error for the specific cause of this PR, that is the log that appears. @gsantoro
@constanca-m is this linked to this epic?
I wouldn't say so, @mlunadia. KSM was already being deployed; this change is just so we don't have to update the version every time we support a new version of K8s.
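For context, a `kustomization.yaml` along these lines (a sketch of the idea, not necessarily the exact file in this PR) lets kubectl resolve kube-state-metrics straight from the upstream repo instead of pinning a copied manifest at a fixed version:

```yaml
# Hypothetical example: pull kube-state-metrics from the upstream repo and
# layer the package's own test manifests on top of it.
resources:
  - https://github.com/kubernetes/kube-state-metrics
  - daemonset.yaml
```

Kustomize accepts both remote URLs and local file paths in `resources`, which is what makes the "always newest KSM" behavior possible without editing the package on every Kubernetes release.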
This looks like a breaking change, and it looks feasible to keep backwards compatibility by checking if kustomize is used or not. Could we do it?
Please correct me if this will work with current packages.
test/packages/with-kind/kubernetes/_dev/deploy/k8s/kustomization.yaml (outdated review comments, resolved)
```go
// if it does not exist, then the .empty file needs to be present
if _, err := os.Stat(filepath.Join(definitionsPath, ".empty")); err != nil {
	return false, errors.Errorf("kustomization.yaml file is missing (path: %s). Add one or create an .empty file"+
		" if no custom definitions are required.", definitionsPath)
}
```
If we require certain files to be present, it would be better to make these checks on the package-spec.
```
@@ -0,0 +1,198 @@
- name: cloud
```
Since it's only for testing, could we reduce the number of fields we have in this file?
We can, but it was left as is to stay in sync with the Kubernetes integration. Should I update it?
Thanks for the updates to avoid the breaking change!
👍
What
Support use of Kustomize to remove the need to maintain the KSM version according to the latest supported Kubernetes version. See more details in this issue.

Note: this change only affects the testing of the Kubernetes integration. More information on this here.
Walk through
There are four steps to run the tests (check the code here); the custom definitions live in `kubernetes/_dev/deploy/k8s`. Of these steps, manifests are deployed through steps 3 and 4. However, step 3 does not use files from our custom definitions and, therefore, is not important for the specific testing files of the Kubernetes integration.
To not install custom definitions, we need the presence of an `.empty` file. Taking the `apiserver` data stream as an example: since we have the `.empty` file, nothing from `kubernetes/_dev/deploy/k8s` will be deployed. On the other hand, the `state_cronjob` data stream does not have that file, so custom definitions will be installed.

The main function responsible for applying the custom manifests is this one. This function is only used to deploy the custom definitions, not the Elastic Agent, so changing it won't affect anything related to the agent. Our specific goal for the Kubernetes integration is to deploy the newest KSM version in an automated way, so we need to use a `kustomization.yaml` file to ensure that. This removes the maintenance burden of tracking future KSM versions. For that, we need to change our `-f` flag to the `-k` flag.

To determine the correct path for the custom files, we use this function, which, once again, is only used to install custom definitions, so step 3 will not be affected. It will be refactored to:
1. Check that the `_dev/deploy/k8s` directory given as argument exists. (Same as before.)
2. Check whether a `kustomization.yaml` file exists; if it does not, deploy the existing `.yaml` manifests.

Outcome
If we update the custom definitions folder to have the right `kustomization.yaml` file - i.e. one that deploys kube-state-metrics and all the needed resources - the logs for each `state_*` data stream would be like this:

Related issues