Integration tests for k8s 1.27 #719

Merged — 22 commits merged into main from cs/k8s-v127 on May 25, 2023
Conversation

@csongnr (Contributor) commented Apr 13, 2023

E2E tests were run on minikube against Kubernetes 1.27.0-rc.0 rather than 1.27.0, since minikube has not yet officially released support for 1.27.0.
The release candidate should be an acceptable alternative for now, since we are past the SLA for supporting 1.27 officially. When official support is released, we will come back and update the tests.
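
For anyone reproducing this locally, a minimal sketch of bringing up such a cluster on minikube (the exact invocation is an assumption, not taken from the run below; any recent minikube that accepts pre-release Kubernetes version tags should work):

~ minikube start --kubernetes-version=v1.27.0-rc.0
~ kubectl get nodes   # VERSION column should report v1.27.0-rc.0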

~ docker pull kindest/node:v1.27.0                                                                                                                                                                              
v1.27.0: Pulling from kindest/node
2f52feee3a52: Pull complete 
5159c2d9f7c0: Pull complete 
Digest: sha256:c6b22e613523b1af67d4bc8a0c38a4c3ea3a2b8fbc5b367ae36345c9cb844518
Status: Downloaded newer image for kindest/node:v1.27.0
docker.io/kindest/node:v1.27.0
~ docker images                                                                                                                                                                                                
REPOSITORY                             TAG         IMAGE ID       CREATED         SIZE
kindest/node                           v1.27.0     33174a6dbb5d   27 hours ago    874MB
~ kind create cluster --image=33174a6dbb5d    
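
As a hedged aside, kind also accepts the node image by tag rather than by image ID, and the resulting cluster version can be checked before running the suite (these exact commands are an assumption, not part of the run above):

~ kind create cluster --image=kindest/node:v1.27.0
~ kubectl get nodes   # VERSION column should show v1.27.0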
~/nri-kubernetes [cs/k8s-v127] make test                                                                                                                                                                        17:40:07
[test] Running unit tests
?   	github.com/newrelic/nri-kubernetes/v3/cmd/kubernetes-static	[no test files]
ok  	github.com/newrelic/nri-kubernetes/v3/cmd/nri-kubernetes	2.062s
ok  	github.com/newrelic/nri-kubernetes/v3/internal/config	0.664s
ok  	github.com/newrelic/nri-kubernetes/v3/internal/discovery	5.177s
?   	github.com/newrelic/nri-kubernetes/v3/internal/logutil	[no test files]
ok  	github.com/newrelic/nri-kubernetes/v3/internal/storer	4.515s
?   	github.com/newrelic/nri-kubernetes/v3/internal/testutil	[no test files]
ok  	github.com/newrelic/nri-kubernetes/v3/internal/testutil/asserter	0.893s
?   	github.com/newrelic/nri-kubernetes/v3/internal/testutil/asserter/exclude	[no test files]
?   	github.com/newrelic/nri-kubernetes/v3/src/client	[no test files]
ok  	github.com/newrelic/nri-kubernetes/v3/src/controlplane	7.747s
ok  	github.com/newrelic/nri-kubernetes/v3/src/controlplane/client	5.475s
ok  	github.com/newrelic/nri-kubernetes/v3/src/controlplane/client/authenticator	1.701s
ok  	github.com/newrelic/nri-kubernetes/v3/src/controlplane/client/connector	0.953s
ok  	github.com/newrelic/nri-kubernetes/v3/src/controlplane/discoverer	1.678s
?   	github.com/newrelic/nri-kubernetes/v3/src/controlplane/grouper	[no test files]
ok  	github.com/newrelic/nri-kubernetes/v3/src/data	1.923s
ok  	github.com/newrelic/nri-kubernetes/v3/src/definition	1.299s
?   	github.com/newrelic/nri-kubernetes/v3/src/integration	[no test files]
ok  	github.com/newrelic/nri-kubernetes/v3/src/integration/prober	6.646s
ok  	github.com/newrelic/nri-kubernetes/v3/src/integration/sink	4.119s
ok  	github.com/newrelic/nri-kubernetes/v3/src/ksm	1.578s
ok  	github.com/newrelic/nri-kubernetes/v3/src/ksm/client	1.796s
ok  	github.com/newrelic/nri-kubernetes/v3/src/ksm/grouper	0.945s
ok  	github.com/newrelic/nri-kubernetes/v3/src/ksm/metric	0.839s
ok  	github.com/newrelic/nri-kubernetes/v3/src/kubelet	1.284s
ok  	github.com/newrelic/nri-kubernetes/v3/src/kubelet/client	0.885s
ok  	github.com/newrelic/nri-kubernetes/v3/src/kubelet/grouper	0.519s
ok  	github.com/newrelic/nri-kubernetes/v3/src/kubelet/metric	0.397s
ok  	github.com/newrelic/nri-kubernetes/v3/src/metric	0.476s
?   	github.com/newrelic/nri-kubernetes/v3/src/network	[no test files]
ok  	github.com/newrelic/nri-kubernetes/v3/src/prometheus	0.573s
ok  	github.com/newrelic/nri-kubernetes/v3/src/scrape	5.145s

e2e tests:

INFO[0000] running e2e                                  
DEBU[0000] parsing the content of the spec file         
DEBU[0000] return with settings                         
DEBU[0000] validating the spec definition               
DEBU[0000] [scenario]: This scenario will verify that metrics from a k8s Cluster are correctly collected.
, [Tag]: e2e-test-st-yqhvh 
DEBU[0000] execute command 'helm dependency update ../charts/internal/e2e-resources' from path 'e2e' 
::group::helm dependency update ../charts/internal/e2e-resources
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "newrelic-prometheus" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "newrelic" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading kube-state-metrics from repo https://prometheus-community.github.io/helm-charts
Deleting outdated charts
::endgroup::
DEBU[0002] execute command 'helm dependency update ../charts/newrelic-infrastructure' from path 'e2e' 
::group::helm dependency update ../charts/newrelic-infrastructure
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "newrelic-prometheus" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "newrelic" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common-library from repo https://helm-charts.newrelic.com
Deleting outdated charts
::endgroup::
DEBU[0004] execute command 'function ver { printf $((10#$(printf "%03d%03d" $(echo "$1" | tr '.' ' ')))); } && K8S_VERSION=$(kubectl version --short 2>&1 | grep 'Server Version' | awk -F' v' '{ print $2; }' | awk -F. '{ print $1"."$2; }') && if [[ $(ver $K8S_VERSION) -lt $(ver "1.25") ]]; then KSM_IMAGE_VERSION="v2.6.0"; else KSM_IMAGE_VERSION="v2.7.0"; fi && echo "Will use KSM image version ${KSM_IMAGE_VERSION}" && helm upgrade --install ${SCENARIO_TAG}-resources -n nr-${SCENARIO_TAG} --create-namespace ../charts/internal/e2e-resources --set persistentVolume.enabled=true --set kube-state-metrics.image.tag=${KSM_IMAGE_VERSION} --set global.nrStaging=true' from path 'e2e' 
::group::function ver { printf $((10#$(printf "%03d%03d" $(echo "$1" | tr '.' ' ')))); } && K8S_VERSION=$(kubectl version --short 2>&1 | grep 'Server Version' | awk -F' v' '{ print $2; }' | awk -F. '{ print $1"."$2; }') && if [[ $(ver $K8S_VERSION) -lt $(ver "1.25") ]]; then KSM_IMAGE_VERSION="v2.6.0"; else KSM_IMAGE_VERSION="v2.7.0"; fi && echo "Will use KSM image version ${KSM_IMAGE_VERSION}" && helm upgrade --install ${SCENARIO_TAG}-resources -n nr-${SCENARIO_TAG} --create-namespace ../charts/internal/e2e-resources --set persistentVolume.enabled=true --set kube-state-metrics.image.tag=${KSM_IMAGE_VERSION} --set global.nrStaging=true
Will use KSM image version v2.7.0
Release "e2e-test-st-yqhvh-resources" does not exist. Installing it now.
NAME: e2e-test-st-yqhvh-resources
LAST DEPLOYED: Thu May 18 17:32:29 2023
NAMESPACE: nr-e2e-test-st-yqhvh
STATUS: deployed
REVISION: 1
TEST SUITE: None
::endgroup::
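The ver helper in the command above zero-pads the major and minor components so a plain integer comparison can pick the kube-state-metrics image tag. Run standalone, it behaves like this (a sketch derived from the same function):

function ver { printf $((10#$(printf "%03d%03d" $(echo "$1" | tr '.' ' ')))); }
ver "1.25"   # prints 1025
ver "1.27"   # prints 1027, so 1.27 >= 1.25 and KSM image v2.7.0 is selected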
DEBU[0005] execute command 'helm upgrade --install ${SCENARIO_TAG} -n nr-${SCENARIO_TAG} --create-namespace ../charts/newrelic-infrastructure --values e2e-values.yml --set global.licenseKey=${LICENSE_KEY} --set global.cluster=${SCENARIO_TAG} --set global.nrStaging=true' from path 'e2e' 
::group::helm upgrade --install ${SCENARIO_TAG} -n nr-${SCENARIO_TAG} --create-namespace ../charts/newrelic-infrastructure --values e2e-values.yml --set global.licenseKey=${LICENSE_KEY} --set global.cluster=${SCENARIO_TAG} --set global.nrStaging=true
Release "e2e-test-st-yqhvh" does not exist. Installing it now.
NAME: e2e-test-st-yqhvh
LAST DEPLOYED: Thu May 18 17:32:30 2023
NAMESPACE: nr-e2e-test-st-yqhvh
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:

::endgroup::
DEBU[0005] parsing the content of the metrics source file 
WARN[0006] Error detected                                iteration=0
ERRO[0006] finding keyset: query did not return any result: SELECT keyset() from Metric where k8s.clusterName = 'e2e-test-st-yqhvh' 
DEBU[0066] parsing the content of the metrics source file 
WARN[0067] Error detected                                iteration=1
ERRO[0067] finding keyset: query did not return any result: SELECT keyset() from Metric where k8s.clusterName = 'e2e-test-st-yqhvh' 
DEBU[0127] parsing the content of the metrics source file 
DEBU[0128] parsing the content of the except metrics source file: e2e/${EXCEPTIONS_SOURCE_FILE} 
WARN[0128] Error detected                                iteration=2
ERRO[0128] finding Metric: k8s.apiserver.currentInflightRequestsMutating 
ERRO[0128] finding Metric: k8s.apiserver.currentInflightRequestsReadOnly 
ERRO[0128] finding Metric: k8s.apiserver.etcd.objectCount_* 
ERRO[0128] finding Metric: k8s.job.failedPods           
DEBU[0188] parsing the content of the metrics source file 
DEBU[0189] parsing the content of the except metrics source file: e2e/${EXCEPTIONS_SOURCE_FILE} 
WARN[0189] Error detected                                iteration=3
ERRO[0189] finding Metric: k8s.apiserver.currentInflightRequestsMutating 
ERRO[0189] finding Metric: k8s.apiserver.currentInflightRequestsReadOnly 
ERRO[0189] finding Metric: k8s.apiserver.etcd.objectCount_* 
ERRO[0189] finding Metric: k8s.job.failedPods           
DEBU[0249] parsing the content of the metrics source file 
DEBU[0250] parsing the content of the except metrics source file: e2e/${EXCEPTIONS_SOURCE_FILE} 
WARN[0250] Error detected                                iteration=4
ERRO[0250] finding Metric: k8s.apiserver.currentInflightRequestsMutating 
ERRO[0250] finding Metric: k8s.apiserver.currentInflightRequestsReadOnly 
ERRO[0250] finding Metric: k8s.apiserver.etcd.objectCount_* 
ERRO[0250] finding Metric: k8s.job.failedPods           
DEBU[0310] execute command 'kubectl logs -l app.kubernetes.io/name=newrelic-infrastructure -n nr-${SCENARIO_TAG} --all-containers --prefix=true' from path 'e2e' 
::group::kubectl logs -l app.kubernetes.io/name=newrelic-infrastructure -n nr-${SCENARIO_TAG} --all-containers --prefix=true
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/kubelet] time="2023-05-19T00:32:31Z" level=info msg="Waiting for agent container to be ready..."
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/kubelet] time="2023-05-19T00:33:31Z" level=info msg="New Relic Kubernetes integration Version: dev, Platform: linux/arm64, GoVersion: go1.19.1, GitCommit: 0b82ff6e70418abe5864b9a53fcdf4c8987f431e, BuildDate: Thu May 18 17:28:54 PDT 2023\n"
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/kubelet] time="2023-05-19T00:33:31Z" level=info msg="Trying to connect to kubelet locally with scheme=\"https\" hostURL=\"192.168.49.2:10250\""
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/kubelet] time="2023-05-19T00:33:31Z" level=info msg="Connected to Kubelet through nodeIP with scheme=\"https\" hostURL=\"192.168.49.2:10250\""
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/agent] time="2023-05-19T00:34:28Z" level=warning msg="instantiating docker sampler process decorator" component=ProcessSampler error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/agent] time="2023-05-19T00:34:48Z" level=warning msg="instantiating docker sampler process decorator" component=ProcessSampler error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/agent] time="2023-05-19T00:35:08Z" level=warning msg="instantiating docker sampler process decorator" component=ProcessSampler error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/agent] time="2023-05-19T00:35:28Z" level=warning msg="instantiating docker sampler process decorator" component=ProcessSampler error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/agent] time="2023-05-19T00:35:48Z" level=warning msg="instantiating docker sampler process decorator" component=ProcessSampler error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/agent] time="2023-05-19T00:36:08Z" level=warning msg="instantiating docker sampler process decorator" component=ProcessSampler error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/agent] time="2023-05-19T00:36:28Z" level=warning msg="instantiating docker sampler process decorator" component=ProcessSampler error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/agent] time="2023-05-19T00:36:48Z" level=warning msg="instantiating docker sampler process decorator" component=ProcessSampler error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/agent] time="2023-05-19T00:37:08Z" level=warning msg="instantiating docker sampler process decorator" component=ProcessSampler error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[pod/e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj/agent] time="2023-05-19T00:37:28Z" level=warning msg="instantiating docker sampler process decorator" component=ProcessSampler error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/controlplane] time="2023-05-19T00:34:26Z" level=info msg="Waiting for agent container to be ready..."
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/controlplane] time="2023-05-19T00:34:36Z" level=info msg="New Relic Kubernetes integration Version: dev, Platform: linux/arm64, GoVersion: go1.19.1, GitCommit: 0b82ff6e70418abe5864b9a53fcdf4c8987f431e, BuildDate: Thu May 18 17:28:54 PDT 2023\n"
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/forwarder] time="2023-05-19T00:34:18Z" level=info msg="Checking network connectivity..." component=AgentService service=newrelic-infra
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/forwarder] time="2023-05-19T00:34:28Z" level=warning msg="URL error detected. May be a configuration problem or a network connectivity issue." component=AgentService error="Head \"https://staging-infra-api.newrelic.com\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" service=newrelic-infra
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/forwarder] time="2023-05-19T00:34:28Z" level=warning msg="Collector endpoint not reachable, retrying..." collector_url="https://staging-infra-api.newrelic.com" component=AgentService error="Head \"https://staging-infra-api.newrelic.com\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" service=newrelic-infra
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/forwarder] time="2023-05-19T00:34:29Z" level=info msg=Initializing component=AgentService elapsedTime=11.272346588s service=newrelic-infra version=1.41.0
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/forwarder] time="2023-05-19T00:34:36Z" level=info msg="New Relic infrastructure agent is running." component=AgentService elapsedTime=17.90768455s service=newrelic-infra
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/forwarder] time="2023-05-19T00:34:36Z" level=info msg="Starting up agent..." component=Agent
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/forwarder] time="2023-05-19T00:34:36Z" level=warning msg="failed to connect to DBus. make sure systemd is present." component=NotificationHandler
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/forwarder] time="2023-05-19T00:34:36Z" level=warning msg="failed to init shutdown monitor" component=NotificationHandler error="no systemd found"
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/forwarder] time="2023-05-19T00:34:36Z" level=info msg="inventory submission disabled" component=Agent
[pod/e2e-test-st-yqhvh-nrk8s-controlplane-khfkx/forwarder] time="2023-05-19T00:34:36Z" level=warning msg="httpapi readiness probe failed" component=api error="Get \"http://localhost:8001/v1/data/ready\": dial tcp 127.0.0.1:8001: connect: connection refused"
[pod/e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph/ksm] time="2023-05-19T00:32:31Z" level=info msg="Waiting for agent container to be ready..."
[pod/e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph/ksm] time="2023-05-19T00:33:31Z" level=info msg="New Relic Kubernetes integration Version: dev, Platform: linux/arm64, GoVersion: go1.19.1, GitCommit: 0b82ff6e70418abe5864b9a53fcdf4c8987f431e, BuildDate: Thu May 18 17:28:54 PDT 2023\n"
[pod/e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph/forwarder] time="2023-05-19T00:33:21Z" level=info msg="Creating service..."
[pod/e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph/forwarder] time="2023-05-19T00:33:21Z" level=info msg="runtime configuration" agentUser=nri-agent component="New Relic Infrastructure Agent" executablePath=/usr/bin/newrelic-infra maxProcs=1 pluginDir="[/etc/newrelic-infra/integrations.d /var/db/newrelic-infra/integrations.d]"
[pod/e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph/forwarder] time="2023-05-19T00:33:21Z" level=info msg="Checking network connectivity..." component=AgentService service=newrelic-infra
[pod/e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph/forwarder] time="2023-05-19T00:33:21Z" level=info msg=Initializing component=AgentService elapsedTime=219.401542ms service=newrelic-infra version=1.41.0
[pod/e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph/forwarder] time="2023-05-19T00:33:30Z" level=info msg="New Relic infrastructure agent is running." component=AgentService elapsedTime=9.105103921s service=newrelic-infra
[pod/e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph/forwarder] time="2023-05-19T00:33:30Z" level=info msg="Starting up agent..." component=Agent
[pod/e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph/forwarder] time="2023-05-19T00:33:30Z" level=warning msg="failed to connect to DBus. make sure systemd is present." component=NotificationHandler
[pod/e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph/forwarder] time="2023-05-19T00:33:30Z" level=warning msg="failed to init shutdown monitor" component=NotificationHandler error="no systemd found"
[pod/e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph/forwarder] time="2023-05-19T00:33:30Z" level=info msg="inventory submission disabled" component=Agent
::endgroup::
DEBU[0310] execute command 'kubectl get pods -n nr-${SCENARIO_TAG}' from path 'e2e' 
::group::kubectl get pods -n nr-${SCENARIO_TAG}
NAME                                                              READY   STATUS             RESTARTS       AGE
e2e-test-st-yqhvh-nrk8s-controlplane-khfkx                        2/2     Running            1 (3m9s ago)   5m4s
e2e-test-st-yqhvh-nrk8s-ksm-7cfc5c6684-5tmph                      2/2     Running            0              5m4s
e2e-test-st-yqhvh-nrk8s-kubelet-qm8gj                             2/2     Running            0              5m4s
e2e-test-st-yqhvh-resources-container-creating                    0/1     CrashLoopBackOff   5 (35s ago)    5m4s
e2e-test-st-yqhvh-resources-cronjob-28074275-pksk2                0/1     Completed          0              2m34s
e2e-test-st-yqhvh-resources-cronjob-28074276-582gc                0/1     Completed          0              94s
e2e-test-st-yqhvh-resources-cronjob-28074277-p5qzq                0/1     Completed          0              34s
e2e-test-st-yqhvh-resources-depl-6df75fff8d-2lv2r                 1/1     Running            0              5m4s
e2e-test-st-yqhvh-resources-depl-6df75fff8d-p86cq                 1/1     Running            0              5m4s
e2e-test-st-yqhvh-resources-ds-znw4d                              1/1     Running            0              5m4s
e2e-test-st-yqhvh-resources-failjob-6mrcb                         0/1     Error              0              5m4s
e2e-test-st-yqhvh-resources-failjob-b5gf2                         0/1     Error              0              2m40s
e2e-test-st-yqhvh-resources-failjob-mvmcr                         0/1     Error              0              2m40s
e2e-test-st-yqhvh-resources-failjob-sg56r                         0/1     Error              0              4m5s
e2e-test-st-yqhvh-resources-failjob-tkxw7                         0/1     Error              0              4m5s
e2e-test-st-yqhvh-resources-failjob-tpcl5                         0/1     Error              0              5m4s
e2e-test-st-yqhvh-resources-hpa-78585f866d-8rp45                  1/1     Running            0              5m4s
e2e-test-st-yqhvh-resources-kube-state-metrics-54d465dfc7-tlt87   1/1     Running            0              5m4s
e2e-test-st-yqhvh-resources-pending                               0/1     Pending            0              5m4s
e2e-test-st-yqhvh-resources-statefulset-0                         1/1     Running            0              5m4s
e2e-test-st-yqhvh-resources-statefulset-1                         1/1     Running            0              4m33s
::endgroup::
DEBU[0310] execute command 'helm delete ${SCENARIO_TAG}-resources -n nr-${SCENARIO_TAG}' from path 'e2e' 
::group::helm delete ${SCENARIO_TAG}-resources -n nr-${SCENARIO_TAG}
release "e2e-test-st-yqhvh-resources" uninstalled
::endgroup::
DEBU[0310] execute command 'helm delete ${SCENARIO_TAG} -n nr-${SCENARIO_TAG}' from path 'e2e' 
::group::helm delete ${SCENARIO_TAG} -n nr-${SCENARIO_TAG}
release "e2e-test-st-yqhvh" uninstalled
::endgroup::
FATA[0310] after 5 attempts, last errors: [finding Metric: k8s.apiserver.currentInflightRequestsMutating finding Metric: k8s.apiserver.currentInflightRequestsReadOnly finding Metric: k8s.apiserver.etcd.objectCount_* finding Metric: k8s.job.failedPods] 
exit status 1

@csongnr requested a review from a team as a code owner April 13, 2023 01:34
@csongnr changed the title Cs/k8s v127 to WIP don't merge Apr 13, 2023
@csongnr changed the title WIP don't merge to Integration tests for k8s 1.27 (WIP) Apr 13, 2023
@csongnr marked this pull request as draft April 13, 2023
@htroisi (Contributor) commented Apr 21, 2023

You'll want to update this file too: internal/testutil/testutil.go
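
A quick way to spot the version literals in that file that would need a new 1.27 entry (assuming the supported versions appear there as plain literals):

~ grep -n "1\.2[0-9]" internal/testutil/testutil.go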

@csongnr marked this pull request as ready for review May 22, 2023 18:08
@juanjjaramillo merged commit 86576f2 into main May 25, 2023
19 checks passed
@juanjjaramillo deleted the cs/k8s-v127 branch May 25, 2023 23:34
@juanjjaramillo changed the title Integration tests for k8s 1.27 (WIP) to Integration tests for k8s 1.27 May 25, 2023