Runner Pod not created #178

Closed
ghost opened this issue Jan 16, 2023 · 4 comments
Labels
question (Further information is requested)


ghost commented Jan 16, 2023

Brief summary

I was following this tutorial:
https://k6.io/blog/running-distributed-tests-on-k8s/#cloning-the-repository
I had nearly finished, but the k6-sample-starter-xxxxx pod does not spawn.

I used the ConfigMap from the example and also the resource description from the example.

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-worker
  namespace: default
spec:
  parallelism: 2
  script:
    configMap:
      name: test-loadtest
      file: test.js

After a while of debugging I found:

kubectl logs k6-operator-controller-manager-5c464c798-6fvjb -n k6-operator-system -c manager
[...]
ERROR   controllers.K6  failed to get status from k6-worker-service-1   {"k6": "default/k6-worker", "error": "Get \"http://k6-worker-service-1.default.svc.cluster.local:6565/v1/status\": dial tcp: lookup k6-worker-service-1.default.svc.cluster.local on 10.96.0.10:53: no such host"}

Does anyone have an idea what the problem could be?
I deployed the ConfigMap and the K6 instance in the default namespace, as described in the tutorial.

To me this looks like a networking issue. Does k6 require specific firewall/ingress/egress rules to communicate within the cluster?
I would be grateful for any help.
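
For reference, one quick way to check whether the Service from the error above exists and whether its FQDN resolves at all (the busybox image and the dns-test pod name are only placeholders, not anything the operator creates):

# check whether the k6-worker-service-1 Service exists in the namespace
kubectl get svc -n default

# try to resolve the Service FQDN from a throwaway pod
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -n default -- \
  nslookup k6-worker-service-1.default.svc.cluster.local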

k6-operator version or image

v0.0.8

K6 YAML

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-worker
  namespace: default
spec:
  parallelism: 2
  script:
    configMap:
      name: test-loadtest
      file: test.js

Other environment details (if applicable)

No response

Steps to reproduce the problem

cd k6-operator && git checkout v0.0.8
make deploy
kubectl create cm test-loadtest --from-file test.js
kubectl apply -f k6-worker.yml

Expected behaviour

Starter and initializer pods get spawned, as well as worker pods, and the load test gets executed.

Actual behaviour

Initializer and worker pods get spawned, and the k6-operator (manager) logs errors.
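
For completeness, one way to list what was actually created in the namespace (the k6-worker name prefix is assumed from the CR above):

# show the jobs and pods the operator created for this test run
kubectl get jobs,pods -n default | grep k6-worker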

ghost added the bug (Something isn't working) label on Jan 16, 2023

yorugac commented Jan 20, 2023

Hi @hannes-unite, could you include info on what cluster you're using? Right now, this question seems to be more about setup than an actual bug. For example, there were similar issues on Minikube:
https://community.k6.io/t/k6-dont-work-with-ingress-on-minikube/3696/16


ghost commented Jan 23, 2023

Thanks for your reply!
I am running a managed cluster that only contains the k6-operator.
I've checked the points from the DNS debugging guide (https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/):

  1. Created a simple Pod to use as a test environment and checked CoreDNS: everything works fine.
  2. Tested the DNS settings for the operator pod:
$ kubectl exec -i -t k6-operator-controller-manager-* -n k6-operator-system -- nslookup kubernetes.default
Defaulted container "kube-rbac-proxy" out of: kube-rbac-proxy, manager
Server:         10.96.0.10
Address:        10.96.0.10:53

** server can't find kubernetes.default: NXDOMAIN

** server can't find kubernetes.default: NXDOMAIN

command terminated with exit code 1

So the k6-operator pod can't resolve names via the cluster DNS(?)
Also, the DNS logs look kind of weird:

$ kubectl logs --namespace=kube-system -l k8s-app=kube-dns
[INFO] 192.168.192.193:35014 - 22745 "AAAA IN k6-worker-service-1.default.svc.cluster.local.k6-operator-system.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 137 false 512" NXDOMAIN qr,aa,rd 341 0.000334s
[INFO] 192.168.192.193:50648 - 36565 "A IN k6-worker-service-1.default.svc.cluster.local.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 114 false 512" NXDOMAIN qr,aa,rd 318 0.000237584s
[INFO] 192.168.192.193:40293 - 31107 "AAAA IN k6-worker-service-1.default.svc.cluster.local.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 118 false 512" NXDOMAIN qr,aa,rd 322 0.00022496s
[INFO] 192.168.192.193:46102 - 24329 "A IN k6-worker-service-1.default.svc.cluster.local.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 118 false 512" NXDOMAIN qr,aa,rd 322 0.000383681s
[INFO] 192.168.192.193:53169 - 29545 "A IN k6-worker-service-1.default.svc.cluster.local.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 114 false 512" NXDOMAIN qr,aa,rd 318 0.000121947s
[INFO] 192.168.156.197:44365 - 21810 "A IN kubernetes.default.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 99 false 512" NXDOMAIN qr,aa,rd 303 0.000215539s
[INFO] 192.168.156.197:45820 - 60506 "A IN kubernetes.default.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 99 false 512" NXDOMAIN qr,aa,rd 303 0.000211776s
[INFO] 192.168.156.197:53005 - 55052 "A IN kubernetes.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 91 false 512" NOERROR qr,aa,rd 180 0.000180072s
[INFO] 192.168.156.197:39273 - 47315 "A IN kubernetes.default.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 99 false 512" NXDOMAIN qr,aa,rd 303 0.000254902s
[INFO] 192.168.156.197:57330 - 48287 "A IN kubernetes.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 91 false 512" NOERROR qr,aa,rd 180 0.000203166s
[INFO] 192.168.192.193:35901 - 16901 "AAAA IN k6-worker-service-1.default.svc.cluster.local.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 114 false 512" NXDOMAIN qr,aa,rd 318 0.000079062s
[INFO] 192.168.192.193:43683 - 36204 "AAAA IN k6-worker-service-1.default.svc.cluster.local. udp 63 false 512" NXDOMAIN qr,aa,rd,ra 138 0.000074199s
[INFO] 192.168.192.193:38384 - 48948 "A IN k6-worker-service-1.default.svc.cluster.local. udp 63 false 512" NXDOMAIN qr,aa,rd,ra 138 0.0000886s
[INFO] 192.168.192.193:34505 - 19590 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000706773s
[INFO] 192.168.192.193:34505 - 19179 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000722183s
[INFO] 192.168.156.197:46990 - 37843 "A IN kubernetes.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 91 false 512" NOERROR qr,aa,rd 180 0.000176065s
[INFO] 192.168.156.197:48075 - 9734 "A IN kubernetes.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 91 false 512" NOERROR qr,aa,rd 180 0.00026773s
[INFO] 192.168.156.197:42420 - 238 "A IN kubernetes.default.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 99 false 512" NXDOMAIN qr,aa,rd 303 0.000261169s
[INFO] 192.168.192.193:41487 - 61320 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000981266s
[INFO] 192.168.192.193:41487 - 61736 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000776043s

This is the resolv.conf from the operator pod:

Defaulted container "kube-rbac-proxy" out of: kube-rbac-proxy, manager
search k6-operator-system.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local
nameserver 10.96.0.10
options ndots:5
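
Note that the search domains above are under f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local rather than the default cluster.local, which would explain the NXDOMAIN responses: the operator queries k6-worker-service-1.default.svc.cluster.local, a name that does not exist in this cluster, while the same Service under the cluster's actual domain should resolve. A minimal check (busybox image and dns-test pod name are only placeholders):

# resolve the runner Service under the cluster's actual DNS domain (taken from resolv.conf above)
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -n default -- \
  nslookup k6-worker-service-1.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local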

yorugac added the question (Further information is requested) label and removed the bug (Something isn't working) label on Jan 26, 2023

ghost commented Feb 1, 2023

After further research and some help from our cloud hosting provider, it turned out that it is the same problem as described and fixed here: #146.
So I guess this ticket can be closed, and we hope you find a promising solution for this problem.
Thanks again for your help :)

ghost closed this as completed on Feb 1, 2023

yorugac commented Feb 2, 2023

@hannes-unite Thanks for clarifying the use case you encountered! I've opened the umbrella issue #186 - so fixing this is definitely on our roadmap.
