Runner Pod not created #178
Comments
Hi @hannes-unite, could you include info on what cluster you're using? Right now, this question seems to be more about setup than an actual bug. For example, there were similar issues on Minikube.
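A quick way to sanity-check cluster DNS is the approach from the Kubernetes "Debugging DNS Resolution" docs (a sketch; the dnsutils manifest comes from the upstream examples):

$ kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
$ kubectl exec -i -t dnsutils -- nslookup kubernetes.default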
Thanks for your reply!
$ kubectl exec -i -t k6-operator-controller-manager-* -n k6-operator-system -- nslookup kubernetes.default
Defaulted container "kube-rbac-proxy" out of: kube-rbac-proxy, manager
Server: 10.96.0.10
Address: 10.96.0.10:53
** server can't find kubernetes.default: NXDOMAIN
** server can't find kubernetes.default: NXDOMAIN
command terminated with exit code 1

So the k6-operator can't communicate with the DNS(?)

$ kubectl logs --namespace=kube-system -l k8s-app=kube-dns
[INFO] 192.168.192.193:35014 - 22745 "AAAA IN k6-worker-service-1.default.svc.cluster.local.k6-operator-system.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 137 false 512" NXDOMAIN qr,aa,rd 341 0.000334s
[INFO] 192.168.192.193:50648 - 36565 "A IN k6-worker-service-1.default.svc.cluster.local.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 114 false 512" NXDOMAIN qr,aa,rd 318 0.000237584s
[INFO] 192.168.192.193:40293 - 31107 "AAAA IN k6-worker-service-1.default.svc.cluster.local.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 118 false 512" NXDOMAIN qr,aa,rd 322 0.00022496s
[INFO] 192.168.192.193:46102 - 24329 "A IN k6-worker-service-1.default.svc.cluster.local.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 118 false 512" NXDOMAIN qr,aa,rd 322 0.000383681s
[INFO] 192.168.192.193:53169 - 29545 "A IN k6-worker-service-1.default.svc.cluster.local.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 114 false 512" NXDOMAIN qr,aa,rd 318 0.000121947s
[INFO] 192.168.156.197:44365 - 21810 "A IN kubernetes.default.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 99 false 512" NXDOMAIN qr,aa,rd 303 0.000215539s
[INFO] 192.168.156.197:45820 - 60506 "A IN kubernetes.default.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 99 false 512" NXDOMAIN qr,aa,rd 303 0.000211776s
[INFO] 192.168.156.197:53005 - 55052 "A IN kubernetes.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 91 false 512" NOERROR qr,aa,rd 180 0.000180072s
[INFO] 192.168.156.197:39273 - 47315 "A IN kubernetes.default.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 99 false 512" NXDOMAIN qr,aa,rd 303 0.000254902s
[INFO] 192.168.156.197:57330 - 48287 "A IN kubernetes.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 91 false 512" NOERROR qr,aa,rd 180 0.000203166s
[INFO] 192.168.192.193:35901 - 16901 "AAAA IN k6-worker-service-1.default.svc.cluster.local.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 114 false 512" NXDOMAIN qr,aa,rd 318 0.000079062s
[INFO] 192.168.192.193:43683 - 36204 "AAAA IN k6-worker-service-1.default.svc.cluster.local. udp 63 false 512" NXDOMAIN qr,aa,rd,ra 138 0.000074199s
[INFO] 192.168.192.193:38384 - 48948 "A IN k6-worker-service-1.default.svc.cluster.local. udp 63 false 512" NXDOMAIN qr,aa,rd,ra 138 0.0000886s
[INFO] 192.168.192.193:34505 - 19590 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000706773s
[INFO] 192.168.192.193:34505 - 19179 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000722183s
[INFO] 192.168.156.197:46990 - 37843 "A IN kubernetes.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 91 false 512" NOERROR qr,aa,rd 180 0.000176065s
[INFO] 192.168.156.197:48075 - 9734 "A IN kubernetes.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 91 false 512" NOERROR qr,aa,rd 180 0.00026773s
[INFO] 192.168.156.197:42420 - 238 "A IN kubernetes.default.default.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local. udp 99 false 512" NXDOMAIN qr,aa,rd 303 0.000261169s
[INFO] 192.168.192.193:41487 - 61320 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000981266s
[INFO] 192.168.192.193:41487 - 61736 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000776043s

This is the resolv.conf from the operator pod:
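The pasted file contents didn't survive here. It can be re-dumped by targeting the manager container explicitly (the -c flag avoids defaulting to kube-rbac-proxy, as happened with the nslookup above; this assumes the image ships cat, which distroless images may not):

$ kubectl exec -i -t k6-operator-controller-manager-* -n k6-operator-system -c manager -- cat /etc/resolv.conf

From the search-path expansions visible in the CoreDNS log, the file presumably looked roughly like this (the ndots value is the usual Kubernetes default, assumed here):

search k6-operator-system.svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local svc.f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local
nameserver 10.96.0.10
options ndots:5

The notable detail is that the search domains end in f887f546-….cluster.local rather than plain cluster.local, so the hardcoded k6-worker-service-1.default.svc.cluster.local name gets NXDOMAIN both via search expansion and as an absolute query.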
After further research and some help from our cloud hosting provider, it turned out that it is the same problem as described and fixed here: #146
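For anyone hitting the same symptom: the CoreDNS log above shows names under the f887f546-… domain resolving with NOERROR while the hardcoded *.svc.cluster.local names return NXDOMAIN, i.e. the cluster uses a custom DNS domain. One way to confirm the cluster domain (a sketch, assuming CoreDNS is in use):

$ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

Look for the kubernetes plugin stanza in the output, e.g. a line such as:

kubernetes f887f546-47e7-4858-80cc-f63f3d04f754.cluster.local in-addr.arpa ip6.arpa { ... }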
@hannes-unite Thanks for clarifying the use case you encountered! I've opened the umbrella issue #186, so fixing this is definitely on our roadmap.
Brief summary
I was following this tutorial
https://k6.io/blog/running-distributed-tests-on-k8s/#cloning-the-repository
and had nearly finished, but the k6-sample-starter-xxxxx pod does not spawn.
I used the ConfigMap from the example as well as the resource description from the example.
After a while of debugging I found:
Does anyone have an idea what the problem could be?
I deployed the ConfigMap and the K6 instance in the default namespace, as described in the tutorial.
To me it looks like a networking issue. Does k6 require specific firewall/ingress/egress rules to communicate within the cluster?
I would be grateful for any help.
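To illustrate that last question: if the cluster enforced a default-deny egress policy, pods would need an explicit allowance for in-cluster DNS along these lines (a hypothetical sketch, not something k6 is documented to require; the policy name is made up, and the namespaceSelector assumes a Kubernetes version that auto-labels namespaces with kubernetes.io/metadata.name):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress   # hypothetical name
  namespace: default
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53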
k6-operator version or image
v0.0.8
K6 YAML
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-worker
  namespace: default
spec:
  parallelism: 2
  script:
    configMap:
      name: test-loadtest
      file: test.js
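One thing worth double-checking with this spec: the referenced ConfigMap must exist in the same namespace as the K6 resource and contain a test.js key. A quick sanity check (sketch):

$ kubectl get configmap test-loadtest -n default -o jsonpath='{.data.test\.js}'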
Other environment details (if applicable)
No response
Steps to reproduce the problem
cd k6-operator && git checkout v0.0.8
make deploy
kubectl create cm test-loadtest --from-file test.js
kubectl apply -f k6-worker.yml
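To watch what the operator actually creates after the apply, something like this helps (a sketch; the deployment name is inferred from the controller pod name earlier in the thread):

$ kubectl get k6,jobs,pods -n default
$ kubectl logs -n k6-operator-system deploy/k6-operator-controller-manager -c manager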
Expected behaviour
starter and initializer pods get spawned, as well as worker pods, and the load test gets executed
Actual behaviour
initializer and worker pods get spawned, and the k6-operator (manager) logs errors